Hugging Face Daily Papers · 3 min read

PanoWorld: Towards Spatial Supersensing in 360° Panorama World

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

Papers
arxiv:2605.13169

PanoWorld: Towards Spatial Supersensing in 360° Panorama World

Published on May 13
Submitted by xichen on May 15
Authors: Changpeng Wang, Xin Lin, Junhan Liu, Yuheng Liu, Zhen Wang, Donglian Qi, Yunfeng Yan, Xi Chen (Zhejiang University)

Abstract

PanoWorld with spherical spatial cross-attention enables panoramic reasoning by leveraging equirectangular projection structure and geometry-aware supervision.

AI-generated summary

Multimodal large language models (MLLMs) still struggle with spatial understanding under the dominant perspective-image paradigm, which inherits the narrow field of view of human-like perception. For navigation, robotic search, and 3D scene understanding, 360-degree panoramic sensing offers a form of supersensing by capturing the entire surrounding environment at once. However, existing MLLM pipelines typically decompose panoramas into multiple perspective views, leaving the spherical structure of equirectangular projection (ERP) largely implicit. In this paper, we study pano-native understanding, which requires an MLLM to reason over an ERP panorama as a continuous, observer-centered space. To this end, we first define the key abilities for pano-native understanding, including semantic anchoring, spherical localization, reference-frame transformation, and depth-aware 3D spatial reasoning. We then build a large-scale metadata construction pipeline that converts mixed-source ERP panoramas into geometry-aware, language-grounded, and depth-aware supervision, and instantiate these signals as capability-aligned instruction tuning data. On the model side, we introduce PanoWorld with Spherical Spatial Cross-Attention, which injects spherical geometry into the visual stream. We further construct PanoSpace-Bench, a diagnostic benchmark for evaluating ERP-native spatial reasoning. Experiments show that PanoWorld substantially outperforms both proprietary and open-source baselines on the PanoSpace-Bench, H* Bench, and R2R-CE Val-Unseen benchmarks. These results demonstrate that robust panoramic reasoning requires dedicated pano-native supervision and geometry-aware model adaptation. All source code and proposed data will be publicly released.
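The abstract treats an ERP panorama as a continuous, observer-centered sphere, which is the geometry that abilities like spherical localization and reference-frame transformation operate in. The paper's own implementation is not reproduced here; as background, this is a minimal sketch of the standard mapping from an ERP pixel to a unit direction on the viewing sphere (the function name and the y-up, z-forward axis convention are illustrative assumptions, not the authors' code):

```python
import math

def erp_pixel_to_direction(u, v, width, height):
    """Map an equirectangular (ERP) pixel to a unit direction vector.

    Assumes the ERP image spans longitude [-pi, pi) left-to-right and
    latitude [pi/2, -pi/2] top-to-bottom, centered on the observer.
    """
    # Normalize to pixel centers, then convert to longitude/latitude.
    lon = ((u + 0.5) / width) * 2.0 * math.pi - math.pi    # in [-pi, pi)
    lat = math.pi / 2.0 - ((v + 0.5) / height) * math.pi   # in (-pi/2, pi/2)
    # Spherical -> Cartesian, with y up and z forward at lon = 0.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center pixel of the panorama looks straight ahead (+z).
print(erp_pixel_to_direction(255.5, 127.5, 512, 256))
```

Under this convention, every pixel corresponds to a fixed viewing direction, which is what makes geometry-aware supervision (e.g. injecting spherical coordinates into attention) possible without decomposing the panorama into perspective crops.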

Community

Paper submitter about 20 hours ago

PanoWorld


Get this paper in your agent:

hf papers read 2605.13169
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2605.13169 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2605.13169 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2605.13169 in a Space README.md to link it from this page.

Collections including this paper 1

Discussion (0)

