Hugging Face Daily Papers · 4 min read

ATLAS: Agentic or Latent Visual Reasoning? One Word is Enough for Both

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

Project Page: https://atlas-oneword.github.io
Code: https://github.com/ZiyuGuo99/ATLAS
arxiv:2605.15198


Published on May 14 · Submitted by taesiri on May 15
Authors: Ziyu Guo, Rain Liu, Xinyan Chen, Pheng-Ann Heng

Abstract

AI-generated summary: ATLAS presents a visual reasoning framework that combines agentic operations and latent representations using functional tokens, enabling efficient training and improved performance on complex benchmarks.

Visual reasoning, often interleaved with intermediate visual states, has emerged as a promising direction in the field. A straightforward approach is to directly generate images via unified models during reasoning, but this is computationally expensive and architecturally non-trivial. Recent alternatives include agentic reasoning through code or tool calls, and latent reasoning with learnable hidden embeddings. However, agentic methods incur context-switching latency from external execution, while latent methods lack task generalization and are difficult to train with autoregressive parallelization. To combine their strengths while mitigating their limitations, we propose ATLAS, a framework in which a single discrete 'word', termed a functional token, serves both as an agentic operation and as a latent visual reasoning unit. Each functional token is associated with an internalized visual operation, yet requires no visual supervision and remains a standard token in the tokenizer vocabulary, generated via plain next-token prediction. This design avoids verbose intermediate visual content generation while preserving compatibility with vanilla, scalable SFT and RL training, with no architectural or methodological modifications. To further address the sparsity of functional tokens during RL, we introduce Latent-Anchored GRPO (LA-GRPO), which stabilizes training by anchoring functional tokens with a statically weighted auxiliary objective, providing stronger gradient updates. Extensive experiments and analyses demonstrate that ATLAS achieves superior performance on challenging benchmarks while maintaining clear interpretability. We hope ATLAS offers a new paradigm that inspires future visual reasoning research.


Get this paper in your agent:

hf papers read 2605.15198
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2605.15198 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2605.15198 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2605.15198 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.

Discussion (0)

No comments yet.
