Hugging Face Daily Papers · 4 min read

SeePhys Pro: Diagnosing Modality Transfer and Blind-Training Effects in Multimodal RLVR for Physics Reasoning

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

arxiv:2605.09266

Published on May 10 · Submitted by Kun Xiang on May 13
Authors: Kun Xiang, Terry Jingchen Zhang, Zirong Liu, Bokai Zhou, Yueling Tang, Junjie Yu, Jiacong Lu, Shangrui Huang, Heng Li, Likui Zhang, Kunkun Liu, Changzheng Zhang, Yangle Fang, Boqiang Guo, Hui-Ling Zhen, Dandan Tu, Yinya Huang, Xiaodan Liang
Abstract

AI-generated summary

The SeePhys Pro benchmark reveals that current multimodal models struggle with representation-invariant reasoning when information shifts from text to visual formats, and demonstrates that blind training can improve performance through residual textual cues.

We introduce SeePhys Pro, a fine-grained modality-transfer benchmark that studies whether models preserve the same reasoning capability when critical information is progressively transferred from text to image. Unlike standard vision-essential benchmarks that evaluate a single input form, SeePhys Pro features four semantically aligned variants of each problem with progressively increasing visual elements. Our evaluation shows that current frontier models are far from representation-invariant reasoners: performance degrades on average as information moves from language to diagrams, with visual variable grounding as the most critical bottleneck. Motivated by this inference-time fragility, we further develop large training corpora for multimodal RLVR and use blind training as a diagnostic control, finding that RL with all training images masked can still improve performance on unmasked validation sets. To analyze this effect, we run text-deletion, image-mask-rate, and format-saturation controls, which suggest that such gains can arise from residual textual and distributional cues rather than valid visual evidence. Our results highlight the need to evaluate multimodal reasoning not only by final-answer accuracy, but also by robustness under modality transfer and by diagnostics that test whether improvements rely on task-critical visual evidence.
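
To make the blind-training control concrete, here is a minimal, hypothetical sketch (our own illustration, not the paper's code) of masking every training image before an RLVR update; `rlvr_update`, `evaluate`, and the batch field names are assumed.

```python
# Hypothetical sketch of a blind-training diagnostic control for
# multimodal RLVR: every training image is replaced by a blank
# placeholder, so the policy can only learn from text, format, and
# distributional cues. Names (rlvr_update, sample fields) are
# illustrative, not the paper's actual API.
from PIL import Image

def mask_image(img: Image.Image) -> Image.Image:
    """Replace an image with an all-black canvas of the same size."""
    return Image.new("RGB", img.size, color=(0, 0, 0))

def blind_training_step(policy, batch, rlvr_update):
    """One RLVR update in which every image in the batch is masked."""
    masked_batch = [
        {**sample, "image": mask_image(sample["image"])}
        for sample in batch
    ]
    return rlvr_update(policy, masked_batch)

# Diagnostic logic: if accuracy on an *unmasked* validation set still
# rises after training this way, the gains cannot rest on visual
# evidence -- exactly the effect the paper reports.
```

If accuracy on an unmasked validation set still improves after training like this, the gains cannot be attributed to genuine visual reasoning, which is the diagnostic argument the paper builds on.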

Community

This paper introduces a fine-grained benchmark for studying modality transfer in multimodal physics reasoning. By progressively moving key information from text to images, the paper shows that current MLLMs struggle to maintain consistent reasoning ability across modalities, especially when variable grounding relies on visual understanding. The work also reveals that multimodal RLVR can improve performance even under blind training, suggesting that many gains may come from residual textual cues rather than genuine visual reasoning.
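
To illustrate how such a modality-transfer evaluation might be scored, the sketch below (hypothetical code, not the benchmark's official harness) computes per-variant accuracy over the four semantically aligned variants of each problem and the drop from the most textual to the most visual form; all field names are assumptions.

```python
# Hypothetical scoring sketch for a modality-transfer benchmark:
# each problem has four semantically aligned variants, from mostly
# textual ("V1") to mostly diagrammatic ("V4"). Field names
# ("variant", "correct") are illustrative.
from collections import defaultdict

def accuracy_by_variant(results):
    """results: iterable of dicts with 'variant' (e.g. 'V1'..'V4')
    and 'correct' (bool). Returns {variant: accuracy}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["variant"]] += 1
        hits[r["variant"]] += int(r["correct"])
    return {v: hits[v] / totals[v] for v in sorted(totals)}

def transfer_gap(acc):
    """Accuracy drop from the most-textual to the most-visual variant."""
    variants = sorted(acc)
    return acc[variants[0]] - acc[variants[-1]]
```

A positive `transfer_gap` on matched problems is the signature of the text-to-diagram degradation the paper describes, since the four variants are semantically equivalent by construction.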


Get this paper in your agent:

```bash
hf papers read 2605.09266
```

Don't have the latest CLI?

```bash
curl -LsSf https://hf.co/cli/install.sh | bash
```

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2605.09266 in a model README.md to link it from this page.

Datasets citing this paper 4

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2605.09266 in a Space README.md to link it from this page.

Collections including this paper 1

