AlphaGRPO: Unlocking Self-Reflective Multimodal Generation in UMMs via Decompositional Verifiable Reward
Abstract
AI-generated summary

AlphaGRPO enhances multimodal generation by applying Group Relative Policy Optimization to AR-Diffusion Unified Multimodal Models through self-reflective refinement and decompositional verifiable reward mechanisms.
In this paper, we propose AlphaGRPO, a novel framework that applies Group Relative Policy Optimization (GRPO) to AR-Diffusion Unified Multimodal Models (UMMs) to enhance multimodal generation capabilities without an additional cold-start stage. Our approach unlocks the model's intrinsic potential to perform advanced reasoning tasks: Reasoning Text-to-Image Generation, where the model actively infers implicit user intents, and Self-Reflective Refinement, where it autonomously diagnoses and corrects misalignments in generated outputs. To address the challenge of providing stable supervision for real-world multimodal generation, we introduce the Decompositional Verifiable Reward (DVReward). Unlike holistic scalar rewards, DVReward uses an LLM to decompose complex user requests into atomic, verifiable semantic and quality questions, which are then evaluated by a general MLLM to provide reliable and interpretable feedback. Extensive experiments demonstrate that AlphaGRPO yields robust improvements across multimodal generation benchmarks, including GenEval, TIIF-Bench, DPG-Bench, and WISE, while also achieving significant gains on the GEdit editing benchmark without any training on editing data. These results validate that our self-reflective reinforcement approach effectively leverages inherent understanding to guide high-fidelity generation. Project page: https://huangrh99.github.io/AlphaGRPO/
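The abstract's reward pipeline can be illustrated with a minimal sketch: a prompt is decomposed into atomic yes/no questions, each answered with a binary verdict, and the per-sample rewards are then normalized group-relatively in GRPO style. This is not the paper's implementation; the decomposing LLM and answering MLLM are replaced here by pre-supplied verdicts, and all function names are illustrative.

```python
from statistics import mean, pstdev

def dv_reward(verdicts):
    """Illustrative DVReward: the fraction of atomic questions that pass.
    In the paper, an LLM decomposes the request into these questions and
    a general MLLM produces each pass/fail verdict; here they are given."""
    return sum(verdicts) / len(verdicts)

def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each sample's reward against the
    mean and standard deviation of its sampling group."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mu) / sigma for r in rewards]

# Four sampled images for one prompt, each checked against three atomic
# questions (e.g. "is the cat red?", "are there two cats?", ...).
verdicts_per_sample = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 0],
]
rewards = [dv_reward(v) for v in verdicts_per_sample]
advantages = group_relative_advantages(rewards)
```

The group normalization means only relative quality within a sampling group drives the policy update, which is what lets GRPO work without a learned value model.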
Community
AlphaGRPO enables RL training for multimodal generation across text and image outputs in AR-Diffusion-native unified multimodal models such as BAGEL. It supports training for reasoning text-to-image generation and self-reflective refinement.