Paper: arXiv:2605.11550 · Published 2026-05-12
Authors: Hongbo Lu, Liang Yao, Chenghao He, Haoyu Wang, Xiang Gu, Xianfei Li, Wenlong Liao, Tao He, Pai Peng (COWARobot)
Project page: https://cowarobot-ai.github.io/ · Code: https://github.com/COOWAI/DAWN
The DAWN of World-Action Interactive Models
Abstract
A plausible scene evolution depends on the maneuver being considered, while a good maneuver depends on how the scene may evolve. Existing World Action Models (WAMs) largely miss this reciprocity, treating world prediction and action generation as either isolated parallel branches or rigid predict-then-plan pipelines. We formalize this perspective as World-Action Interactive Models (WAIMs), and instantiate it in autonomous driving with DAWN (Denoising Actions and World iNteractive model), a simple yet strong latent generative baseline. DAWN operates in a compact semantic latent space and couples a World Predictor with a World-Conditioned Action Denoiser: the predicted world hypothesis conditions action denoising, while the denoised action hypothesis is fed back to update the world prediction, so that both are recursively refined during inference. Rather than eliminating test-time world evolution altogether or rolling out the full future in pixel space, DAWN performs a short explicit latent rollout that is sufficient to support long-horizon trajectory generation in complex interactive scenes. Experiments show that DAWN achieves strong planning performance and favorable safety-related results across multiple autonomous driving benchmarks. More broadly, our results suggest that interactive world-action generation is a principled path toward truly actionable world models.
AI-generated summary
World-Action Interactive Models (WAIMs) jointly model scene evolution and actions through recursive refinement, enabling effective long-horizon planning in autonomous driving scenarios.
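The alternating refinement the abstract describes can be sketched as a toy loop. This is a minimal illustration, not the paper's implementation: the module names, latent dimensionality, and linear update rules below are all placeholder assumptions standing in for the learned World Predictor and World-Conditioned Action Denoiser.

```python
import numpy as np

rng = np.random.default_rng(0)

def world_predictor(scene_latent, action_hypothesis):
    # Toy stand-in: predict a short future world rollout conditioned on
    # the current action hypothesis (the real model is a learned network
    # operating in a semantic latent space).
    return 0.9 * scene_latent + 0.1 * action_hypothesis

def action_denoiser(noisy_action, world_hypothesis, step, n_steps):
    # Toy stand-in for one denoising step, conditioned on the predicted
    # world hypothesis; later steps pull the action closer to it.
    alpha = (step + 1) / n_steps
    return (1 - alpha) * noisy_action + alpha * world_hypothesis

scene_latent = rng.normal(size=8)   # encoded current scene (placeholder)
action = rng.normal(size=8)         # action hypothesis starts as noise
world = world_predictor(scene_latent, action)

n_steps = 4                         # short explicit rollout / refinement
for t in range(n_steps):
    # The predicted world hypothesis conditions action denoising ...
    action = action_denoiser(action, world, t, n_steps)
    # ... and the denoised action is fed back to update the world prediction,
    # so both are recursively refined during inference.
    world = world_predictor(scene_latent, action)
```

The key structural point is the feedback edge: unlike a predict-then-plan pipeline, the world prediction is re-run after every action refinement step.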