Hugging Face Daily Papers · 3 min read

RoboEvolve: Co-Evolving Planner-Simulator for Robotic Manipulation with Limited Data

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

https://arxiv.org/pdf/2605.13775
arxiv:2605.13775

Published on May 13 · Submitted by Xianfeng Wu on May 14

Authors: Harold Haodong Chen, Sirui Chen, Yingjie Xu, Wenhang Ge, Ying-Cong Chen

AI-generated summary

RoboEvolve combines vision-language and video generation models in a co-evolutionary framework to enable scalable robotic manipulation with improved data efficiency and continuous learning capabilities.

Abstract

The scalability of robotic manipulation is fundamentally bottlenecked by the scarcity of task-aligned physical interaction data. While vision-language models (VLMs) and video generation models (VGMs) hold promise for autonomous data synthesis, they suffer from semantic-spatial misalignment and physical hallucinations, respectively. To bridge this gap, we introduce RoboEvolve, a novel framework that couples a VLM planner and a VGM simulator into a mutually reinforcing co-evolutionary loop. Operating purely on unlabeled seed images, RoboEvolve leverages a cognitive-inspired dual-phase mechanism: (i) daytime exploration fosters physically grounded behavioral discovery through a semantic-controlled multi-granular reward, and (ii) nighttime consolidation mines "near-miss" failures to stabilize policy optimization. Guided by an autonomous progressive curriculum, the system naturally scales from simple atomic actions to complex tasks. Extensive experiments demonstrate that RoboEvolve (I) achieves superior effectiveness, elevating base planners by 30 absolute points and amplifying simulator success by 48% on average; (II) exhibits extreme data efficiency, surpassing fully supervised baselines with merely 500 unlabeled seeds--a 50x reduction; and (III) demonstrates robust continual learning without catastrophic forgetting.
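To make the abstract's dual-phase loop concrete, here is a minimal Python sketch of the described daytime-exploration / nighttime-consolidation cycle. This is not the authors' code: every class, method, threshold, and the reward function below are hypothetical placeholders standing in for the real components (the VLM planner, the VGM simulator, and the semantic-controlled multi-granular reward).

```python
"""Illustrative sketch of the dual-phase co-evolution loop from the abstract.
NOT the authors' implementation: all names and values are placeholders."""

import random

class StubPlanner:                        # stands in for the VLM planner
    def propose_task(self, image, stage):
        return f"{stage}-task-for-{image}"
    def update(self, task, reward):
        pass                              # real version: policy optimization

class StubSimulator:                      # stands in for the VGM simulator
    def rollout(self, image, task):
        return f"video({task})"
    def update(self, video, reward):
        pass

def multi_granular_reward(task, video):
    """Placeholder for the semantic-controlled multi-granular reward."""
    return random.random()

def co_evolve(planner, simulator, seed_images, num_days=3):
    # Assumed curriculum stages, scaling from atomic actions to complex tasks.
    curriculum = ["atomic", "composite", "long-horizon"]
    for day in range(num_days):
        stage = curriculum[min(day, len(curriculum) - 1)]
        near_misses = []
        # Daytime exploration: propose tasks on unlabeled seeds, roll them
        # out in the simulator, and score each rollout.
        for image in seed_images:
            task = planner.propose_task(image, stage)
            video = simulator.rollout(image, task)
            r = multi_granular_reward(task, video)
            planner.update(task, r)
            simulator.update(video, r)
            if 0.4 <= r < 0.6:            # assumed "near-miss" band
                near_misses.append((task, r))
        # Nighttime consolidation: replay near-miss failures to stabilize
        # policy optimization, as the abstract describes.
        for task, r in near_misses:
            planner.update(task, r)

co_evolve(StubPlanner(), StubSimulator(), ["seed_0.png", "seed_1.png"])
```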


Get this paper in your agent:

```
hf papers read 2605.13775
```

Don't have the latest CLI?

```
curl -LsSf https://hf.co/cli/install.sh | bash
```
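Alternatively, the PDF linked above can be fetched without the CLI. A minimal Python sketch using only the standard library (the output filename is arbitrary, not prescribed anywhere):

```python
# Minimal sketch: download the paper PDF (URL taken from the link above).
import urllib.request

PDF_URL = "https://arxiv.org/pdf/2605.13775"
urllib.request.urlretrieve(PDF_URL, "roboevolve.pdf")
print("Saved roboevolve.pdf")
```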

Models citing this paper: 0

No model links this paper yet.

Cite arxiv.org/abs/2605.13775 in a model README.md to link it from this page.
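For illustration, a hypothetical model card README.md excerpt that would create such a link (the repository and surrounding wording are invented; the arxiv URL is what gets detected):

```markdown
<!-- Hypothetical model card excerpt -->
This model was trained with the RoboEvolve framework
([paper](https://arxiv.org/abs/2605.13775)).
```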

Datasets citing this paper: 0

No dataset links this paper yet.

Cite arxiv.org/abs/2605.13775 in a dataset README.md to link it from this page.

Spaces citing this paper: 0

No Space links this paper yet.

Cite arxiv.org/abs/2605.13775 in a Space README.md to link it from this page.

Collections including this paper: 1

Discussion (0)

No comments yet.
