FrameSkip: Learning from Fewer but More Informative Frames in VLA Training
Published on May 13, 2026 · Submitted by yubin on May 14, 2026

Authors: Bin Yu, Shijie Lian, Xiaopeng Lin, Zhaolong Shen, Yuliang Wei, Changti Wu, Hang Yuan, Haishan Liu, Bailing Wang, Cong Huang, Kai Chen
Abstract
AI-generated summary: FrameSkip is a data-layer frame selection method that improves VLA policy training by prioritizing high-importance frames based on action-variation and visual-action-coherence metrics.
Vision-Language-Action (VLA) policies are commonly trained from dense robot demonstration trajectories, often collected through teleoperation, by sampling every recorded frame as if it provided equally useful supervision. We argue that this convention creates a temporal supervision imbalance: long low-change segments dominate the training stream, while manipulation-critical transitions such as alignment, contact, grasping, and release appear only sparsely. We introduce FrameSkip, a data-layer frame selection framework that scores trajectory frames using action variation, visual-action coherence, task-progress priors, and gripper-transition preservation, then remaps training samples toward high-importance frames under a target retention ratio. Because FrameSkip operates only in the dataloader, it leaves the VLA architecture, action head, training objective, and inference procedure unchanged. Across RoboCasa-GR1, SimplerEnv, and LIBERO, FrameSkip improves the success-retention trade-off over full-frame training and simpler frame selection variants, achieving a macro-average success rate of 76.15% across the three benchmarks compared with 66.50% for full-frame training while using a compressed trajectory view that retains 20% of unique frames in the main setting.
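The abstract names the scoring signals but not their exact formulas, so the following is only a minimal sketch of the idea in Python. All names, weights, and score definitions are assumptions for illustration (the visual-action coherence term is omitted here because it requires image features); this is not the paper's implementation.

```python
# Hypothetical sketch of FrameSkip-style frame scoring and retention.
# Score definitions and weights are illustrative assumptions, not the paper's.
import numpy as np

def score_frames(actions: np.ndarray, progress: np.ndarray,
                 gripper: np.ndarray, w=(1.0, 1.0)) -> np.ndarray:
    """Assign an importance score to each frame of one trajectory.

    actions:  (T, D) action vectors for T frames
    progress: (T,)   task-progress prior in [0, 1]
    gripper:  (T,)   binary gripper state (0 = open, 1 = closed)
    """
    # 1) Action variation: frames where the action changes sharply matter more.
    diffs = np.linalg.norm(np.diff(actions, axis=0), axis=1)
    action_var = np.concatenate([[0.0], diffs])            # pad to length T
    # 2) Task-progress prior: emphasize frames where progress changes quickly.
    prog_rate = np.abs(np.gradient(progress))
    # 3) Gripper-transition preservation: always keep frames at open/close events.
    transition = np.concatenate([[0], np.abs(np.diff(gripper))]).astype(float)
    score = w[0] * action_var + w[1] * prog_rate
    score[transition > 0] = np.inf                         # hard-retain transitions
    return score

def select_frames(score: np.ndarray, retention: float = 0.2) -> np.ndarray:
    """Keep the top `retention` fraction of frames, in temporal order."""
    k = max(1, int(round(retention * len(score))))
    return np.sort(np.argsort(score)[-k:])
```

Under this reading, the 20% retention ratio in the main setting corresponds to `retention=0.2`: per trajectory, the top fifth of frames by importance score (plus all gripper transitions, which score infinity) survive into the training stream.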
Community
TLDR: FrameSkip is a data-layer framework that improves VLA training by selectively retaining only the most informative frames—based on action variation, visual-action coherence, and task-progress cues—rather than uniformly sampling all trajectory frames, achieving a 76.15% macro-average success rate across three benchmarks while using just 20% of the original frames.
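Because the abstract stresses that FrameSkip lives entirely in the dataloader, the integration can be pictured as an index-remapping wrapper around the original dataset. This is a hypothetical sketch: the class name `FrameSkipView` and its interface are mine, not the paper's.

```python
# Hypothetical dataloader-level integration: only the sampled indices change;
# the VLA architecture, action head, objective, and inference stay untouched.
from torch.utils.data import Dataset

class FrameSkipView(Dataset):
    """Wraps a full-frame trajectory dataset, exposing only retained frames."""

    def __init__(self, base: Dataset, kept_indices: list[int]):
        self.base = base          # original dataset, one item per frame
        self.kept = kept_indices  # output of the scoring/selection step

    def __len__(self) -> int:
        return len(self.kept)

    def __getitem__(self, i: int):
        # Remap the compressed index i back to the original frame index.
        return self.base[self.kept[i]]
```

The wrapped view can then be fed to a standard `torch.utils.data.DataLoader`, which is consistent with the claim that training and inference code need no changes.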