Hugging Face Daily Papers · 3 min read

The Many Faces of On-Policy Distillation: Pitfalls, Mechanisms, and Fixes

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

arxiv:2605.11182


Published on May 11 · Submitted by siqi zhu on May 13
Authors: Siqi Zhu, Xuyan Ye, Hongyu Lu, Weiye Shi, Ge Liu
Abstract

On-policy distillation (OPD) and on-policy self-distillation (OPSD) have emerged as promising post-training methods for large language models, offering dense token-level supervision on trajectories sampled from the model's own policy. However, existing results on their effectiveness remain mixed: while OP(S)D has shown promise in system prompt and knowledge internalization, recent studies also report instability and degradation. In this work, we present a comprehensive empirical study of when OPD and OPSD work, when they fail, and why. We find that OPD on mathematical reasoning is highly sensitive to teacher choice and loss formulation, whereas OPSD fails in our tested settings due to test-time absence of instance-specific privileged information (PI). In contrast, OPSD is effective when PI represents a shared latent rule, such as a system prompt or alignment preference. We identify three failure mechanisms: (1) distribution mismatch between teacher and student caused by conditioning on student-generated prefixes, (2) optimization instability from biased TopK reverse-KL gradients, and (3) an OPSD-specific limitation where the student learns a PI-free policy that aggregates PI-conditioned teachers, which is insufficient when PI is instance-specific. We further show that stop-gradient TopK objectives, RLVR-adapted teachers, and SFT-stabilized students mitigate these failures.

AI-generated summary

On-policy distillation and self-distillation methods for large language models exhibit varying effectiveness depending on teacher choice, loss formulation, and instance-specific privileged information availability, with identified failure mechanisms including distribution mismatch, optimization instability, and PI-free policy learning.
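The abstract leans on two technical ingredients: a per-token reverse-KL objective evaluated on trajectories sampled from the student's own policy, and a TopK variant whose renormalization biases the gradient unless a stop-gradient is applied. The PyTorch sketch below is a minimal illustration under assumptions, not the authors' implementation: the function names, the choice of k, the use of the teacher's top-k support, and the placement of the stop-gradient are all hypothetical.

import torch
import torch.nn.functional as F

def reverse_kl_loss(student_logits, teacher_logits):
    # Per-token reverse KL(student || teacher). Both logit tensors have shape
    # (batch, seq_len, vocab) and are scored on the SAME student-sampled
    # trajectory (the on-policy part); teacher_logits is assumed to come
    # from a no-grad pass.
    log_p_s = F.log_softmax(student_logits, dim=-1)
    log_p_t = F.log_softmax(teacher_logits, dim=-1)
    # KL(p_s || p_t) = sum_v p_s(v) * (log p_s(v) - log p_t(v))
    kl = (log_p_s.exp() * (log_p_s - log_p_t)).sum(dim=-1)
    return kl.mean()

def topk_reverse_kl_loss(student_logits, teacher_logits, k=64, stop_grad_norm=True):
    # Reverse KL restricted to the teacher's top-k tokens (an assumed reading
    # of "TopK"). Renormalizing the student over the truncated support routes
    # gradient through the normalizer, one plausible source of the bias the
    # paper describes; detaching the student's normalizer is a guess at what
    # the stop-gradient fix looks like.
    log_p_t = F.log_softmax(teacher_logits, dim=-1)
    topk_t, topk_idx = log_p_t.topk(k, dim=-1)         # teacher's top-k support
    log_p_s = F.log_softmax(student_logits, dim=-1)
    log_p_s_k = log_p_s.gather(-1, topk_idx)           # student log-probs on that support

    log_z_t = torch.logsumexp(topk_t, dim=-1, keepdim=True)
    log_z_s = torch.logsumexp(log_p_s_k, dim=-1, keepdim=True)
    if stop_grad_norm:
        log_z_s = log_z_s.detach()                     # stop-gradient through the normalizer
    log_q_t = topk_t - log_z_t
    log_q_s = log_p_s_k - log_z_s

    kl = (log_q_s.exp() * (log_q_s - log_q_t)).sum(dim=-1)
    return kl.mean()

In an on-policy loop one would sample completions from the student, run both models over the same token sequence (the teacher under torch.no_grad()), and minimize one of these losses per token; the on-policy sampling is what distinguishes OPD from plain sequence-level distillation.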

Community

Paper author · Paper submitter · about 19 hours ago

Excited to share our paper on On-Policy Distillation!


Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 0

No Collection including this paper


