Adaptive Teacher Exposure for Self-Distillation in LLM Reasoning
Abstract
Adaptive Teacher Exposure for Self-Distillation (ATESD) improves large language model reasoning by dynamically adjusting teacher exposure during training through a learnable policy controller.
AI-generated summary
On-policy self-distillation has become a strong recipe for LLM reasoning, where a privileged teacher supervises the student's own rollouts while conditioning on the reference solution. A design choice shared by nearly all such methods, however, has gone unquestioned: the teacher always sees the full reference reasoning. We argue that this default itself is part of the problem and identify a teacher-side exposure mismatch: when the teacher conditions on reasoning far beyond the student's current competence, the resulting token targets become too strong to absorb. A controlled fixed-exposure sweep makes this concrete on two fronts: 1) full exposure is not reliably the best choice, and 2) student-teacher mismatch grows monotonically as the teacher sees more privileged reasoning. This motivates treating teacher exposure not as a fixed hyperparameter but as a learnable training-time control variable. We therefore propose Adaptive Teacher Exposure for Self-Distillation (ATESD). ATESD models the reveal ratio with a lightweight Beta-policy controller conditioned on compact training-state statistics, and uses one sampled exposure for a short hold window of student updates. To make this exposure controller learnable, we optimize it with a discounted learning-progress reward that scores each held decision by its effect on the student's future improvement rather than its immediate loss change, addressing the delayed credit assignment induced by on-policy distillation. Experiments on AIME 24, AIME 25, and HMMT 25 across Qwen3-{1.7B, 4B, 8B} show that ATESD consistently outperforms competitive self-distillation and RL baselines, improving over OPSD by +0.95, +2.05, and +2.33 Average@12 points respectively, and establishing adaptive teacher exposure as an effective new axis for reasoning self-distillation.
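The abstract's core mechanics can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`sample_exposure`, `reveal_prefix`, `learning_progress_reward`) and the exact reward form are assumptions; the paper's controller is a learned network over training-state statistics, here reduced to fixed Beta parameters.

```python
import math
import random

def sample_exposure(alpha: float, beta: float) -> float:
    """Sample a reveal ratio rho in (0, 1) from a Beta distribution,
    standing in for the paper's Beta-policy controller."""
    return random.betavariate(alpha, beta)

def reveal_prefix(reference_tokens: list[str], rho: float) -> list[str]:
    """Expose only the first rho fraction of the reference reasoning
    to the privileged teacher."""
    k = math.floor(rho * len(reference_tokens))
    return reference_tokens[:k]

def learning_progress_reward(future_losses: list[float],
                             baseline_loss: float,
                             gamma: float = 0.9) -> float:
    """Discounted learning-progress reward (assumed form): score a held
    exposure decision by the discounted sum of future improvements over
    the loss at decision time, rather than the immediate loss change."""
    return sum((gamma ** t) * (baseline_loss - loss)
               for t, loss in enumerate(future_losses))
```

With `rho = 0.5` and a 10-token reference, the teacher would condition on the first 5 tokens only; the reward is positive exactly when the student's loss falls below its decision-time baseline in the steps that follow.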
Community
This paper explores a simple but overlooked question in self-distillation for LLM reasoning: should the teacher always see the full reference reasoning? We identify a teacher-side exposure mismatch, where fully privileged teacher signals can be too strong for the student’s current competence. Instead of fixing the teacher exposure ratio, we propose ATESD, which adaptively controls how much reference reasoning is revealed to the teacher during training. Across AIME 24, AIME 25, and HMMT 25 with Qwen3 models, adaptive teacher exposure consistently improves over strong self-distillation and RL baselines. We hope this work highlights teacher exposure as a useful new training-time control axis for reasoning self-distillation.
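The "hold window" schedule described above (one sampled exposure reused across a short run of student updates, then resampled) can be sketched as follows. All names here are illustrative, and `controller` / `student_update` are placeholders for the paper's learned controller and on-policy distillation step.

```python
import random

def train_with_held_exposure(problems, num_steps: int, hold_window: int,
                             controller, student_update):
    """Hold one sampled reveal ratio fixed for a window of student
    updates, then let the controller resample at the window boundary."""
    rho = None
    for step in range(num_steps):
        if step % hold_window == 0:
            rho = controller(step)              # resample once per window
        problem, reference = random.choice(problems)
        k = int(rho * len(reference))
        student_update(problem, reference[:k])  # teacher sees only a prefix
    return rho
```

For example, with `num_steps=9` and `hold_window=3`, the controller is queried only at steps 0, 3, and 6; every student update inside a window shares the same exposure, which is what lets a single controller decision be credited with the student's improvement over that window.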
Paper: arxiv.org/abs/2605.11458