Hugging Face Daily Papers · 3 min read

Learning Agentic Policy from Action Guidance

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

arxiv:2605.12004 · GitHub: https://github.com/AMAP-ML/ActGuide-RL

Learning Agentic Policy from Action Guidance

Published on May 12 · Submitted by Yuxiang Ji on May 14
Authors: Yuxiang Ji, Zengbin Wang, Yong Wang, Shidong Yang, Ziyu Ma, Guanhua Chen, Zonghua Sun, Liaoni Wu, Xiangxiang Chu

AI-generated summary

Agentic reinforcement learning for large language models leverages action data from human interactions as reference guidance to improve exploration and reduce dependence on costly supervised fine-tuning.

Abstract

Agentic reinforcement learning (RL) for Large Language Models (LLMs) critically depends on the exploration capability of the base policy, as training signals emerge only within its in-capability region. For tasks where the base policy cannot reach reward states, additional training or external guidance is needed to recover effective learning signals. Rather than relying on costly iterative supervised fine-tuning (SFT), we exploit the abundant action data generated in everyday human interactions. We propose ActGuide-RL, which injects action data as plan-style reference guidance, enabling the agentic policy to overcome reachability barriers to reward states. Guided and unguided rollouts are then jointly optimized via mixed-policy training, internalizing the exploration gains back into the unguided policy. Motivated by a theoretical and empirical analysis of the benefit-risk trade-off, we adopt a minimal intervention principle that invokes guidance only as an adaptive fallback, matching task difficulty while minimizing off-policy risk. On search-agent benchmarks, ActGuide-RL substantially improves over zero RL (+10.7 pp on GAIA and +19 pp on XBench with Qwen3-4B), and performs on par with the SFT+RL pipeline without any cold start. This suggests a new paradigm for agentic RL that reduces the reliance on heavy SFT data by using scalable action guidance instead.
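
To make the control flow concrete, here is a toy Python sketch of the minimal-intervention idea described in the abstract. It is not the authors' implementation: every function, constant, and probability below is an illustrative assumption. The policy first attempts an unguided rollout; plan-style guidance is injected only as a fallback when no reward state is reached; and both kinds of rollouts feed the same update.

```python
import random

# Toy sketch (NOT the authors' code) of ActGuide-RL's control flow:
# unguided rollouts first, action-data guidance only as a fallback,
# and a single mixed-policy update over both kinds of rollouts.
# "policy_strength" stands in for the base policy's reach; the 0.4
# guidance bonus and 0.05 learning rate are made-up illustrations.

def rollout(policy_strength: float, guided: bool) -> tuple[bool, float]:
    """Simulate one episode; guidance raises the success probability."""
    p_success = min(1.0, policy_strength + (0.4 if guided else 0.0))
    success = random.random() < p_success
    return success, 1.0 if success else 0.0

def actguide_step(policy_strength: float, n_rollouts: int = 8) -> float:
    """One mixed-policy training step with adaptive fallback guidance."""
    rewards = []
    for _ in range(n_rollouts):
        ok, r = rollout(policy_strength, guided=False)
        if not ok:
            # Minimal intervention: inject guidance only when the
            # unguided policy cannot reach a reward state.
            ok, r = rollout(policy_strength, guided=True)
        rewards.append(r)
    # Stand-in for the policy-gradient update over the mixed rollouts:
    # successful trajectories (guided or not) nudge the policy upward,
    # internalizing the exploration gains into the unguided policy.
    return min(1.0, policy_strength + 0.05 * (sum(rewards) / n_rollouts))

if __name__ == "__main__":
    strength = 0.1  # weak base policy: most unguided rollouts fail
    for _ in range(20):
        strength = actguide_step(strength)
    print(f"policy strength after training: {strength:.2f}")
```

The fallback structure is the point of the abstract's benefit-risk trade-off: guided rollouts recover learning signal on tasks beyond the base policy's reach, while keeping most rollouts on-policy to limit off-policy risk.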

Community

Paper author · Paper submitter · about 19 hours ago

Explore agentic RL through action data, without SFT.


Get this paper in your agent:

hf papers read 2605.12004
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2605.12004 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2605.12004 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2605.12004 in a Space README.md to link it from this page.

Collections including this paper 1

