Agent-ValueBench: A Comprehensive Benchmark for Evaluating Agent Values
Haonan Dong, Qiguan Feng, Kehan Jiang, Haoran Ye, Xin Zhang, Guojie Song (Peking University)
arXiv:2605.10365 · Project page: https://valuebyte-ai.github.io/Agent-ValueBench.github.io/ · Code: https://github.com/ValueByte-AI/Agent-ValueBench
Abstract
Autonomous agents exhibit value systems distinct from those of their underlying language models, requiring new benchmarking approaches to assess alignment across diverse execution environments.
AI-generated summary
Autonomous agents have rapidly matured as task executors and seen widespread deployment via harnesses such as OpenClaw. Safety concerns have rightly drawn growing research attention, and beneath them lie the values silently steering agent behavior. Existing value benchmarks, however, remain confined to LLMs, leaving agent values largely uncharted. From intuitive, empirical, and theoretical vantage points, we show that an agent's values diverge from those of its underlying LLM, and the agentic modality further introduces dataset-, evaluation-, and system-level challenges absent from text-only protocols. We close this gap with Agent-ValueBench, the first benchmark dedicated to agent values. It features 394 executable environments across 16 domains, offering 4,335 value-conflict tasks that cover 28 value systems and 332 dimensions. Every instance is co-synthesized through our purpose-built end-to-end pipeline and curated per-instance by professional psychologists. Each task ships with two pole-aligned golden trajectories whose checkpoints anchor a trajectory-level rubric-based judge. Benchmarking 14 frontier proprietary and open-weights models across 4 mainstream harnesses, we uncover three concerted findings. Agent values first manifest as a Value Tide of cross-model homogeneity beneath interpretable counter-currents. This tide bends non-additively under harness pull, and yet more decisively under deliberate steering via embedded skills. Together these results signal that the agent-alignment lever is shifting from classical model alignment and prompt steering toward harness alignment and skill steering.
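The evaluation setup the abstract describes (checkpoints drawn from pole-aligned golden trajectories anchoring a trajectory-level rubric-based judge) can be pictured with a minimal sketch. Everything below is an illustrative assumption, not the paper's implementation: the class names, the weighted-checkpoint scoring, and the keyword matcher (which in practice would be an LLM judge) are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Checkpoint:
    # A rubric item derived from a golden trajectory, e.g. an action or
    # disclosure the value-aligned agent is expected to make.
    description: str
    weight: float = 1.0


@dataclass
class RubricJudge:
    """Scores an agent trajectory against checkpoints from a golden trajectory."""
    checkpoints: list

    def score(self, trajectory_steps, matcher):
        # `matcher(step, checkpoint)` decides whether a step satisfies a
        # checkpoint; in a real pipeline this role would be played by an
        # LLM judge rather than a deterministic function.
        hit = sum(cp.weight for cp in self.checkpoints
                  if any(matcher(step, cp) for step in trajectory_steps))
        total = sum(cp.weight for cp in self.checkpoints)
        return hit / total if total else 0.0


# Toy usage with a keyword matcher standing in for an LLM judge.
judge = RubricJudge([Checkpoint("mentions privacy"),
                     Checkpoint("refuses exfiltration", weight=2.0)])
steps = ["I will respect privacy settings",
         "I refuse to copy the exfiltration payload"]
score = judge.score(steps, lambda step, cp: cp.description.split()[-1] in step)
```

The weighted average over checkpoints yields a trajectory-level score in [0, 1]; the two pole-aligned golden trajectories would each contribute their own checkpoint set, letting the judge locate an agent's behavior between the two value poles.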
Cite arxiv.org/abs/2605.10365 in a model, dataset, or Space README.md to link it from this page.