Hugging Face Daily Papers · 6 min read

Edit-Compass & EditReward-Compass: A Unified Benchmark for Image Editing and Reward Modeling

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

Recent image editing models have achieved remarkable progress in instruction following, multimodal understanding, and complex visual editing. However, existing benchmarks often fail to faithfully reflect human judgment, especially for strong frontier models, due to limited task difficulty and coarse-grained evaluation protocols. In parallel, reward models have become increasingly important for RL-based image editing optimization, yet existing reward model benchmarks still rely on unrealistic evaluation settings that deviate from practical RL scenarios. These limitations hinder reliable assessment of both image editing models and reward models. To address these challenges, we introduce Edit-Compass and EditReward-Compass, a unified evaluation suite for image editing and reward modeling. Edit-Compass contains 2,388 carefully annotated instances spanning six progressively challenging task categories, covering capabilities such as world knowledge reasoning, visual reasoning, and multi-image editing. Beyond broad task coverage, Edit-Compass adopts a fine-grained multidimensional evaluation framework based on structured reasoning and carefully designed scoring rubrics. 
In parallel, EditReward-Compass contains 2,251 preference pairs that simulate realistic reward modeling scenarios during RL optimization.
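The abstract describes EditReward-Compass as a set of preference pairs for evaluating reward models. As a rough illustration of how such a benchmark is typically scored (this is not the paper's actual protocol, and the `PreferencePair` fields and `score` interface below are hypothetical), a reward model can be graded by its pairwise accuracy: the fraction of pairs where it assigns a higher score to the human-preferred edit.

```python
# Illustrative sketch only: the pair format and `score` callable are
# hypothetical placeholders, not taken from the paper's released code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PreferencePair:
    instruction: str    # the edit instruction shown to both candidates
    chosen_image: str   # identifier of the human-preferred edited image
    rejected_image: str  # identifier of the dispreferred edited image


def pairwise_accuracy(pairs: List[PreferencePair],
                      score: Callable[[str, str], float]) -> float:
    """Fraction of pairs where the reward model ranks the preferred
    edit strictly above the rejected one (ties count as failures)."""
    if not pairs:
        return 0.0
    correct = sum(
        1 for p in pairs
        if score(p.instruction, p.chosen_image)
        > score(p.instruction, p.rejected_image)
    )
    return correct / len(pairs)


# Toy stand-in for a real reward model: scores by identifier length.
toy_score = lambda instr, img: float(len(img))

pairs = [
    PreferencePair("add a hat", "good_edit.png", "bad.png"),
    PreferencePair("remove car", "ok.png", "worse_edit.png"),
]
print(pairwise_accuracy(pairs, toy_score))  # 0.5: one of two pairs ranked correctly
```

A real evaluation would replace `toy_score` with the reward model under test; accuracy near 0.5 on such pairs indicates the model is no better than chance at reproducing the human preference.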
arxiv:2605.13062


Published on May 13 · Submitted by Yang Shi on May 14
Authors: Xuehai Bai, Yang Shi, Yi-Fan Zhang, Xuanyu Zhu, Yuran Wang, Yifan Dai, Xinyu Liu, Yiyan Ji, Xiaoling Gu, Yuanxing Zhang



Get this paper in your agent:

hf papers read 2605.13062
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2605.13062 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2605.13062 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2605.13062 in a Space README.md to link it from this page.

Collections including this paper 1

Discussion (0)

