Offline Preference Optimization for Rectified Flow with Noise-Tracked Pairs
Abstract
AI-generated summary
Rectified flow models require prior noise information for effective preference optimization, which PNAPO addresses by augmenting preference data with noise samples and employing dynamic regularization for improved training efficiency.
Existing preference datasets for text-to-image models typically store only the final winner/loser images. This representation is insufficient for rectified flow (RF) models, whose generation is naturally indexed by a specific prior noise sample and follows a nearly straight denoising trajectory. In contrast, prior DPO-style alignment for diffusion models commonly estimates trajectories using an independent forward noising process, which can be mismatched to the true reverse dynamics and introduces unnecessary variance. We propose Prior Noise-Aware Preference Optimization (PNAPO), an off-policy alignment framework specialized for rectified flow. PNAPO augments preference data by retaining the paired prior noises used to generate each winner/loser image, turning the standard (prompt, winner, loser) triplet into a sextuple. Leveraging the straight-line property of RF, we estimate intermediate states via noise-image interpolation, which constrains the trajectory estimation space and yields a tighter surrogate objective for preference optimization. In addition, we introduce a dynamic regularization strategy that adapts the DPO regularization based on (i) the reward gap between winner and loser and (ii) training progress, improving stability and sample efficiency. Experiments on state-of-the-art RF T2I backbones show that PNAPO consistently improves preference metrics while substantially reducing training compute.
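To make the mechanics concrete, here is a minimal PyTorch sketch of how noise-tracked pairs and a dynamic DPO regularizer of the kind described above could be wired together. The batch field names (img_w, noise_w, reward_gap), the model's velocity-prediction call signature, and the specific beta schedule are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def interpolate_state(x1, x0, t):
    """Straight-line interpolation between an image x1 and its paired prior noise x0.

    Assumes the rectified-flow convention x_t = t * x1 + (1 - t) * x0, under which
    the velocity target along the trajectory is simply x1 - x0.
    """
    t = t.view(-1, 1, 1, 1)
    return t * x1 + (1.0 - t) * x0


def rf_velocity_error(model, prompt_emb, x1, x0, t):
    """Per-sample squared error between the model's predicted velocity and the
    straight-line target x1 - x0 at the interpolated state x_t."""
    xt = interpolate_state(x1, x0, t)
    v_pred = model(xt, t, prompt_emb)  # hypothetical velocity-prediction interface
    v_target = x1 - x0
    return ((v_pred - v_target) ** 2).flatten(1).mean(dim=1)


def pnapo_style_loss(policy, ref, batch, step, total_steps,
                     beta_base=500.0, gap_scale=1.0):
    """One possible DPO-style objective on noise-tracked winner/loser pairs.

    `batch` holds the assumed fields: prompt embeddings, winner/loser images,
    their paired prior noises, and a precomputed reward gap. The regularization
    strength beta is modulated by the reward gap and by training progress as a
    stand-in for the paper's dynamic rule.
    """
    t = torch.rand(batch["img_w"].shape[0], device=batch["img_w"].device)

    # Velocity errors for winner and loser under the policy and the frozen reference.
    err_w = rf_velocity_error(policy, batch["prompt"], batch["img_w"], batch["noise_w"], t)
    err_l = rf_velocity_error(policy, batch["prompt"], batch["img_l"], batch["noise_l"], t)
    with torch.no_grad():
        ref_w = rf_velocity_error(ref, batch["prompt"], batch["img_w"], batch["noise_w"], t)
        ref_l = rf_velocity_error(ref, batch["prompt"], batch["img_l"], batch["noise_l"], t)

    # Dynamic regularization: beta shrinks for large reward gaps and anneals with
    # training progress (an illustrative schedule, not the paper's exact one).
    progress = step / max(total_steps, 1)
    beta = beta_base * (1.0 - 0.5 * progress) / (1.0 + gap_scale * batch["reward_gap"])

    # Diffusion/flow-DPO style logits: the policy should reduce the winner's error
    # (relative to the reference) more than the loser's.
    logits = -beta * ((err_w - ref_w) - (err_l - ref_l))
    return -F.logsigmoid(logits).mean()
```

Because the prior noise paired with each image is stored in the dataset, the intermediate state is obtained by direct interpolation rather than by re-noising the image with fresh noise, which is the variance reduction the abstract refers to.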
Community
T2I Preference Optimization