Hugging Face Daily Papers · 4 min read

F-GRPO: Factorized Group-Relative Policy Optimization for Unified Candidate Generation and Ranking

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

arxiv:2605.12995


Published on May 13 · Submitted by Rohan Surana on May 14
Authors: Rohan Surana, Gagan Mundada, Junda Wu, Xintong Li, Yizhu Jiao, Bowen Jin, Sizhe Zhou, Tong Yu, Ritwik Sinha, Jiawei Han, Jingbo Shang, Julian McAuley
Abstract

AI-generated summary: A unified framework combines candidate generation and ranking in a single autoregressive model using factorized group-relative policy optimization to address credit assignment challenges in end-to-end retrieval optimization.

Traditional retrieval pipelines optimize utility through stages of candidate retrieval and reranking, where ranking operates over a predefined candidate set. Large Language Models (LLMs) broaden this into a generative process: given a candidate pool, an LLM can generate a subset and order it within a single autoregressive pass. However, this flexibility introduces a new optimization challenge: the model must search a combinatorial output space while receiving utility feedback only after the full ranked list is generated. Because this feedback is defined over the completed sequence, it cannot distinguish whether a poor result arises from failing to generate a relevant subset or from failing to rank that subset correctly. This credit assignment gap makes end-to-end optimization unstable and sample-inefficient. Existing systems often address this by separating candidate generation from ranking. However, such decoupling remains misaligned with downstream utility because ranking is limited by the candidate set it receives. To bridge this gap, we propose a unified framework that performs both within a single autoregressive rollout and optimizes them end-to-end via factorized group-relative policy optimization (F-GRPO). Our framework factorizes the policy into candidate generation and ranking while sharing a single LLM backbone, and jointly trains them with an order-invariant coverage reward and a position-aware utility reward. To address the resulting phase-specific credit assignment problem, we use separate group-relative advantages for generation and ranking within a two-phase sequence-level objective. Across sequential recommendation and multi-hop question answering benchmarks, F-GRPO improves top-ranked performance over GRPO and decoupled baselines, outperforms supervised alternatives, and remains competitive with strong zero-shot rerankers, with no architectural changes at inference time.
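The two-phase reward scheme described above can be sketched in code. The following is an illustrative reconstruction from the abstract alone, not the authors' implementation: the function names (`coverage_reward`, `utility_reward`, `group_relative_advantages`), the DCG-style utility, and the group size are all assumptions chosen to make the idea concrete. The key point it demonstrates is that each rollout gets two rewards, an order-invariant one for the generation phase and a position-aware one for the ranking phase, and that each phase receives its own group-normalized advantage rather than one blended signal.

```python
import math

def coverage_reward(generated, relevant):
    """Order-invariant reward (assumed form): fraction of relevant
    items present in the candidate subset, regardless of position."""
    return len(set(generated) & set(relevant)) / max(len(relevant), 1)

def utility_reward(ranked, relevant, k=10):
    """Position-aware reward (assumed DCG-style form): relevant items
    earn more when placed earlier in the ranked list."""
    gain = sum(1.0 / math.log2(i + 2)
               for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return gain / ideal if ideal else 0.0

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style normalization: each rollout's advantage is its
    reward standardized against the other rollouts in the group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# A group of G=3 rollouts for one query. Each rollout is a candidate
# subset (generation phase) already ordered by the model (ranking phase).
relevant = ["a", "b", "c"]
rollouts = [["a", "b", "x"], ["c", "a", "b"], ["x", "y", "a"]]

cov = [coverage_reward(r, relevant) for r in rollouts]
util = [utility_reward(r, relevant) for r in rollouts]

# Separate phase-specific advantages: generation tokens would be
# trained with adv_generation, ranking tokens with adv_ranking.
adv_generation = group_relative_advantages(cov)
adv_ranking = group_relative_advantages(util)
```

Factorizing the advantages this way is what lets the objective tell apart a rollout that covered the right items but ordered them badly (high coverage advantage, low utility advantage) from one that never generated them at all, which is exactly the credit assignment gap the abstract identifies.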

Community

Paper submitter about 20 hours ago

F-GRPO: Factorized Group-Relative Policy Optimization for Unified Candidate Generation and Ranking


Get this paper in your agent:

hf papers read 2605.12995
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash



