Hugging Face Daily Papers · · 4 min read

MemReread: Enhancing Agentic Long-Context Reasoning via Memory-Guided Rereading

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

arxiv:2605.10268


Published on May 11
· Submitted by Baibei Ji on May 14

Authors: Baibei Ji, Xiaoyang Weng, Juntao Li, Zecheng Tang, Yihang Lou, Min Zhang
AI-generated summary

MemReread addresses long-context reasoning challenges by avoiding intermediate retrieval and employing question decomposition with rereading to recover discarded information, maintaining linear time complexity.

Abstract

To tackle long-context reasoning tasks without the quadratic complexity of standard attention mechanisms, approaches based on agent memory have emerged, which typically maintain a dynamically updated memory when linearly processing document chunks. To mitigate the potential loss of latent evidence in this memorize-while-reading paradigm, recent works have integrated retrieval modules that allow agents to recall information previously discarded during memory overwriting. However, retrieval-based recall suffers from both evidence loss during memory formation and interference induced by invalid queries. To overcome these limitations, we propose MemReread. Built upon streaming reading, MemReread circumvents intermediate retrieval. It triggers question decomposition and rereading when the final memory is insufficient, enabling the recovery of indirect facts that were prematurely discarded. This design supports non-linear reasoning while preserving the inherent logical flow of document comprehension. To further enhance practicality, we introduce a reinforcement learning framework that enhances length extrapolation capability while dynamically determining the number of rereading passes based on task complexity, thereby flexibly controlling computational overhead. Extensive experiments demonstrate that MemReread consistently outperforms baseline frameworks on long-context reasoning tasks, while maintaining linear time complexity with respect to context length.
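
The memorize-while-reading paradigm that the abstract builds on can be sketched as follows. This is an illustrative toy, not the paper's implementation: `update_memory` here is a hypothetical keyword filter standing in for an LLM call, and the chunking is plain fixed-width slicing. The point it shows is that a bounded memory updated once per chunk gives linear cost in context length.

```python
# Illustrative sketch of the memorize-while-reading paradigm (NOT the
# authors' code): stream over fixed-size chunks, keep a bounded memory.

def make_chunks(text, size):
    """Split the document into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def update_memory(memory, chunk, question, max_len=200):
    """Hypothetical memory update: keep question-relevant fragments and
    truncate to a fixed budget (a real agent would use an LLM here)."""
    relevant = [s for s in chunk.split(".")
                if any(w in s for w in question.split())]
    merged = (memory + " " + ".".join(relevant)).strip()
    return merged[-max_len:]  # bounded memory => linear total cost

def stream_read(document, question, chunk_size=50):
    """One linear pass: update the memory once per chunk."""
    memory = ""
    for chunk in make_chunks(document, chunk_size):
        memory = update_memory(memory, chunk, question)
    return memory
```

Because the memory is capped at a constant size, evidence that later turns out to matter can be overwritten mid-stream; this is exactly the loss that MemReread's rereading passes are designed to recover from.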

Community

Paper author · Paper submitter

📖 Overview

MemReread is a memory-guided LLM agent. Based on its current memory, it decomposes the task to isolate the highest-priority sub-question, performs a rereading pass guided by that sub-question, answers it directly from the resulting sub-memory, and finally updates the root memory with the sub-question-answer pair. This process repeats until the memory contains sufficient evidence to answer the original question.
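
The loop described above can be sketched as a short skeleton. This is a minimal illustration under stated assumptions, not the authors' code: the `llm_*` callbacks and the `is_sufficient` check are hypothetical stand-ins for model calls, and the root memory is modeled as a simple list of sub-question-answer pairs.

```python
# Minimal sketch of MemReread's rereading loop (hypothetical interfaces,
# NOT the released implementation). Each pass: decompose -> reread ->
# answer the sub-question -> fold the result back into the root memory.

def memory_guided_reread(document, question, llm_decompose, llm_reread,
                         llm_answer, is_sufficient, max_passes=3):
    """Repeat rereading passes until the root memory holds enough
    evidence to answer `question`, or the pass budget runs out."""
    memory = []  # root memory: list of (sub-question, answer) pairs
    for _ in range(max_passes):
        if is_sufficient(memory, question):
            break
        sub_q = llm_decompose(question, memory)  # highest-priority sub-question
        sub_mem = llm_reread(document, sub_q)    # rereading guided by sub_q
        sub_a = llm_answer(sub_q, sub_mem)       # answer from the sub-memory
        memory.append((sub_q, sub_a))            # update the root memory
    return llm_answer(question, memory)          # final answer from memory
```

The `max_passes` budget mirrors the paper's point about controlling overhead: the RL framework it describes decides the number of rereading passes dynamically, whereas this sketch simply caps them.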



Get this paper in your agent:

hf papers read 2605.10268
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2605.10268 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2605.10268 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2605.10268 in a Space README.md to link it from this page.

Collections including this paper 1

Discussion (0)

No comments yet.
