Hugging Face Daily Papers · 3 min read

LoopUS: Recasting Pretrained LLMs into Looped Latent Refinement Models

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

Project page: https://thrillcrazyer.github.io/LoopUS
Code: https://github.com/Thrillcrazyer/LoopUS
arXiv:2605.11011


Authors: Taekhyun Park, Yongjae Lee, Dohee Kim, Hyerim Bae

AI-generated summary

LoopUS is a post-training framework that transforms pretrained LLMs into looped architectures for improved reasoning performance through latent-refinement and adaptive early-exiting mechanisms.

Abstract

Looped computation shows promise in improving the reasoning-oriented performance of LLMs by scaling test-time compute. However, existing approaches typically require either training recurrent models from scratch or applying disruptive retrofits, which involve substantial computational costs and may compromise pretrained capabilities. To address these limitations, we introduce Looped Depth Up-Scaling (LoopUS), a post-training framework that converts a standard pretrained LLM into a looped architecture. As a key technical contribution, LoopUS recasts the pretrained LLM into an encoder, a looped reasoning block, and a decoder. It operationalizes this latent-refinement architecture through four core components: (1) block decomposition, guided by staged representation dynamics; (2) an input-dependent selective gate to mitigate hidden-state drift; (3) random deep supervision for memory-efficient learning over long recursive horizons; and (4) a confidence head for adaptive early exiting. Collectively, these mechanisms transform a standard non-looped model into a looped form while stabilizing it against both computational bottlenecks and representation collapse. Through stable latent looping, LoopUS improves reasoning-oriented performance without extending the generated traces or requiring recurrent training from scratch. For more details, see https://thrillcrazyer.github.io/LoopUS
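The control flow described in the abstract can be sketched as follows. This is purely an illustrative toy, not the paper's implementation: LoopUS reuses actual pretrained transformer layers, whereas here the encoder, looped block, decoder, gate, and confidence head are stand-in random linear maps, and the specific forms of the gate and confidence head are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden size (illustrative only)

# Stand-ins for the three stages LoopUS carves out of a pretrained stack.
W_enc = rng.normal(scale=0.3, size=(D, D))   # encoder
W_loop = rng.normal(scale=0.3, size=(D, D))  # looped reasoning block
W_dec = rng.normal(scale=0.3, size=(D, D))   # decoder
w_gate = rng.normal(scale=0.3, size=D)       # input-dependent selective gate (assumed form)
w_conf = rng.normal(scale=0.3, size=D)       # confidence head (assumed form)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def looped_forward(x, max_loops=8, conf_threshold=0.9):
    h0 = np.tanh(W_enc @ x)  # encode the input once
    h = h0
    steps = 0
    for _ in range(max_loops):
        update = np.tanh(W_loop @ h)      # one latent refinement step
        g = sigmoid(w_gate @ h0)          # gate conditioned on the encoder state;
        h = g * update + (1.0 - g) * h    # interpolating curbs hidden-state drift
        steps += 1
        conf = sigmoid(w_conf @ h)        # confidence head for adaptive early exit
        if conf > conf_threshold:
            break
    return np.tanh(W_dec @ h), steps      # decode the refined latent

y, steps = looped_forward(rng.normal(size=D))
print(steps)  # number of latent refinement iterations actually run
```

The point of the sketch is the loop structure itself: compute is scaled at test time by iterating the middle block in latent space, while the gate and confidence head bound how far the hidden state can drift and how many iterations are spent per input.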

Community

I love this type of stuff.


Get this paper in your agent:

hf papers read 2605.11011
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper (6)

Browse 6 models citing this paper

Datasets citing this paper (0)

No datasets link this paper.

Cite arxiv.org/abs/2605.11011 in a dataset README.md to link it from this page.

Spaces citing this paper (0)

No Spaces link this paper.

Cite arxiv.org/abs/2605.11011 in a Space README.md to link it from this page.

Collections including this paper (0)

No collections include this paper.

Add this paper to a collection to link it from this page.

