Hugging Face Daily Papers · 4 min read

A Causal Language Modeling Detour Improves Encoder Continued Pretraining

Mirrored from Hugging Face Daily Papers for archival readability. Support the source by reading on the original site.

Papers
arxiv:2605.12438

A Causal Language Modeling Detour Improves Encoder Continued Pretraining

Published on May 12
· Submitted by Rian Touchent on May 13
Authors: Rian Touchent, Eric de la Clergerie (ALMAnaCH, Inria)

Abstract

AI-generated summary: Switching from Masked Language Modeling to Causal Language Modeling during encoder adaptation improves downstream performance on biomedical texts through dense supervision effects in lower transformer layers.

When adapting an encoder to a new domain, the standard approach is to continue training with Masked Language Modeling (MLM). We show that temporarily switching to Causal Language Modeling (CLM) followed by a short MLM decay improves downstream performance. On biomedical texts with ModernBERT, this CLM detour outperforms MLM baselines trained on identical data and compute across 8 French and 11 English biomedical tasks, by +1.2-2.8pp and +0.3-0.8pp respectively, depending on model size. We investigate the reasons for these gains. We find that CLM's dense supervision impacts low transformer layers (0-7) far more than MLM does. Freezing low layers during CLM eliminates the downstream benefit; freezing mid layers preserves it. The representational changes persist through the MLM decay phase, even when it matches the CLM phase in length, and they scale with model capacity. We release ModernCamemBERT-bio and ModernBERT-bio as state-of-the-art biomedical encoders in Base and Large sizes.
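The two-phase recipe the abstract describes (a CLM phase followed by a short MLM decay, plus the layer-freezing ablation) can be sketched as a simple step schedule. Note this is a minimal illustration: the phase split, masking rates, and layer count below are assumptions, not the paper's actual hyperparameters.

```python
def detour_schedule(step: int, total_steps: int, clm_fraction: float = 0.8):
    """Return (objective, mlm_mask_rate) for a given training step.

    Phase 1: causal language modeling (CLM) for the first `clm_fraction`
    of training; no tokens are masked.
    Phase 2: a short MLM "decay" phase; here the masking rate anneals
    linearly from 30% down to 15% (assumed values, not from the paper).
    """
    clm_steps = int(total_steps * clm_fraction)
    if step < clm_steps:
        return "clm", 0.0
    # Progress through the MLM decay phase, in [0, 1).
    progress = (step - clm_steps) / max(1, total_steps - clm_steps)
    return "mlm", round(0.30 - progress * (0.30 - 0.15), 4)


def frozen_mask(n_layers: int = 22, freeze_low=range(0, 8)):
    """Per-layer trainability for the freezing ablation: the paper reports
    that freezing low layers (0-7) during CLM removes the downstream gain.
    22 layers is an assumption for a Base-size encoder."""
    frozen = set(freeze_low)
    return {i: i not in frozen for i in range(n_layers)}
```

For example, `detour_schedule(0, 100)` yields the CLM phase with no masking, while steps past the 80% mark switch to MLM with a decaying mask rate; `frozen_mask()[0]` is `False`, mirroring the low-layer freezing ablation.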

Community

Hi @rntc, very cool idea! Btw, do you plan to release the code? I would like to try this with other models for domain adaptation 😃

·

Hi @stefan-it, thank you very much! I will try to release it ASAP. Until then, the code is a modification of this very cool codebase, which you can find here: https://github.com/JHU-CLSP/ettin-encoder-vs-decoder by @orionweller et al.

Paper submitter about 12 hours ago

Release of ModernBERT-bio and ModernCamemBERT-bio



Get this paper in your agent:

hf papers read 2605.12438
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 4

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 0

