A Single Layer to Explain Them All: Understanding Massive Activations in Large Language Models
Abstract
AI-generated summary
Massive activations in LLMs emerge consistently across model families at a specific layer, where the RMSNorm and FFN parameters jointly contribute to their formation; the resulting loss of hidden-representation diversity can be mitigated by a proposed method that improves performance across tasks.
We investigate the origins of massive activations in large language models (LLMs) and identify a specific layer, named the Massive Emergence Layer (ME Layer), that is consistently observed across model families and where massive activations first emerge before propagating to deeper layers through residual connections. We show that, within the ME Layer, the RMSNorm and FFN parameters jointly contribute to the emergence of massive activations. Once formed, the massive-activation token representation remains largely invariant across layers, reducing the diversity of hidden representations passed to the attention module. Motivated by this limitation, we propose a simple and effective method to reduce the rigidity of the massive-activation token. Our approach consistently improves LLM performance across multiple tasks, including instruction following and math reasoning, in both training-free and fine-tuning settings. Moreover, we show that our method mitigates attention sinks by selectively weakening their influence, elucidating their origin at the hidden-state level and shedding new light on principled mitigation strategies.
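To make the idea of locating the ME Layer concrete, here is a minimal sketch (not the paper's procedure): run a prompt through a decoder-only LLM, collect the per-layer hidden states, and flag the first layer whose peak activation magnitude jumps sharply relative to the previous layer. The model name and the 10x jump threshold are illustrative assumptions.

```python
# Sketch: find the layer where massive activations first appear by scanning
# per-layer hidden states for a sudden jump in the peak activation magnitude.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed; any decoder-only LLM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

inputs = tokenizer("Summer is warm. Winter is cold.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the embedding output; hidden_states[i] follows layer i.
prev_max = None
for i, h in enumerate(out.hidden_states):
    layer_max = h.abs().max().item()
    # Heuristic stand-in for the ME Layer criterion: flag the first layer whose
    # peak activation grows by more than 10x over the previous layer.
    if prev_max is not None and layer_max > 10 * prev_max:
        print(f"massive activations first emerge after layer {i}: max |h| = {layer_max:.1f}")
        break
    prev_max = layer_max
```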
Community
This paper studies why extremely large activations suddenly appear inside large language models. We identify a specific layer, called the Massive Emergence (ME) Layer, where these activations are first generated and then propagated through the network. We show that these dominant activations make the model’s internal representations overly rigid and contribute to attention sink behavior. To address this issue, we propose a simple method called WeMask, which selectively suppresses overly dominant dimensions in the hidden states. Our method consistently improves performance on instruction following, mathematical reasoning, and safety alignment tasks across both training-free and fine-tuning settings.
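For readers wondering what "selectively suppresses overly dominant dimensions" could look like in practice, below is a minimal, hypothetical sketch, not the paper's released WeMask implementation: a forward hook that rescales hidden-state dimensions whose magnitude far exceeds the per-token median. The hook location, the 50x ratio, and the rescaling rule are assumptions for illustration only.

```python
# Hypothetical sketch of damping dominant hidden-state dimensions via a forward hook.
import torch

def damp_dominant_dims(hidden, ratio=50.0, keep=5.0):
    # hidden: (batch, seq_len, d_model). Compare each dimension's magnitude to the
    # per-token median magnitude and rescale outliers down to `keep` x the median.
    med = hidden.abs().median(dim=-1, keepdim=True).values.clamp_min(1e-6)
    mask = hidden.abs() > ratio * med
    return torch.where(mask, torch.sign(hidden) * keep * med, hidden)

def hook(module, args, output):
    # Decoder layers in Hugging Face models typically return a tuple whose first
    # element is the hidden state; rewrite only that element.
    if isinstance(output, tuple):
        return (damp_dominant_dims(output[0]),) + output[1:]
    return damp_dominant_dims(output)

# Usage (layer index assumed): register on a layer after the suspected ME Layer.
# handle = model.model.layers[3].register_forward_hook(hook)
# ... run generation ...
# handle.remove()
```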