Computer Science > Machine Learning
Title:Self-Pruned Key-Value Attention: Learning When to Write by Predicting Future Utility
Abstract: Under modern test-time compute and agentic paradigms, language models process ever-longer sequences, and efficient text generation with transformer architectures is increasingly constrained by the memory footprint and bandwidth of the Key-Value (KV) cache. To address this limitation, we introduce Self-Pruned Key-Value Attention (SP-KV), a mechanism that predicts the future utility of KV pairs in order to reduce the size of the long-term KV cache. The strategy operates at a fine granularity: a lightweight utility predictor scores each key-value pair; recent KVs are always available through a local window, while older pairs are written to the cache and used in global attention only if their predicted utility exceeds a given threshold. The LLM and the utility predictor are adapted from pretrained LLM checkpoints and trained jointly end-to-end using only the next-token prediction loss.
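The abstract does not specify an implementation, but the gating it describes can be sketched as follows. This is a minimal illustration of the inference-time behavior, assuming a per-head linear scorer and a hard threshold; the names `UtilityPredictor`, `prune_kv_cache`, `threshold`, and `local_window` are our own, not the paper's. Note that a hard gate is not differentiable, so the end-to-end training the paper describes presumably relies on some differentiable relaxation not shown here.

```python
import torch
import torch.nn as nn

class UtilityPredictor(nn.Module):
    """Hypothetical lightweight scorer: maps each key to a scalar utility."""
    def __init__(self, head_dim: int):
        super().__init__()
        self.score = nn.Linear(head_dim, 1)

    def forward(self, keys: torch.Tensor) -> torch.Tensor:
        # keys: (batch, seq_len, head_dim) -> utilities: (batch, seq_len)
        return self.score(keys).squeeze(-1)

def prune_kv_cache(keys, values, utility, threshold=0.0, local_window=128):
    """Keep the most recent `local_window` KVs unconditionally; an older
    pair stays in the global cache only if its predicted utility exceeds
    `threshold`. Single-sequence illustration (batch size 1)."""
    keep = utility > threshold
    keep[:, -local_window:] = True           # local window is always retained
    idx = keep[0].nonzero(as_tuple=True)[0]  # positions that survive pruning
    return keys[:, idx], values[:, idx]

# Toy usage: score and prune a 1024-token cache for one head.
keys = torch.randn(1, 1024, 64)
values = torch.randn(1, 1024, 64)
predictor = UtilityPredictor(head_dim=64)
pruned_k, pruned_v = prune_kv_cache(keys, values, predictor(keys))
```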
Rather than enforcing a fixed compression ratio, SP-KV performs dynamic sparsification: the mechanism adapts to the input and typically shrinks the KV cache by $3$ to $10\times$, with longer sequences often being more compressible. This yields large improvements in memory usage and decoding speed, with little to no degradation of validation loss or of performance on a broad set of downstream tasks. Beyond serving as an effective KV-cache reduction mechanism, our method reveals structured layer- and head-specific sparsity patterns that can guide the design of hybrid local-global attention architectures.
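Because the gate is threshold-based rather than budget-based, the achieved compression ratio is a property of the input. Continuing the sketch above, a small helper (again our naming, not the paper's) makes that measurement explicit:

```python
def compression_ratio(keep_mask: torch.Tensor) -> torch.Tensor:
    """Total cache positions over retained positions, per sequence.
    The paper reports typical values of roughly 3x to 10x."""
    kept = keep_mask.sum(dim=1).clamp(min=1)
    return keep_mask.size(1) / kept.float()
```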
| Comments: | 28 pages, 8 figures, 8 tables |
| Subjects: | Machine Learning (cs.LG); Computation and Language (cs.CL) |
| ACM classes: | I.2.6; I.2.7 |
| Cite as: | arXiv:2605.14037 [cs.LG] |
| | (or arXiv:2605.14037v1 [cs.LG] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2605.14037 (arXiv-issued DOI via DataCite; pending registration) |