Does RAG Know When Retrieval Is Wrong? Diagnosing Context Compliance under Knowledge Conflict
Abstract: The Context-Compliance Regime in Retrieval-Augmented Generation (RAG) occurs when retrieved context dominates
the final answer even when it conflicts with the model's parametric knowledge. Accuracy alone does not reveal
how retrieved context causally shapes answers under such conflict. We introduce Context-Driven Decomposition
(CDD), a belief-decomposition probe that operates at inference time and serves as an intervention mechanism for
controlled retrieval conflict. Across Epi-Scale stress tests, TruthfulQA misconception injection, and cross-
model reruns, CDD exposes three patterns. P1: context compliance is measurable in an upper-bound adversarial
setting, where Standard RAG reaches 15.0% accuracy on TruthfulQA misconception injection (N=500). P2:
adversarial accuracy gains transfer across model families: CDD improves accuracy on Gemini-2.5-Flash and on
Claude Haiku/Sonnet/Opus, but rationale-answer causal coupling does not transfer. CDD reaches 64.1% mistake-
injection causal sensitivity on Gemini-2.5-Flash, while sensitivities for all three Claude variants fall in the
[-3%, +7%] range, suggesting that the Claude-side accuracy gains operate through a mechanism distinct from the
explicit conflict-resolution trace. P3: explicit conflict decomposition improves robustness under temporal
drift and noisy distractors, with CDD reaching 71.3% on temporal shifts and 69.9% on distractor evidence on the
full Epi-Scale adversarial benchmark. These three patterns identify context compliance as a structural axis
along which standard RAG can be probed and intervened on, distinct from questions of retrieval quality or
single-method robustness, and they motivate releasing Epi-Scale for systematic study across model families and
retrieval pipelines.
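The abstract describes CDD as a belief-decomposition probe: it separates what the model would answer from parametric knowledge alone from what it answers once conflicting retrieved context is injected, and treats divergence between the two as evidence of context compliance. The paper's exact procedure is not reproduced here; the following is a minimal sketch of that decomposition, where `ask_model` is a hypothetical stand-in for any LLM call (closed-book, or with a retrieved passage prepended):

```python
# Hypothetical sketch of a CDD-style context-compliance probe.
# `ask_model` is a toy stand-in, NOT the paper's model interface: it answers
# from a small parametric "memory" unless a context passage is supplied, in
# which case it naively complies with the passage.

def ask_model(question, context=None):
    """Stand-in LLM call. With no context, answer from parametric memory;
    with context, extract the answer the passage asserts."""
    parametric_memory = {"capital of France": "Paris"}
    if context is not None:
        # Naive context compliance: take whatever the passage asserts.
        return context.split("answer:")[-1].strip()
    return parametric_memory.get(question, "unknown")

def compliance_probe(question, conflicting_context):
    """Decompose the model's belief into a parametric answer and a
    context-conditioned answer; flag compliance when they diverge."""
    parametric_answer = ask_model(question)
    contextual_answer = ask_model(question, context=conflicting_context)
    return {
        "parametric": parametric_answer,
        "contextual": contextual_answer,
        "complied_with_context": parametric_answer != contextual_answer,
    }

result = compliance_probe("capital of France",
                          "Retrieved passage, answer: Lyon")
print(result)
# The injected passage conflicts with parametric memory, so the probe
# reports compliance: the contextual answer overrode the parametric one.
```

Running the same probe with a non-conflicting passage would yield `complied_with_context == False`, which is what makes divergence usable as an inference-time signal rather than an accuracy metric.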
| Comments: | 12 pages, 4 figures, 3 tables |
| Subjects: | Computation and Language (cs.CL); Artificial Intelligence (cs.AI) |
| Cite as: | arXiv:2605.14473 [cs.CL] (or arXiv:2605.14473v1 [cs.CL] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2605.14473 (arXiv-issued DOI via DataCite, pending registration) |