Uncovering Context Reliance in Unstructured Knowledge Editing
- URL: http://arxiv.org/abs/2602.19043v1
- Date: Sun, 22 Feb 2026 04:44:34 GMT
- Title: Uncovering Context Reliance in Unstructured Knowledge Editing
- Authors: Zisheng Zhou, Mengqi Zhang, Shiguang Wu, Xiaotian Ye, Chi Zhang, Zhumin Chen, Pengjie Ren
- Abstract summary: We identify Context Reliance as a critical failure mode of NTP-based approaches. We propose a simple yet effective COntext-INdependent editing framework (COIN). COIN reduces Context Reliance by 45.2% and outperforms strong baselines by 23.6% in editing success rate.
- Score: 33.04965014969181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Editing large language models (LLMs) with real-world, unstructured knowledge is essential for correcting and updating their internal parametric knowledge. In this work, we revisit the fundamental next-token prediction (NTP) objective as a candidate paradigm for unstructured editing. We identify Context Reliance as a critical failure mode of NTP-based approaches, where knowledge acquired from edited text becomes highly dependent on its preceding context, leading to recall failures when that context is absent during inference. This hypothesis is supported by our empirical validation that prepending the context during inference recovers knowledge recall. We further demonstrate theoretically that Context Reliance is an inherent consequence of gradient-based optimization, which tends to bind acquired knowledge to a specific aggregated contextual representation. To address this, we propose a simple yet effective COntext-INdependent editing framework (COIN), which encourages the model to focus on knowledge within a local scope rather than memorizing contextual patterns. Evaluations show that COIN reduces Context Reliance by 45.2% and outperforms strong baselines by 23.6% in editing success rate, highlighting the vital role of mitigating Context Reliance for robust editing.
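The context-recovery validation described in the abstract suggests a simple diagnostic. Below is a minimal sketch (the model name, edit context, and probe strings are hypothetical placeholders, not the paper's code or data) that queries an already-edited causal LM for an edited fact with and without the edit-time context prepended; under Context Reliance, recall succeeds only in the first call.

```python
# Minimal sketch of a Context Reliance probe: compare recall of an edited
# fact with and without the context that preceded it during editing.
# All strings and the model name are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for an already-edited model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

edit_context = "In a 2024 press release, Acme Corp announced that "  # seen during editing
probe = "the CEO of Acme Corp is"  # context-free query for the edited fact

def greedy_continuation(prompt: str, max_new_tokens: int = 8) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Under Context Reliance, only the context-prepended call recalls the edit.
print("with context:   ", greedy_continuation(edit_context + probe))
print("without context:", greedy_continuation(probe))
```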
Related papers
- When Does Context Help? Error Dynamics of Contextual Information in Large Language Models [64.88201012057822]
We present a unified theoretical framework for analyzing the effect of arbitrary contextual information in large language models. Our analysis characterizes contextual influence through output error dynamics. Experiments across ICL, retrieval-augmented generation, and memory evolution validate our theory and motivate a principled context selection strategy.
arXiv Detail & Related papers (2026-02-09T05:58:41Z) - EtCon: Edit-then-Consolidate for Reliable Knowledge Editing [85.20993502078899]
We propose Edit-then-Consolidate, a novel knowledge editing paradigm that aims to bridge the gap between theoretical knowledge editing methods and their real-world applicability. Our framework consistently improves editing reliability and generalization under real-world evaluations, while better preserving locality and pre-trained capabilities.
arXiv Detail & Related papers (2025-12-04T12:43:50Z) - Grounding Long-Context Reasoning with Contextual Normalization for Retrieval-Augmented Generation [57.97548022208733]
We show that seemingly superficial choices in key-value extraction can induce shifts in accuracy and stability. We introduce Contextual Normalization, a strategy that adaptively standardizes context representations before generation, as illustrated by the sketch below.
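As a rough illustration of the general idea (a sketch under assumptions, not the paper's exact procedure), standardizing pooled per-passage context embeddings before they are handed to the generator could look like this:

```python
# Hedged sketch: standardize pooled per-passage context representations to
# zero mean and unit variance so scale artifacts from retrieval do not
# dominate generation. The actual Contextual Normalization may differ.
import torch

def normalize_context(reprs: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """reprs: (num_passages, hidden_dim) pooled context embeddings."""
    mean = reprs.mean(dim=-1, keepdim=True)
    std = reprs.std(dim=-1, keepdim=True)
    return (reprs - mean) / (std + eps)

# Example: three retrieved-passage embeddings at wildly different scales.
ctx = torch.randn(3, 768) * torch.tensor([[0.1], [1.0], [10.0]])
print(normalize_context(ctx).std(dim=-1))  # ~1.0 for every passage
```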
arXiv Detail & Related papers (2025-10-15T06:28:25Z) - Context-Robust Knowledge Editing for Language Models [10.634048842551662]
We develop CHED, a benchmark designed to evaluate the context robustness of knowledge editing methods. Evaluations on CHED show that existing knowledge editing methods often fail when preceding contexts are present. We introduce CoRE, a knowledge editing method designed to strengthen context robustness.
arXiv Detail & Related papers (2025-05-29T03:11:53Z) - On the Loss of Context-awareness in General Instruction Fine-tuning [101.03941308894191]
We investigate the loss of context awareness after supervised fine-tuning. We find that the performance decline is associated with a bias toward different roles learned during conversational instruction fine-tuning. We propose a metric to identify context-dependent examples from general instruction fine-tuning datasets, sketched below.
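One plausible form such a metric could take (an assumption for illustration; the paper's definition may differ) is the drop in response loss when the context is prepended:

```python
# Hedged sketch of a context-dependence score: how much does the response
# log-loss fall when the conversational context is included? This is an
# illustration, not the paper's exact metric.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def response_loss(prefix: str, response: str) -> float:
    # Assumes the tokenizer splits cleanly at the prefix/response boundary;
    # a careful implementation would tokenize the two pieces separately.
    ids = tok(prefix + response, return_tensors="pt")["input_ids"]
    n_prefix = tok(prefix, return_tensors="pt")["input_ids"].shape[1]
    labels = ids.clone()
    labels[:, :n_prefix] = -100  # mask the prefix; score response tokens only
    with torch.no_grad():
        return lm(input_ids=ids, labels=labels).loss.item()

def context_dependence(context: str, instruction: str, response: str) -> float:
    # Large positive values mark examples whose responses need the context.
    return response_loss(instruction, response) - response_loss(context + instruction, response)
```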
arXiv Detail & Related papers (2024-11-05T00:16:01Z) - Robust and Scalable Model Editing for Large Language Models [75.95623066605259]
We propose EREN (Edit models by REading Notes) to improve the scalability and robustness of LLM editing.
Unlike existing techniques, it can integrate knowledge from multiple edits, and correctly respond to syntactically similar but semantically unrelated inputs.
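A minimal sketch in the spirit of a read-notes editor (naive token-overlap retrieval; names and thresholds are hypothetical, not the authors' implementation): edits are stored as plain-text notes, the most relevant note is retrieved per query, and queries matching no note fall back to the unedited model, leaving semantically unrelated inputs untouched.

```python
# Hedged sketch of a notes-based editor: store edits as notes, retrieve
# the best-matching note per query, and prepend it to the prompt.
from typing import Optional

notes: list[str] = []  # one note per accepted edit

def add_edit(note: str) -> None:
    notes.append(note)

def retrieve(query: str, threshold: float = 0.3) -> Optional[str]:
    # Naive token-overlap scoring; a real system would use dense retrieval.
    q = set(query.lower().split())
    best, best_score = None, 0.0
    for note in notes:
        score = len(q & set(note.lower().split())) / max(len(q), 1)
        if score > best_score:
            best, best_score = note, score
    return best if best_score >= threshold else None

def build_prompt(query: str) -> str:
    note = retrieve(query)
    # Condition on a note only when one is relevant, so unrelated
    # inputs are answered exactly as the unedited model would.
    return f"Note: {note}\nQuestion: {query}\nAnswer:" if note else query
```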
arXiv Detail & Related papers (2024-03-26T06:57:23Z) - Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic [51.967603572656266]
We introduce a consistent and theoretically grounded approach to annotating decompositional entailment.
We find that our new dataset, RDTE, has a substantially higher internal consistency (+9%) than prior decompositional entailment datasets.
We also find that training an RDTE-oriented entailment classifier via knowledge distillation and employing it in an entailment tree reasoning engine significantly improves both accuracy and proof quality.
arXiv Detail & Related papers (2024-02-22T18:55:17Z) - CIF-based Collaborative Decoding for End-to-end Contextual Speech Recognition [14.815422751109061]
We propose a continuous integrate-and-fire (CIF) based model that supports contextual biasing in a more controllable fashion.
An extra context processing network is introduced to extract contextual embeddings, integrate acoustically relevant context information and decode the contextual output distribution.
Our method brings relative character error rate (CER) reductions of 8.83%/21.13% and relative named entity character error rate (NE-CER) reductions of 40.14%/51.50% compared with a strong baseline.
arXiv Detail & Related papers (2020-12-17T09:40:11Z)