Representation Interventions Enable Lifelong Unstructured Knowledge Control
- URL: http://arxiv.org/abs/2511.20892v1
- Date: Tue, 25 Nov 2025 22:15:00 GMT
- Title: Representation Interventions Enable Lifelong Unstructured Knowledge Control
- Authors: Xuyuan Liu, Zhengzhang Chen, Xinshuai Dong, Yanchi Liu, Xujiang Zhao, Shengyu Chen, Haoyu Wang, Yujun Yan, Haifeng Chen
- Abstract summary: Large language models (LLMs) often produce incorrect or outdated content. Updating their knowledge efficiently and accurately without costly retraining is a major challenge. We introduce RILKE, a robust and scalable method that treats knowledge control as interventions within the model's representation space. During training, RILKE learns paraphrase-robust and edit-localized modules that limit each update to a low-dimensional subspace to minimize cross-edit interference. At inference, a query-adaptive router selects the appropriate module to guide the model's generation.
- Score: 54.86207134539453
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) often produce incorrect or outdated content. Updating their knowledge efficiently and accurately without costly retraining is a major challenge. This problem is especially hard for complex, unstructured knowledge in a lifelong setting, where many edits must coexist without interference. We introduce RILKE (Representation Intervention for Lifelong KnowledgE Control), a robust and scalable method that treats knowledge control as interventions within the model's representation space. Leveraging the expressiveness of the representation space, we identify two properties that enable RILKE to deliver fine-grained control over complex, unstructured knowledge while maintaining general utility with the base weights frozen. During training, RILKE learns paraphrase-robust and edit-localized modules that limit each update to a low-dimensional subspace to minimize cross-edit interference. At inference, a query-adaptive router selects the appropriate module to guide the model's generation. In evaluations on knowledge editing benchmarks with LLaMA and Qwen models, RILKE scales to large datasets, achieving high edit success and strong paraphrase generalization while preserving general utility with modest memory overhead. These results show that RILKE is an effective and scalable solution for lifelong knowledge control in LLMs.
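To make the mechanism concrete, the following is a minimal PyTorch sketch of the general pattern the abstract describes: each edit is stored as a low-rank intervention on the hidden representation (base weights frozen), and a query-adaptive router decides at inference which stored edit, if any, to apply. The class names, the rank, the cosine-similarity routing rule, and the 0.7 threshold are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankIntervention(nn.Module):
    """One edit module: confines its update to a rank-r subspace of the hidden state."""
    def __init__(self, hidden_dim: int, rank: int = 4):
        super().__init__()
        # Rank-r basis of the edit subspace and a learned offset inside it.
        self.down = nn.Linear(hidden_dim, rank, bias=False)
        self.up = nn.Linear(rank, hidden_dim, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Intervene only along the low-dimensional subspace; base weights stay frozen.
        return h + self.up(self.down(h))

class QueryAdaptiveRouter(nn.Module):
    """Picks the edit module whose stored key best matches the query representation."""
    def __init__(self, hidden_dim: int, threshold: float = 0.7):
        super().__init__()
        self.keys: list[torch.Tensor] = []            # one key per stored edit
        self.edit_modules: list[LowRankIntervention] = []
        self.threshold = threshold

    def add_edit(self, key: torch.Tensor, module: LowRankIntervention) -> None:
        self.keys.append(F.normalize(key, dim=-1))
        self.edit_modules.append(module)

    def forward(self, query_repr: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        if not self.keys:
            return h
        q = F.normalize(query_repr, dim=-1)
        sims = torch.stack([k @ q for k in self.keys])
        best = int(sims.argmax())
        # Fall back to the unedited representation when no stored edit is relevant.
        if sims[best] < self.threshold:
            return h
        return self.edit_modules[best](h)

# Toy usage: two edits stored as separate low-rank modules.
hidden_dim = 64
router = QueryAdaptiveRouter(hidden_dim)
for _ in range(2):
    key = torch.randn(hidden_dim)        # e.g. a pooled representation of the edit's paraphrases
    router.add_edit(key, LowRankIntervention(hidden_dim, rank=4))

h = torch.randn(hidden_dim)              # hidden state at the intervened layer
q = torch.randn(hidden_dim)              # representation of the incoming query
h_edited = router(q, h)
print(h_edited.shape)                    # torch.Size([64])
```

Read this way, lifelong control amounts to appending one small module and key per edit, so each update stays localized to its own subspace instead of being folded into shared weights.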
Related papers
- Consistency-Aware Editing for Entity-level Unlearning in Language Models [53.522931419965424]
We introduce a novel consistency-aware editing (CAE) framework for entity-level unlearning. CAE aggregates a diverse set of prompts related to a target entity, including its attributes, relations, and adversarial paraphrases. It then jointly learns a low-rank update guided by a consistency regularizer that aligns the editing directions across prompts.
arXiv Detail & Related papers (2025-12-19T15:18:07Z) - An Information-Theoretic Framework for Robust Large Language Model Editing [17.984683741974063]
Large Language Models (LLMs) have become indispensable tools in science, technology, and society. Errors or outdated information within these models can undermine their accuracy and restrict their safe deployment. We introduce a novel framework for editing LLMs, grounded in information bottleneck theory. We present the Information Bottleneck Knowledge Editor (IBKE), which leverages compact latent representations to guide gradient-based updates.
arXiv Detail & Related papers (2025-12-18T06:21:17Z) - RECALL: REpresentation-aligned Catastrophic-forgetting ALLeviation via Hierarchical Model Merging [33.22889542330089]
Internal representations in large language models (LLMs) serve as reliable proxies of learned knowledge. We propose RECALL, a representation-aware model merging framework for continual learning without access to historical data.
arXiv Detail & Related papers (2025-10-23T12:17:37Z) - MEMOIR: Lifelong Model Editing with Minimal Overwrite and Informed Retention for LLMs [76.28901550926021]
Existing methods for lifelong model editing compromise generalization, interfere with past edits, or fail to scale to long editing sequences. We propose MEMOIR, a novel scalable framework that injects knowledge through a residual memory while preserving the core capabilities of the pre-trained model. MEMOIR achieves state-of-the-art performance across reliability, generalization, and locality metrics, scaling to thousands of sequential edits with minimal forgetting.
arXiv Detail & Related papers (2025-06-09T16:16:42Z) - Prompting is not Enough: Exploring Knowledge Integration and Controllable Generation [89.65955788873532]
Open-domain question answering (OpenQA) represents a cornerstone in natural language processing (NLP). We propose a novel framework named GenKI, which aims to improve OpenQA performance by exploring Knowledge Integration and controllable Generation.
arXiv Detail & Related papers (2025-05-26T08:18:33Z) - KBM: Delineating Knowledge Boundary for Adaptive Retrieval in Large Language Models [69.99274367773997]
Large Language Models (LLMs) often struggle with dynamically changing knowledge and with handling unknown static information. Retrieval-Augmented Generation (RAG) is employed to tackle these challenges and has a significant impact on improving LLM performance. We propose a Knowledge Boundary Model (KBM) to express whether a given question is known or unknown, and to determine whether RAG needs to be triggered.
arXiv Detail & Related papers (2024-11-09T15:12:28Z) - ELDER: Enhancing Lifelong Model Editing with Mixture-of-LoRA [55.697627106315004]
Large language models (LLMs) require model editing to efficiently update specific knowledge within them and avoid factual errors. Previous approaches manage sequential edits by freezing original parameters and discretely allocating new parameters for each knowledge update. We propose ELDER, a novel approach to create a continuous association between data and adapters.
arXiv Detail & Related papers (2024-08-19T02:27:00Z) - Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning [30.554641380670315]
We introduce RECIPE, a RetriEval-augmented ContInuous Prompt lEarning method to boost editing efficacy and inference efficiency in lifelong learning. RECIPE first converts knowledge statements into short and informative continuous prompts, prefixed to the LLM's input query embedding. It further integrates the Knowledge Sentinel (KS), which acts as an intermediary to calculate a dynamic threshold. The retriever and prompt encoder are jointly trained to achieve the editing properties of reliability, generality, and locality. (A minimal sketch of this prompt-prefixing mechanism appears after this list.)
arXiv Detail & Related papers (2024-05-06T08:52:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.