MindBridge: Scalable and Cross-Model Knowledge Editing via Memory-Augmented Modality
- URL: http://arxiv.org/abs/2503.02701v1
- Date: Tue, 04 Mar 2025 15:17:57 GMT
- Title: MindBridge: Scalable and Cross-Model Knowledge Editing via Memory-Augmented Modality
- Authors: Shuaike Li, Kai Zhang, Qi Liu, Enhong Chen
- Abstract summary: Most existing methods overfit to specific models, causing edited knowledge to be discarded during each update. We introduce MindBridge, a scalable solution inspired by the low coupling between modality processing and LLMs in multi-modal models. MindBridge achieves superior performance even in editing tens of thousands of knowledge entries and can flexibly adapt to different LLMs.
- Score: 55.01380617388064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge editing is a technique for efficiently and accurately updating the knowledge of large language models (LLMs) to alleviate obsolescence and correct errors. However, most existing methods overfit to specific models, causing edited knowledge to be discarded during each LLM update and requiring frequent re-editing, which is particularly burdensome in today's rapidly evolving open-source community. To address this issue, we propose the problem of cross-model knowledge editing and introduce MindBridge, a scalable solution inspired by the low coupling between modality processing and LLMs in multi-modal models. MindBridge introduces the novel concept of memory modality, which encodes edited knowledge as an independent modality. It first performs LLM-agnostic pre-training of the memory modality and then integrates it with various LLMs. Extensive experiments on multiple LLMs and popular knowledge editing datasets demonstrate that MindBridge achieves superior performance even in editing tens of thousands of knowledge entries and can flexibly adapt to different LLMs. Our code is available at https://github.com/CrashBugger/MindBridge.
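The two-stage design described in the abstract can be sketched in miniature. This is a hedged illustration only, not the paper's implementation: the `MemoryEncoder` and `Projector` class names, dimensions, and random vectors standing in for learned weights are all assumptions. The point is the coupling structure: the memory encoder is trained once, LLM-agnostically, while a small per-LLM projector maps its "memory tokens" into each target model's embedding space, much as a frozen vision encoder is bridged to different LLMs in multi-modal systems.

```python
import random

class MemoryEncoder:
    """LLM-agnostic memory modality: trained once, reused across target LLMs."""
    def __init__(self, mem_dim=8):
        self.mem_dim = mem_dim

    def encode(self, fact, n_tokens=4):
        # Stand-in for a real encoder: deterministic pseudo-embedding per fact.
        r = random.Random(fact)
        return [[r.uniform(-1, 1) for _ in range(self.mem_dim)]
                for _ in range(n_tokens)]

class Projector:
    """Per-LLM adapter: a linear map from memory space to the LLM's hidden size."""
    def __init__(self, mem_dim, llm_dim, seed=0):
        r = random.Random(seed)
        self.W = [[r.uniform(-0.02, 0.02) for _ in range(llm_dim)]
                  for _ in range(mem_dim)]

    def __call__(self, mem_tokens):
        # Project each memory token into the target LLM's embedding space.
        return [[sum(t[i] * self.W[i][j] for i in range(len(t)))
                 for j in range(len(self.W[0]))]
                for t in mem_tokens]

encoder = MemoryEncoder()                          # stage 1: LLM-agnostic pre-training
mem = encoder.encode("Eiffel Tower -> located in: Paris")

soft_a = Projector(encoder.mem_dim, llm_dim=16)(mem)   # stage 2: attach to LLM A
soft_b = Projector(encoder.mem_dim, llm_dim=32)(mem)   # reuse the same memory for LLM B

print(len(soft_a), len(soft_a[0]))  # 4 tokens projected to width 16
print(len(soft_b), len(soft_b[0]))  # same 4 tokens projected to width 32
```

Because the encoded memory is model-independent, swapping the target LLM only requires a new projector, which is what makes the edits survive across model updates.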
Related papers
- Latent Knowledge Scalpel: Precise and Massive Knowledge Editing for Large Language Models [3.834827405473377]
Large Language Models (LLMs) often retain inaccurate or outdated information from pre-training, leading to incorrect predictions or biased outputs during inference. We introduce the Latent Knowledge Scalpel (LKS), an LLM editor that manipulates the latent knowledge of specific entities via a lightweight hypernetwork to enable precise and large-scale editing. Experiments conducted on Llama-2 and Mistral show that even with the number of simultaneous edits reaching 10,000, LKS effectively performs knowledge editing while preserving the general abilities of the edited LLMs.
arXiv Detail & Related papers (2025-08-01T03:51:43Z)
- Model Merging for Knowledge Editing [53.799891745131724]
Large Language Models (LLMs) require continuous updates to maintain accurate and current knowledge as the world evolves. Existing knowledge editing approaches offer various solutions for knowledge updating, but they often struggle with sequential editing scenarios. This paper proposes a two-stage framework combining robust supervised fine-tuning (R-SFT) with model merging for knowledge editing.
arXiv Detail & Related papers (2025-06-14T07:42:39Z)
- Editing as Unlearning: Are Knowledge Editing Methods Strong Baselines for Large Language Model Unlearning? [14.656572343761153]
Though editing and unlearning seem to be two distinct tasks, we find there is a tight connection between them. We evaluate whether knowledge editing techniques are strong baselines for LLM unlearning. We propose practical recipes, including self-improvement and query merging, to better adapt editing methods for unlearning applications.
arXiv Detail & Related papers (2025-05-26T11:39:56Z)
- AnyEdit: Edit Any Knowledge Encoded in Language Models [69.30638272162267]
We propose AnyEdit, a new autoregressive editing paradigm for large language models (LLMs).
It decomposes long-form knowledge into sequential chunks and iteratively edits the key token in each chunk, ensuring consistent and accurate outputs.
It outperforms strong baselines by 21.5% on benchmarks including UnKEBench, AKEW, and our new EditEverything dataset for long-form diverse-formatted knowledge.
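The decompose-then-edit loop described above can be sketched as follows. This is a hedged toy, not AnyEdit's method: `decompose` and `edit_long_form` are hypothetical names, chunking by word count is a simplifying assumption, and a plain token swap stands in for the real per-chunk model edit.

```python
def decompose(text, chunk_size=5):
    """Split long-form knowledge into sequential chunks (word-count chunking
    is an assumption here; the real paradigm operates on model outputs)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def edit_long_form(text, old_token, new_token, chunk_size=5):
    """Walk the chunks in order and apply the edit to each one, so the
    update is carried through the whole passage rather than a single span."""
    edited = []
    for chunk in decompose(text, chunk_size):
        # Stand-in for a per-chunk model edit: swap the key token if present.
        edited.append(chunk.replace(old_token, new_token))
    return " ".join(edited)

long_fact = ("The tower was designed by Gustave Eiffel and the tower "
             "opened in 1889 as the tallest structure in the world")
print(edit_long_form(long_fact, "1889", "1887"))
```

Editing chunk by chunk is what lets the paradigm handle long-form, diversely formatted knowledge instead of being limited to single-token answer spans.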
arXiv Detail & Related papers (2025-02-08T16:18:37Z)
- Mitigating Heterogeneous Token Overfitting in LLM Knowledge Editing [21.143790515287392]
Large language models (LLMs) have achieved remarkable performance on various natural language tasks.
They are trained on static corpora and their knowledge can become outdated quickly in the fast-changing world.
This motivates the development of knowledge editing (KE) to update specific knowledge in LLMs without changing unrelated others or compromising their pre-trained capabilities.
arXiv Detail & Related papers (2025-02-02T00:10:51Z)
- Resolving Editing-Unlearning Conflicts: A Knowledge Codebook Framework for Large Language Model Updating [61.70705744491162]
Large Language Models (LLMs) excel in natural language processing by encoding extensive human knowledge. Updating LLMs involves two key tasks simultaneously: unlearning to remove unwanted knowledge and editing to incorporate new information. We propose LOKA, a conflict-free framework for LLM updating based on a knowledge codebook.
arXiv Detail & Related papers (2025-01-31T20:48:46Z)
- ConKE: Conceptualization-Augmented Knowledge Editing in Large Language Models for Commonsense Reasoning [47.98788315789392]
ConceptEdit is a framework that integrates conceptualization and instantiation into the Knowledge Editing pipeline. We show that ConceptEdit successfully generates commonsense knowledge with improved plausibility compared to other baselines.
arXiv Detail & Related papers (2024-12-16T03:34:40Z)
- WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models [78.22291694903659]
Large language models (LLMs) need knowledge updates to keep pace with ever-growing world facts and to correct hallucinated responses. Where the updated knowledge resides in memories is a fundamental question for model editing. We propose WISE to bridge the gap between memories.
arXiv Detail & Related papers (2024-05-23T16:35:52Z)
- Editing Conceptual Knowledge for Large Language Models [65.38231526537476]
This paper pioneers the investigation of editing conceptual knowledge for Large Language Models (LLMs).
We construct a novel benchmark dataset ConceptEdit and establish a suite of new metrics for evaluation.
Experimental results reveal that, although existing editing methods can efficiently modify concept-level definitions to some extent, they also have the potential to distort the related instantial knowledge.
arXiv Detail & Related papers (2024-03-10T16:57:10Z)
- Learning to Edit: Aligning LLMs with Knowledge Editing [101.96620267293731]
We propose a Learning to Edit (LTE) framework, focusing on teaching large language models to apply updated knowledge to input questions.
LTE features a two-phase process, beginning with an Alignment Phase that fine-tunes LLMs on a meticulously curated parallel dataset to make reliable, in-scope edits.
We demonstrate LTE's superiority in knowledge editing performance, robustness in both batch and sequential editing, minimal interference on general tasks, and rapid editing speeds.
arXiv Detail & Related papers (2024-02-19T07:45:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.