AnyEdit: Edit Any Knowledge Encoded in Language Models
- URL: http://arxiv.org/abs/2502.05628v2
- Date: Thu, 27 Mar 2025 03:21:36 GMT
- Title: AnyEdit: Edit Any Knowledge Encoded in Language Models
- Authors: Houcheng Jiang, Junfeng Fang, Ningyu Zhang, Guojun Ma, Mingyang Wan, Xiang Wang, Xiangnan He, Tat-seng Chua
- Abstract summary: We propose AnyEdit, a new autoregressive editing paradigm for large language models (LLMs). It decomposes long-form knowledge into sequential chunks and iteratively edits the key token in each chunk, ensuring consistent and accurate outputs. It outperforms strong baselines by 21.5% on benchmarks including UnKEBench, AKEW, and our new EditEverything dataset for long-form diverse-formatted knowledge.
- Score: 69.30638272162267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) often produce incorrect or outdated information, necessitating efficient and precise knowledge updates. Current model editing methods, however, struggle with long-form knowledge in diverse formats, such as poetry, code snippets, and mathematical derivations. These failures stem from their reliance on editing a single token's hidden state, a restriction we term the "efficacy barrier". To overcome it, we propose AnyEdit, a new autoregressive editing paradigm. It decomposes long-form knowledge into sequential chunks and iteratively edits the key token in each chunk, ensuring consistent and accurate outputs. Theoretically, we ground AnyEdit in the Chain Rule of Mutual Information, showing its ability to update any knowledge within LLMs. Empirically, it outperforms strong baselines by 21.5% on benchmarks including UnKEBench, AKEW, and our new EditEverything dataset for long-form diverse-formatted knowledge. Additionally, AnyEdit serves as a plug-and-play framework, enabling current editing methods to update knowledge with arbitrary length and format, significantly advancing the scope and practicality of LLM knowledge editing.
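To make the paradigm concrete, here is a minimal sketch of the chunk-then-edit loop the abstract describes. The `editor` callable and the fixed-width chunking are illustrative assumptions standing in for the paper's single-token editing method and its actual chunking scheme:

```python
from typing import Callable, List

def split_into_chunks(text: str, size: int) -> List[str]:
    """Naive fixed-width chunking; AnyEdit's actual chunking is semantic."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def anyedit_update(model, editor: Callable, prompt: str, target: str,
                   size: int = 64):
    """Autoregressive editing loop in the spirit of AnyEdit.

    `editor(model, context, chunk)` stands in for any single-hidden-state
    editing method that makes `model` emit `chunk` after `context`; the
    point sketched here is the iterative, chunk-wise schedule.
    """
    context = prompt
    for chunk in split_into_chunks(target, size):
        model = editor(model, context, chunk)  # edit this chunk's key token
        context += chunk  # later chunks condition on the edited prefix
    return model
```

Because each chunk is edited conditioned on the already-edited prefix, long-form targets never hinge on a single token's hidden state, which is why any existing single-token editor can be plugged in as `editor`.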
Related papers
- Understanding the Limits of Lifelong Knowledge Editing in LLMs [59.12302872055081]
We bridge lifelong knowledge editing research to real-world edits at a practically relevant scale.
We first introduce WikiBigEdit, a large-scale benchmark of real-world Wikidata edits.
In its first instance, it includes over 500K question-answer pairs for knowledge editing.
arXiv Detail & Related papers (2025-03-07T18:45:42Z)
- MindBridge: Scalable and Cross-Model Knowledge Editing via Memory-Augmented Modality [55.01380617388064]
Most existing methods overfit to specific models, causing edited knowledge to be discarded during each update.
We introduce MindBridge, a scalable solution inspired by the low coupling between modality processing and LLMs in multi-modal models.
MindBridge achieves superior performance even in editing tens of thousands of knowledge entries and can flexibly adapt to different LLMs.
arXiv Detail & Related papers (2025-03-04T15:17:57Z)
- Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning [29.20378857521518]
Large language models (LLMs) have achieved remarkable performance on various natural language tasks.
They are trained on static corpora and their knowledge can become outdated quickly in the fast-changing world.
Previous efforts often sought to update a small number of parameters in specific layer(s) of an LLM.
We propose BaFT to manage different types of knowledge in an adaptive way, thereby achieving a better editing-locality trade-off.
arXiv Detail & Related papers (2025-03-01T02:34:44Z)
- K-Edit: Language Model Editing with Contextual Knowledge Awareness [71.73747181407323]
Knowledge-based model editing enables precise modifications to the weights of large language models.
We present K-Edit, an effective approach to generating contextually consistent knowledge edits.
arXiv Detail & Related papers (2025-02-15T01:35:13Z)
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models [65.93240009586351]
Large language models (LLMs) often exhibit hallucinations due to incorrect or outdated knowledge.
We introduce AlphaEdit, a novel solution that projects perturbation onto the null space of the preserved knowledge before applying it to the parameters.
We theoretically prove that this projection ensures the output of post-edited LLMs remains unchanged when queried about the preserved knowledge.
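The projection itself is compact to write down. In the sketch below, preserved-knowledge keys are the columns of `K0`, and the perturbation `delta` is multiplied by a projector onto the orthogonal complement of their span, so the edited weights act identically on preserved keys; the SVD construction is our illustrative choice, not necessarily the paper's exact recipe:

```python
import numpy as np

def null_space_projector(K0: np.ndarray, tol: float = 1e-8) -> np.ndarray:
    """Projector onto the orthogonal complement of span(columns of K0).

    For any perturbation delta, (delta @ P) @ K0 == 0, so weights edited
    with the projected perturbation act unchanged on the preserved keys.
    """
    U, s, _ = np.linalg.svd(K0, full_matrices=False)
    U = U[:, s > tol]                      # orthonormal basis of the key span
    return np.eye(K0.shape[0]) - U @ U.T   # remove components that hit K0

# Toy check: a random perturbation, once projected, leaves preserved keys alone.
d, n = 16, 4
K0 = np.random.randn(d, n)                 # columns = keys to preserve
delta = np.random.randn(d, d)              # raw edit perturbation to W
P = null_space_projector(K0)
assert np.allclose((delta @ P) @ K0, 0.0, atol=1e-8)
```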
arXiv Detail & Related papers (2024-10-03T10:06:27Z)
- How Well Can Knowledge Edit Methods Edit Perplexing Knowledge? [18.022428746019582]
Large language models (LLMs) have demonstrated remarkable capabilities, but updating their knowledge post-training remains a critical challenge.
We introduce the concept of "perplexingness": the degree to which new knowledge conflicts with an LLM's learned conceptual hierarchies and categorical relationships.
Our analysis reveals that edits involving more abstract concepts (hypernyms) generally exhibit higher perplexingness and are more resistant to modification than their specific counterparts (hyponyms).
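As a rough, illustrative proxy for this notion (our assumption, not the authors' metric), one can score a candidate edit by its average token negative log-likelihood under the unedited model: facts that clash with the model's prior hierarchies should tend to score higher.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def fact_nll(model_name: str, fact: str) -> float:
    """Average negative log-likelihood of a fact under the model: a crude
    proxy for how 'perplexing' the model finds a candidate edit."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    ids = tok(fact, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token NLL
    return float(loss)

# A hypernym-level edit like this one conflicts with broad categorical
# knowledge, so it should tend to score higher than a narrow hyponym edit.
print(fact_nll("gpt2", "A penguin is a type of mammal."))
```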
arXiv Detail & Related papers (2024-06-25T03:41:02Z)
- Has this Fact been Edited? Detecting Knowledge Edits in Language Models [5.260519479124422]
Knowledge editing methods (KEs) can update language models' obsolete or inaccurate knowledge learned from pre-training.
Knowing whether a generated output is based on edited knowledge or first-hand knowledge from pre-training can increase users' trust in generative models.
We propose a novel task: detecting edited knowledge in language models.
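As a concrete framing of the task (a baseline of our own devising, not the paper's detector), one could train a linear probe on hidden activations to separate answers grounded in edited knowledge from those grounded in pre-training:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: feats[i] is a hidden-state feature vector extracted
# while the model answers query i; labels[i] says whether that answer
# relies on edited (1) or pre-trained (0) knowledge.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 768))      # stand-in activations
labels = rng.integers(0, 2, size=200)    # stand-in edit labels

probe = LogisticRegression(max_iter=1000).fit(feats[:150], labels[:150])
print("held-out accuracy:", probe.score(feats[150:], labels[150:]))
# Random features should score near chance; real activations carrying an
# "edit signature" would push this clearly above 0.5.
```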
arXiv Detail & Related papers (2024-05-04T22:02:24Z)
- Learning to Edit: Aligning LLMs with Knowledge Editing [101.96620267293731]
We propose a Learning to Edit (LTE) framework, focusing on teaching large language models to apply updated knowledge to input questions.
LTE features a two-phase process: (i) the Alignment Phase, which fine-tunes LLMs on a meticulously curated parallel dataset to make reliable, in-scope edits, and (ii) the Inference Phase, which employs a retrieval-based mechanism to apply relevant edits at query time.
We demonstrate LTE's superiority in knowledge editing performance, robustness in both batch and sequential editing, minimal interference on general tasks, and rapid editing speeds.
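The Inference Phase lends itself to a short sketch. Assuming the aligned model follows in-context edit instructions, retrieval can be as simple as cosine similarity over stored edit embeddings; the function names, prompt format, and retriever below are all illustrative assumptions:

```python
import numpy as np

def lte_answer(question, edit_memory, embed, llm, k=3):
    """Retrieve the k most relevant stored edits and prepend them in-context.

    `edit_memory` is a list of (edit_text, embedding) pairs; `embed` and
    `llm` are caller-supplied. The alignment-phase fine-tuning is assumed
    to have taught the model to honor such in-context updates.
    """
    q = embed(question)
    def cos(e):
        return float(q @ e) / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-12)
    ranked = sorted(edit_memory, key=lambda pair: -cos(pair[1]))[:k]
    context = "\n".join(text for text, _ in ranked)
    return llm(f"Updated facts:\n{context}\n\nQuestion: {question}")
```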
arXiv Detail & Related papers (2024-02-19T07:45:17Z)
- DeepEdit: Knowledge Editing as Decoding with Constraints [118.78008395850888]
How to edit knowledge used in multi-step reasoning has become a major challenge in the knowledge editing (KE) of large language models (LLMs).
We propose a new KE framework, DEEPEDIT, which enhances LLMs' ability to generate coherent reasoning chains with new knowledge through depth-first search (see the sketch below).
In addition to DEEPEDIT, we propose two new KE benchmarks: MQUAKE-2002 and MQUAKE-HARD, which provide more precise and challenging assessments of KE approaches.
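The depth-first search can be sketched abstractly: candidates are proposed per reasoning step, pruned by knowledge-consistency constraints, and the search backtracks from dead ends. All four callables are illustrative stand-ins rather than DEEPEDIT's actual interfaces:

```python
def deepedit_dfs(steps, propose, satisfies, is_answer, depth=0, max_depth=6):
    """Decoding-as-DFS with constraints, in the spirit of DEEPEDIT.

    `propose(steps)` yields candidate next reasoning steps, `satisfies`
    checks a candidate against the edited knowledge, and `is_answer`
    detects a finished chain; the search backtracks from dead ends.
    """
    if is_answer(steps):
        return steps
    if depth >= max_depth:
        return None                       # depth limit: backtrack
    for step in propose(steps):
        if not satisfies(step, steps):
            continue                      # violates a constraint: prune
        found = deepedit_dfs(steps + [step], propose, satisfies,
                             is_answer, depth + 1, max_depth)
        if found is not None:
            return found
    return None                           # no consistent continuation here
```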
arXiv Detail & Related papers (2024-01-19T03:48:27Z)
- EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models [45.70959260613425]
We propose EasyEdit, an easy-to-use knowledge editing framework for Large Language Models.
It supports various cutting-edge knowledge editing approaches and can be readily applied to many well-known LLMs.
We report the knowledge editing results on LLaMA-2 with EasyEdit, demonstrating that knowledge editing surpasses traditional fine-tuning.
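EasyEdit is a real open-source toolkit (https://github.com/zjunlp/EasyEdit). The snippet below follows the usage pattern from its README as we recall it; treat the exact class names, file paths, and signatures as assumptions to check against the current documentation:

```python
# Usage pattern recalled from the EasyEdit README; verify against the docs.
from easyeditor import BaseEditor, ROMEHyperParams

# Hyperparameter files ship with the repo; this path is illustrative.
hparams = ROMEHyperParams.from_hparams('./hparams/ROME/llama-7b')
editor = BaseEditor.from_hparams(hparams)

metrics, edited_model, _ = editor.edit(
    prompts=['Who was the architect of the Eiffel Tower?'],
    ground_truth=['Gustave Eiffel'],
    target_new=['Stephen Sauvestre'],   # the new fact to inject
    subject=['Eiffel Tower'],
)
print(metrics)  # per-edit reliability / generalization / locality scores
```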
arXiv Detail & Related papers (2023-08-14T16:52:42Z)