Rethinking the Residual Distribution of Locate-then-Editing Methods in Model Editing
- URL: http://arxiv.org/abs/2502.03748v1
- Date: Thu, 06 Feb 2025 03:20:17 GMT
- Title: Rethinking the Residual Distribution of Locate-then-Editing Methods in Model Editing
- Authors: Xiaopeng Li, Shanwen Wang, Shasha Li, Shezheng Song, Bin Ji, Jun Ma, Jie Yu
- Abstract summary: We show that locate-then-edit methods degrade the original knowledge of Large Language Models (LLMs).
We propose the Boundary Layer UpdatE (BLUE) strategy to enhance locate-then-edit methods.
- Score: 11.291085182725597
- Abstract: Model editing is a powerful technique for updating the knowledge of Large Language Models (LLMs). Locate-then-edit methods are a popular class of approaches that first identify the critical layers storing knowledge, then compute the residual of the last critical layer based on the edited knowledge, and finally perform multi-layer updates using a least-squares solution by evenly distributing the residual from the first critical layer to the last. Although these methods achieve promising results, they have been shown to degrade the original knowledge of LLMs. We argue that residual distribution leads to this issue. To explore this, we conduct a comprehensive analysis of residual distribution in locate-then-edit methods from both empirical and theoretical perspectives, revealing that residual distribution introduces editing errors, leading to inaccurate edits. To address this issue, we propose the Boundary Layer UpdatE (BLUE) strategy to enhance locate-then-edit methods. Sequential batch editing experiments on three LLMs and two datasets demonstrate that BLUE not only delivers an average performance improvement of 35.59%, significantly advancing the state of the art in model editing, but also enhances the preservation of LLMs' general capabilities. Our code is available at https://github.com/xpq-tech/BLUE.
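To make the pipeline concrete, here is a minimal NumPy sketch of the two operations the abstract describes: a closed-form least-squares weight update for a single layer, and an even split of the last critical layer's residual across all critical layers. The function names, tensor shapes, and the covariance term C are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def least_squares_update(K, R, C):
    """One layer's closed-form least-squares update: find delta_W such that
    delta_W @ K approximates the target residuals R while perturbing previously
    stored keys (summarized by the covariance matrix C) as little as possible.
    Illustrative shapes: K (d_k, n), R (d_v, n), C (d_k, d_k)."""
    return R @ K.T @ np.linalg.inv(C + K @ K.T)

def spread_residual(residual, critical_layers):
    """Evenly distribute the residual computed at the last critical layer
    over all critical layers, as conventional locate-then-edit methods do."""
    share = residual / len(critical_layers)
    return {layer: share for layer in critical_layers}
```

The abstract argues that this even distribution is what introduces editing errors; judging by its name, BLUE instead computes updates directly at the boundary critical layers, though the abstract does not spell out the mechanism.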
Related papers
- The Mirage of Model Editing: Revisiting Evaluation in the Wild [70.17413507444704]
We study the effectiveness of model editing in question answering applications.
Our single editing experiments indicate that current editing methods perform substantially worse than previously reported.
Our analysis provides a fundamental reexamination of both the real-world applicability of existing model editing methods and their evaluation practices.
arXiv Detail & Related papers (2025-02-16T15:57:55Z) - AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models [65.93240009586351]
Large language models (LLMs) often exhibit hallucinations due to incorrect or outdated knowledge.
We introduce AlphaEdit, a novel solution that projects perturbation onto the null space of the preserved knowledge before applying it to the parameters.
We theoretically prove that this projection ensures the output of post-edited LLMs remains unchanged when queried about the preserved knowledge.
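For intuition, below is a minimal NumPy sketch of a null-space projection of the kind described above: K0 stands for the keys of the preserved knowledge, and any perturbation multiplied by the resulting projector leaves the layer's outputs on those keys unchanged. The names, shapes, and SVD-based construction are illustrative assumptions, not AlphaEdit's actual code.

```python
import numpy as np

def null_space_projector(K0, tol=1e-6):
    """Build a projector P onto the null space of K0 @ K0.T, where the columns
    of K0 are preserved-knowledge keys. Since P @ K0 is (numerically) zero, an
    update of the form delta_W @ P adds nothing to the layer's output on the
    preserved keys: (W + delta_W @ P) @ K0 == W @ K0."""
    U, S, _ = np.linalg.svd(K0 @ K0.T)
    null_basis = U[:, S < tol]           # singular vectors with (near-)zero singular values
    return null_basis @ null_basis.T     # projection matrix, shape (d_k, d_k)

# Illustrative usage: project a raw perturbation before applying it to the weights.
# W_edited = W + delta_W @ null_space_projector(K0)
```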
arXiv Detail & Related papers (2024-10-03T10:06:27Z) - ELDER: Enhancing Lifelong Model Editing with Mixture-of-LoRA [55.697627106315004]
Large language models (LLMs) require model editing to efficiently update specific knowledge within them and avoid factual errors.
Previous approaches manage sequential edits by freezing original parameters and discretely allocating new parameters for each knowledge update.
We propose ELDER, a novel approach to create a continuous association between data and adapters.
arXiv Detail & Related papers (2024-08-19T02:27:00Z) - Perturbation-Restrained Sequential Model Editing [33.51709226068619]
Current model editing methods compromise the general abilities of large language models (LLMs) as the number of edits increases.
We propose a framework termed Perturbation Restraint on Upper bouNd for Editing (PRUNE).
PRUNE can preserve considerable general abilities while maintaining the editing performance effectively in sequential model editing.
arXiv Detail & Related papers (2024-05-27T04:40:56Z) - Editing Conceptual Knowledge for Large Language Models [65.38231526537476]
This paper pioneers the investigation of editing conceptual knowledge for Large Language Models (LLMs).
We construct a novel benchmark dataset ConceptEdit and establish a suite of new metrics for evaluation.
Experimental results reveal that, although existing editing methods can efficiently modify concept-level definitions to some extent, they also have the potential to distort the related instantial knowledge.
arXiv Detail & Related papers (2024-03-10T16:57:10Z) - The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse [58.0132400208411]
Even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.
Benchmarking Large Language Models after each edit is impractically time-consuming and resource-intensive.
We have utilized GPT-3.5 to develop a new dataset, HardEdit, based on hard cases.
arXiv Detail & Related papers (2024-02-15T01:50:38Z) - Knowledge Editing on Black-box Large Language Models [37.17131278142237]
Knowledge editing aims to efficiently and precisely modify the behavior of large language models (LLMs) to update specific knowledge.
Current research primarily focuses on editing white-box LLMs, overlooking an important scenario: black-box LLM editing.
We introduce knowledge editing (KE) on black-box LLMs, propose a comprehensive evaluation framework to overcome the limitations of existing evaluations, and present postEdit, a framework that incorporates edits by post-processing the model's output.
Experiments and analysis on two benchmarks demonstrate that postEdit outperforms all baselines and achieves strong generalization.
arXiv Detail & Related papers (2024-02-13T17:59:34Z) - Emptying the Ocean with a Spoon: Should We Edit Models? [8.545919917068273]
We call into question the recently popularized method of direct model editing as a means of correcting factual errors in LLM generations.
We contrast model editing with three similar but distinct approaches that pursue better-defined objectives.
arXiv Detail & Related papers (2023-10-18T13:38:03Z) - Editing Large Language Models: Problems, Methods, and Opportunities [51.903537096207]
This paper embarks on a deep exploration of the problems, methods, and opportunities related to model editing for LLMs.
We provide an exhaustive overview of the task definition and challenges associated with model editing, along with an in-depth empirical analysis of the most progressive methods currently at our disposal.
Our objective is to provide valuable insights into the effectiveness and feasibility of each editing technique, thereby assisting the community in making informed decisions on the selection of the most appropriate method for a specific task or context.
arXiv Detail & Related papers (2023-05-22T16:00:00Z)