Evaluating the Ripple Effects of Knowledge Editing in Language Models
- URL: http://arxiv.org/abs/2307.12976v2
- Date: Wed, 20 Dec 2023 11:52:41 GMT
- Title: Evaluating the Ripple Effects of Knowledge Editing in Language Models
- Authors: Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, Mor Geva
- Abstract summary: We present a diagnostic benchmark of 5K factual edits, capturing a variety of types of ripple effects.
We evaluate prominent editing methods on RippleEdits, showing that current methods fail to introduce consistent changes in the model's knowledge.
- Score: 47.6531309439867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern language models capture a large body of factual knowledge. However,
some facts can be incorrectly induced or become obsolete over time, resulting
in factually incorrect generations. This has led to the development of various
editing methods that allow updating facts encoded by the model. Evaluation of
these methods has primarily focused on testing whether an individual fact has
been successfully injected, and if similar predictions for other subjects have
not changed. Here we argue that such evaluation is limited, since injecting one
fact (e.g. ``Jack Depp is the son of Johnny Depp'') introduces a ``ripple
effect'' in the form of additional facts that the model needs to update
(e.g. ``Jack Depp is the sibling of Lily-Rose Depp''). To address this issue, we
propose a novel set of evaluation criteria that consider the implications of an
edit on related facts. Using these criteria, we then construct RippleEdits, a
diagnostic benchmark of 5K factual edits, capturing a variety of types of
ripple effects. We evaluate prominent editing methods on RippleEdits, showing
that current methods fail to introduce consistent changes in the model's
knowledge. In addition, we find that a simple in-context editing baseline
obtains the best scores on our benchmark, suggesting a promising research
direction for model editing.
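To make the setup concrete, here is a minimal sketch (not the RippleEdits code) of checking ripple effects with an in-context editing baseline; the `generate` callable and the "Imagine that ..." prompt template are illustrative assumptions standing in for a real model and the paper's actual prompts.

```python
# Minimal sketch of ripple-effect evaluation with an in-context editing baseline.
# Assumptions: `generate` stands in for any LLM text-generation call, and the
# "Imagine that ..." prompt template is illustrative, not the paper's template.
from typing import Callable, List, Tuple

def in_context_edit(generate: Callable[[str], str], edit: str, query: str) -> str:
    """In-context editing baseline: prepend the new fact to the query."""
    return generate(f"Imagine that {edit}\n{query}")

def ripple_accuracy(generate: Callable[[str], str],
                    edit: str,
                    ripple_tests: List[Tuple[str, str]]) -> float:
    """Fraction of ripple-effect queries whose gold answer appears in the output."""
    hits = sum(gold.lower() in in_context_edit(generate, edit, query).lower()
               for query, gold in ripple_tests)
    return hits / len(ripple_tests)

if __name__ == "__main__":
    # Toy stand-in "model" so the sketch runs end to end.
    def toy_generate(prompt: str) -> str:
        return "Lily-Rose Depp" if "sibling" in prompt else "unknown"

    edit = "Jack Depp is the son of Johnny Depp."
    tests = [("Who is the sibling of Jack Depp?", "Lily-Rose Depp")]
    print(ripple_accuracy(toy_generate, edit, tests))  # 1.0
```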
Related papers
- Resolving UnderEdit & OverEdit with Iterative & Neighbor-Assisted Model Editing [7.752740499342269]
Large Language Models (LLMs) are used in various downstream language tasks.
Both retraining and fine-tuning the model can be costly.
Model editing offers an efficient and effective alternative, applying a single update to only a key subset of model parameters.
We propose iterative model editing, based on our hypothesis that a single parameter update is often insufficient.
Our methods effectively reduce UnderEdit by up to 38 percentage points and OverEdit by up to 6 percentage points across multiple model editing algorithms, LLMs, and benchmark datasets.
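A rough sketch of the iterative idea, under assumptions: `apply_edit` and `recalls` are hypothetical placeholders for a concrete editing algorithm and a fact probe, not this paper's implementation.

```python
# Iterative model editing, sketched: re-apply a parameter update until the target
# fact is recalled (addressing UnderEdit), then probe neighboring facts for
# collateral changes (OverEdit). `apply_edit` and `recalls` are hypothetical.
import random
from typing import Callable, List, Tuple

def iterative_edit(model,
                   apply_edit: Callable,
                   recalls: Callable,
                   target_fact: str,
                   neighbor_facts: List[str],
                   max_rounds: int = 5) -> Tuple[object, int, List[str]]:
    rounds = 0
    while rounds < max_rounds and not recalls(model, target_fact):
        model = apply_edit(model, target_fact)   # one more single-update pass
        rounds += 1
    over_edits = [f for f in neighbor_facts if not recalls(model, f)]
    return model, rounds, over_edits

if __name__ == "__main__":
    # Toy "model": a set of facts; each update only has a 50% chance of sticking.
    random.seed(0)
    toy_apply = lambda m, f: m | {f} if random.random() < 0.5 else m
    toy_recall = lambda m, f: f in m
    model, rounds, over = iterative_edit(
        {"Paris is the capital of France"}, toy_apply, toy_recall,
        "Jack Depp is the son of Johnny Depp",
        ["Paris is the capital of France"])
    print(rounds, over)  # rounds of updates needed, neighbor facts damaged (none here)
```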
arXiv Detail & Related papers (2025-03-14T21:53:12Z)
- The Mirage of Model Editing: Revisiting Evaluation in the Wild [70.17413507444704]
We study the effectiveness of model editing in question answering applications.
Our single editing experiments indicate that current editing methods perform substantially worse than previously reported.
Our analysis provides a fundamental reexamination of both the real-world applicability of existing model editing methods and their evaluation practices.
arXiv Detail & Related papers (2025-02-16T15:57:55Z)
- Should We Really Edit Language Models? On the Evaluation of Edited Language Models [15.63231238452797]
Existing editing methods lead to inevitable performance deterioration on general benchmarks.
Instruction-tuned models are more robust to editing, showing less performance drop on general knowledge after editing.
Our findings indicate that current editing methods are only suitable for small-scale knowledge updates within language models.
arXiv Detail & Related papers (2024-10-24T14:36:48Z)
- Fundamental Problems With Model Editing: How Should Rational Belief Revision Work in LLMs? [61.68363765350178]
This paper critiques the standard formulation of the model editing problem and proposes a formal testbed for model editing research.
We first describe 12 open problems with model editing, based on challenges with (1) defining the problem, (2) developing benchmarks, and (3) assuming LLMs have editable beliefs in the first place.
Next, we introduce a semi-synthetic dataset for model editing based on Wikidata, where we can evaluate edits against labels given by an idealized Bayesian agent.
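Reading "idealized Bayesian agent" as ordinary Bayesian belief revision, a toy illustration might look like the following; this is an interpretation for illustration only, not the paper's dataset construction.

```python
# Toy idealized Bayesian agent: start from a prior over candidate answers,
# apply Bayes' rule for each piece of evidence, and label with the
# maximum-a-posteriori answer. Purely illustrative numbers.
from typing import Dict, List

def bayesian_label(prior: Dict[str, float], evidence: List[Dict[str, float]]) -> str:
    """prior: answer -> P(answer); evidence: per-observation likelihoods P(obs | answer)."""
    posterior = dict(prior)
    for likelihood in evidence:
        posterior = {a: posterior[a] * likelihood.get(a, 0.0) for a in posterior}
        z = sum(posterior.values()) or 1.0
        posterior = {a: p / z for a, p in posterior.items()}
    return max(posterior, key=posterior.get)

if __name__ == "__main__":
    prior = {"Johnny Depp": 0.5, "someone else": 0.5}
    evidence = [{"Johnny Depp": 0.9, "someone else": 0.1}]  # one edit observed
    print(bayesian_label(prior, evidence))  # Johnny Depp
```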
arXiv Detail & Related papers (2024-06-27T17:33:03Z)
- Outdated Issue Aware Decoding for Reasoning Questions on Edited Knowledge [93.54427119091174]
We propose outDated ISsue aware deCOding to enhance the performance of edited models on reasoning questions.
We capture the difference in the probability distribution between the original and edited models.
We amplify this difference in the edited model's token predictions to alleviate the outdated issue.
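A minimal sketch of that decoding idea follows; the log-ratio form and the `alpha` weight are assumptions, not necessarily the paper's exact formulation.

```python
# Sketch of outdated-issue-aware decoding: score next tokens by the edited
# model's log-probability plus an amplified (edited vs. original) difference,
# so tokens favored by the edit win over outdated ones. The log-ratio form and
# `alpha` are assumptions, not necessarily the paper's exact formulation.
import math
from typing import Dict

def outdated_aware_scores(p_orig: Dict[str, float],
                          p_edit: Dict[str, float],
                          alpha: float = 1.0) -> Dict[str, float]:
    eps = 1e-12
    return {t: math.log(p_edit[t] + eps)
               + alpha * (math.log(p_edit[t] + eps) - math.log(p_orig.get(t, eps) + eps))
            for t in p_edit}

if __name__ == "__main__":
    p_orig = {"London": 0.7, "Paris": 0.3}   # outdated next-token distribution
    p_edit = {"London": 0.4, "Paris": 0.6}   # distribution after the edit
    scores = outdated_aware_scores(p_orig, p_edit)
    print(max(scores, key=scores.get))       # Paris
```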
arXiv Detail & Related papers (2024-06-05T03:00:15Z)
- The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse [58.0132400208411]
Even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.
Benchmarking Large Language Models after each edit is impractically time-consuming and resource-intensive.
We have utilized GPT-3.5 to develop a new dataset, HardEdit, based on hard cases.
arXiv Detail & Related papers (2024-02-15T01:50:38Z)
- Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks [36.292901021210575]
We introduce ReCoE (Reasoning-based Counterfactual Editing dataset), a novel reasoning-based benchmark.
We conduct a thorough analysis of existing knowledge editing techniques, including input augmentation, finetuning, and locate-and-edit.
All model editing methods show notably low performance on this dataset, especially in certain reasoning schemes.
arXiv Detail & Related papers (2024-01-31T04:12:59Z)
- Untying the Reversal Curse via Bidirectional Language Model Editing [41.040662400025184]
Large language models (LLMs) store massive factual knowledge within their parameters.
LLMs are prone to hallucinate unintended text due to false or outdated knowledge.
We study bidirectional language model editing to assess if edited LLMs can recall the editing knowledge bidirectionally.
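A small sketch of such a bidirectional check, under assumptions (the prompts and the `generate` stand-in are illustrative, not the paper's benchmark):

```python
# Check an edit in both directions: after teaching "subject -> relation -> object",
# ask the forward question (expecting the object) and the reverse question
# (expecting the subject). `generate` is a hypothetical edited-model call.
from typing import Callable, Dict

def bidirectional_recall(generate: Callable[[str], str],
                         subject: str, relation: str, obj: str) -> Dict[str, bool]:
    forward = generate(f"{subject} {relation}")        # e.g. "Johnny Depp is the father of"
    backward = generate(f"Who {relation} {obj}?")      # e.g. "Who is the father of Jack Depp?"
    return {"forward_ok": obj.lower() in forward.lower(),
            "backward_ok": subject.lower() in backward.lower()}

if __name__ == "__main__":
    # Toy edited "model" that only learned the forward direction (the reversal curse).
    def toy_generate(prompt: str) -> str:
        return "Jack Depp" if prompt.startswith("Johnny Depp") else "I don't know"
    print(bidirectional_recall(toy_generate, "Johnny Depp", "is the father of", "Jack Depp"))
    # {'forward_ok': True, 'backward_ok': False}
```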
arXiv Detail & Related papers (2023-10-16T12:04:13Z)
- Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models [68.03946716358335]
We find that we can change how a fact is stored in a model by editing weights that are in a different location than where existing methods suggest that the fact is stored.
This is surprising because we would expect that localizing facts to specific model parameters would tell us where to manipulate knowledge in models.
Our results suggest, counterintuitively, that better mechanistic understanding of how pretrained language models work may not always translate to insights about how to best change their behavior.
arXiv Detail & Related papers (2023-01-10T21:26:08Z)
- Memory-Based Model Editing at Scale [102.28475739907498]
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC).
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
arXiv Detail & Related papers (2022-06-13T23:40:34Z)
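To illustrate the memory-based pattern SERAC describes, here is a minimal routing sketch; the keyword-overlap scope test and both model callables are simplifying assumptions in place of SERAC's learned scope classifier and counterfactual model.

```python
# Memory-based editing pattern, sketched: keep edits in an explicit memory,
# decide whether a query falls within the scope of any stored edit, and route
# it to an edit-aware model or to the unmodified base model. The keyword
# overlap below is a stand-in for SERAC's learned scope classifier.
from typing import Callable, List

STOPWORDS = {"the", "a", "an", "is", "of", "who", "what"}

def content_words(text: str) -> set:
    return {w.strip("?.,!").lower() for w in text.split()} - STOPWORDS

def route_query(query: str,
                edit_memory: List[str],
                base_model: Callable[[str], str],
                edit_model: Callable[[str, str], str]) -> str:
    for edit in edit_memory:
        if len(content_words(edit) & content_words(query)) >= 2:   # naive in-scope test
            return edit_model(edit, query)    # modulate prediction using the stored edit
    return base_model(query)                  # out of scope: leave the base model alone

if __name__ == "__main__":
    memory = ["Jack Depp is the sibling of Lily-Rose Depp"]
    base = lambda q: "base-model answer"
    edit_aware = lambda e, q: f"answer conditioned on edit: {e}"
    print(route_query("Who is the sibling of Jack Depp?", memory, base, edit_aware))
    print(route_query("What is the capital of France?", memory, base, edit_aware))
```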
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.