Long-form evaluation of model editing
- URL: http://arxiv.org/abs/2402.09394v2
- Date: Fri, 29 Mar 2024 21:17:23 GMT
- Title: Long-form evaluation of model editing
- Authors: Domenic Rosati, Robie Gonzales, Jinkun Chen, Xuemin Yu, Melis Erkan, Yahya Kayani, Satya Deepika Chavatapalli, Frank Rudzicz, Hassan Sajjad
- Abstract summary: We introduce long-form evaluation of model editing (LEME), a novel evaluation protocol that measures the efficacy and impact of model editing in long-form generative settings.
We benchmark a number of model editing techniques and present several findings including that, while some methods (ROME and MEMIT) perform well in making consistent edits within a limited scope, they suffer much more from factual drift than other methods.
- Score: 21.554925686287735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Evaluations of model editing currently only use the 'next few token' completions after a prompt. As a result, the impact of these methods on longer natural language generation is largely unknown. We introduce long-form evaluation of model editing (LEME), a novel evaluation protocol that measures the efficacy and impact of model editing in long-form generative settings. Our protocol consists of a machine-rated survey and a classifier which correlates well with human ratings. Importantly, we find that our protocol has very little relationship with previous short-form metrics (despite being designed to extend efficacy, generalization, locality, and portability into a long-form setting), indicating that our method introduces a novel set of dimensions for understanding model editing methods. Using this protocol, we benchmark a number of model editing techniques and present several findings, including that, while some methods (ROME and MEMIT) perform well in making consistent edits within a limited scope, they suffer much more from factual drift than other methods. Finally, we present a qualitative analysis that illustrates common failure modes in long-form generative settings, including internal consistency, lexical cohesion, and locality issues.
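As a rough illustration of the kind of protocol the abstract describes, the sketch below scores long-form generations from an edited model along several survey dimensions and then checks agreement with human ratings. The dimension names follow the abstract, but the model and rater interfaces, function names, and data layout are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a long-form evaluation loop in the spirit of LEME.
# The model/rater callables and dimension names are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, Dict, List
from scipy.stats import spearmanr

DIMENSIONS = ["edit_consistency", "internal_consistency", "lexical_cohesion", "locality"]

@dataclass
class EditCase:
    prompt: str                      # open-ended prompt about the edited subject
    new_fact: str                    # the fact the edit was supposed to insert
    human_scores: Dict[str, float]   # human survey ratings, for comparison

def evaluate_long_form(edited_generate: Callable[[str], str],
                       rate: Callable[[str, str, str], float],
                       cases: List[EditCase]) -> Dict[str, float]:
    """Generate long-form text from the edited model, rate each dimension with a
    machine rater, and report rank correlation against human ratings."""
    machine = {d: [] for d in DIMENSIONS}
    human = {d: [] for d in DIMENSIONS}
    for case in cases:
        text = edited_generate(case.prompt)  # long-form generation, not next-token
        for dim in DIMENSIONS:
            machine[dim].append(rate(text, case.new_fact, dim))  # machine-rated survey item
            human[dim].append(case.human_scores[dim])
    return {d: spearmanr(machine[d], human[d]).correlation for d in DIMENSIONS}
```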
Related papers
- What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models [88.398085358514]
DICE is a model designed to detect localized differences between the original and the edited image. It is trained using a strategy that leverages self-supervision, distillation from inpainting networks, and full supervision. We demonstrate that DICE effectively identifies coherent edits, evaluating images generated by different editing models with strong correlation with human judgment.
arXiv Detail & Related papers (2025-05-26T18:00:10Z) - DocMEdit: Towards Document-Level Model Editing [38.97953188421146]
We introduce DocMEdit, a dataset focused on document-level model editing. Results show that the difficulties of document-level model editing pose challenges for existing model editing methods.
arXiv Detail & Related papers (2025-05-26T06:37:24Z) - The Mirage of Model Editing: Revisiting Evaluation in the Wild [70.17413507444704]
We study the effectiveness of model editing in question answering applications.
Our single editing experiments indicate that current editing methods perform substantially worse than previously reported.
Our analysis provides a fundamental reexamination of both the real-world applicability of existing model editing methods and their evaluation practices.
arXiv Detail & Related papers (2025-02-16T15:57:55Z) - Should We Really Edit Language Models? On the Evaluation of Edited Language Models [15.63231238452797]
Existing editing methods lead to inevitable performance deterioration on general benchmarks.
Instruction-tuned models are more robust to editing, showing less performance drop on general knowledge after editing.
Our findings indicate that current editing methods are only suitable for small-scale knowledge updates within language models.
arXiv Detail & Related papers (2024-10-24T14:36:48Z) - Forgetting Curve: A Reliable Method for Evaluating Memorization Capability for Long-context Models [58.6172667880028]
We propose a new method, the forgetting curve, to measure the memorization capability of long-context models.
We show that the forgetting curve is robust to the tested corpus and the experimental settings.
Our measurements provide empirical evidence for the effectiveness of transformer extension techniques while raising questions about the effective length of RNN/SSM-based models.
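The summary above does not spell out how the curve is computed; the sketch below uses a simple passkey-recall probe as a stand-in for the paper's memorization test, measuring recall as a function of how far back the key appears in the context. All interfaces are hypothetical.

```python
# Hypothetical passkey-recall probe used as a stand-in for a forgetting curve:
# plant a key early in the context, pad with filler, and see whether the model
# can still reproduce it as the distance grows.
import random
from typing import Callable, List, Tuple

def forgetting_curve(predict_next: Callable[[str], str],
                     filler_sentences: List[str],
                     depths: List[int],
                     trials: int = 20) -> List[Tuple[int, float]]:
    """Return (filler_depth, recall_accuracy) pairs."""
    curve = []
    for depth in depths:
        hits = 0
        for _ in range(trials):
            key = f"{random.randint(0, 999999):06d}"
            filler = " ".join(random.choices(filler_sentences, k=depth))
            prompt = f"The passcode is {key}. {filler} The passcode is"
            hits += predict_next(prompt).strip().startswith(key)
        curve.append((depth, hits / trials))  # recall typically drops as depth grows
    return curve
```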
arXiv Detail & Related papers (2024-10-07T03:38:27Z) - Better Call SAUL: Fluent and Consistent Language Model Editing with Generation Regularization [48.07144492109635]
Large language models need to be updated regularly.
Model editing is challenging as it might also affect knowledge that is unrelated to the new data.
We propose SAUL, a streamlined model editing method that uses sentence concatenation with augmented random facts for generation regularization.
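As a loose illustration of what "sentence concatenation with augmented random facts" could look like as a data-construction step, the sketch below concatenates the target edit with sampled unrelated facts to form regularization examples. The function and its parameters are illustrative assumptions, not SAUL's actual code.

```python
# Illustrative data-construction step in the spirit of SAUL's generation
# regularization: the target edit sentence is concatenated with unrelated
# random facts before fine-tuning, so the update is less likely to damage
# fluency on surrounding text.
import random
from typing import List

def build_regularized_samples(edit_sentence: str,
                              random_facts: List[str],
                              num_samples: int = 8,
                              facts_per_sample: int = 2) -> List[str]:
    samples = []
    for _ in range(num_samples):
        extras = random.sample(random_facts, k=facts_per_sample)
        samples.append(" ".join([edit_sentence] + extras))
    return samples

# Hypothetical usage:
# build_regularized_samples("The Eiffel Tower is located in Rome.",
#                           ["Water boils at 100 degrees Celsius.",
#                            "The Nile flows through Egypt."])
```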
arXiv Detail & Related papers (2024-10-03T12:28:13Z) - Consecutive Batch Model Editing with HooK Layers [59.673084839708224]
CoachHooK is a model editing method that simultaneously supports sequential and batch editing.
It is memory-friendly, as it only needs a small amount of memory to store several hook layers whose size remains unchanged over time.
arXiv Detail & Related papers (2024-03-08T14:07:44Z) - The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse [58.0132400208411]
Even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.
Benchmarking large language models after each edit, however, is impractically time-consuming and resource-intensive.
We have utilized GPT-3.5 to develop a new dataset, HardEdit, based on hard cases.
arXiv Detail & Related papers (2024-02-15T01:50:38Z) - Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks [36.292901021210575]
We introduce a novel reasoning-based benchmark, ReCoE (Reasoning-based Counterfactual Editing dataset).
We conduct a thorough analysis of existing knowledge editing techniques, including input augmentation, finetuning, and locate-and-edit.
All model editing methods show notably low performance on this dataset, especially in certain reasoning schemes.
arXiv Detail & Related papers (2024-01-31T04:12:59Z) - Model Editing at Scale leads to Gradual and Catastrophic Forgetting [2.569159339315845]
We evaluate current model editing methods at scale, focusing on two state-of-the-art methods: ROME and MEMIT.
We find that as the model is edited sequentially with multiple facts, it continually forgets previously edited facts and the ability to perform downstream tasks.
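A minimal sketch of this kind of sequential-editing evaluation is shown below: edits are applied one at a time and every previously edited fact is re-tested after each step, which is where gradual forgetting shows up. The editing and query interfaces are placeholders, not the paper's code.

```python
# Hypothetical sketch: apply edits one at a time and re-check retention of all
# earlier edits after each step.
from typing import Callable, List, Tuple

def sequential_edit_retention(apply_edit: Callable[[Tuple[str, str]], None],
                              query: Callable[[str], str],
                              edits: List[Tuple[str, str]]) -> List[float]:
    """edits are (prompt, expected_answer) pairs; returns retention after each edit."""
    retention = []
    for i, edit in enumerate(edits):
        apply_edit(edit)  # e.g. one ROME/MEMIT-style update
        seen = edits[: i + 1]  # re-test every edit applied so far, not just the newest
        correct = sum(query(p).strip().startswith(a) for p, a in seen)
        retention.append(correct / len(seen))
    return retention
```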
arXiv Detail & Related papers (2024-01-15T03:57:15Z) - Edit at your own risk: evaluating the robustness of edited models to distribution shifts [0.0]
We investigate how model editing affects the general robustness of a model, as well as the robustness of the specific behavior targeted by the edit.
We find that edits tend to reduce general robustness, but that the degree of degradation depends on the editing algorithm and layers chosen.
Motivated by these observations, we introduce a new model editing algorithm, 1-layer interpolation (1-LI), which uses weight-space interpolation to navigate the trade-off between editing task accuracy and general robustness.
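Assuming 1-LI works roughly by blending an edited layer's weights back toward the pre-edit weights, a minimal sketch of such single-layer interpolation might look like the following; the PyTorch framing and the module path in the usage comment are assumptions.

```python
# Hypothetical sketch of single-layer weight-space interpolation (1-LI style):
# blend the edited layer's weights back toward the pre-edit weights to trade
# editing accuracy against general robustness.
import torch

def interpolate_layer(base_weight: torch.Tensor,
                      edited_weight: torch.Tensor,
                      alpha: float) -> torch.Tensor:
    """alpha=1.0 keeps the full edit; alpha=0.0 restores the original layer."""
    return alpha * edited_weight + (1.0 - alpha) * base_weight

# Hypothetical usage: sweep alpha to trace the accuracy/robustness trade-off
# for the single layer that was edited (module path is an assumption).
# for alpha in (0.25, 0.5, 0.75, 1.0):
#     layer = model.transformer.h[edit_layer].mlp.c_proj.weight
#     layer.data.copy_(interpolate_layer(base_w, edited_w, alpha))
```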
arXiv Detail & Related papers (2023-02-28T19:41:37Z) - Memory-Based Model Editing at Scale [102.28475739907498]
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC).
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
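A compact sketch of this routing scheme, with placeholder interfaces rather than the paper's implementation: edits sit in an explicit memory, a scope classifier decides whether a query falls under any stored edit, and an in-scope query is answered by a counterfactual model conditioned on the retrieved edit.

```python
# Sketch of SERAC-style routing with placeholder interfaces: edits live in an
# explicit memory, a scope classifier gates them, and a counterfactual model
# answers in-scope queries while the base model is left untouched otherwise.
from typing import Callable, List, Optional

class SemiParametricEditor:
    def __init__(self,
                 base_model: Callable[[str], str],
                 counterfactual_model: Callable[[str, str], str],
                 scope_score: Callable[[str, str], float],
                 threshold: float = 0.5):
        self.base_model = base_model
        self.counterfactual_model = counterfactual_model
        self.scope_score = scope_score        # how relevant a stored edit is to a query
        self.threshold = threshold
        self.memory: List[str] = []           # explicit store of edit descriptors

    def add_edit(self, edit: str) -> None:
        self.memory.append(edit)

    def __call__(self, query: str) -> str:
        # Retrieve the stored edit most relevant to the query, if any.
        best: Optional[str] = max(self.memory,
                                  key=lambda e: self.scope_score(query, e),
                                  default=None)
        if best is not None and self.scope_score(query, best) >= self.threshold:
            return self.counterfactual_model(best, query)  # in-scope: answer from the edit
        return self.base_model(query)                      # out of scope: base model unchanged
```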
arXiv Detail & Related papers (2022-06-13T23:40:34Z) - Pros and Cons of GAN Evaluation Measures: New Developments [53.10151901863263]
This work is an update of a previous paper on the same topic published a few years ago.
I describe new dimensions that are becoming important in assessing models, and discuss the connection between GAN evaluation and deepfakes.
arXiv Detail & Related papers (2021-03-17T01:48:34Z)