Towards a Principled Evaluation of Knowledge Editors
- URL: http://arxiv.org/abs/2507.05937v1
- Date: Tue, 08 Jul 2025 12:37:54 GMT
- Title: Towards a Principled Evaluation of Knowledge Editors
- Authors: Sebastian Pohl, Max Ploner, Alan Akbik
- Abstract summary: We show that choosing different metrics and evaluation methodologies, as well as different edit batch sizes, can lead to a different ranking of knowledge editors. We also include a manual assessment of the string-matching-based evaluation method for knowledge editing that is favored by recently released datasets, revealing a tendency to produce false positive matches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model editing has been gaining increasing attention over the past few years. For Knowledge Editing in particular, more challenging evaluation datasets have recently been released. These datasets use different methodologies to score the success of editors, yet it remains under-explored how robust these methodologies are and whether they unfairly favor some editors. Moreover, the disruptive impact of these editors on overall model capabilities remains a constant blind spot. We address both of these problems and show that choosing different metrics and evaluation methodologies, as well as different edit batch sizes, can lead to a different ranking of knowledge editors. Crucially, we demonstrate this effect also on general language understanding tasks evaluated alongside the knowledge editing tasks. Further, we include a manual assessment of the string-matching-based evaluation method for knowledge editing that is favored by recently released datasets, revealing a tendency to produce false positive matches.
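The abstract's point about string-matching-based scoring can be made concrete with a small sketch. The snippet below is not the paper's evaluation code; it is a minimal, hypothetical Python illustration (the function names `normalize` and `string_match_success` are invented here) of how a naive substring criterion can report a false positive when the edit target merely appears inside a negated answer.

```python
# Minimal sketch (not the paper's actual evaluation code) of a
# string-matching-based edit-success check. It illustrates the kind of
# false positive the paper's manual assessment flags: the edit target
# appears in the generation, but only inside a negation.
import re


def normalize(text: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace before matching."""
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()


def string_match_success(generation: str, target: str) -> bool:
    """Naive criterion: the edit counts as successful if the (normalized)
    target string appears anywhere in the (normalized) generation."""
    return normalize(target) in normalize(generation)


if __name__ == "__main__":
    # Hypothetical edit: "The capital of France is" -> "Lyon"
    target = "Lyon"
    generation = "The capital of France is not Lyon; it is Paris."
    # The model rejects the edited fact, yet the check still fires.
    print(string_match_success(generation, target))  # True -> false positive
```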
Related papers
- Retention analysis of edited knowledge after fine-tuning [5.440397659472036]
Large language models (LLMs) store vast amounts of knowledge, which often requires updates to correct factual errors, incorporate newly acquired information, or adapt model behavior. Model editing methods have emerged as efficient solutions for such updates, offering localized and precise knowledge modification at significantly lower computational cost than continual training. However, the effect of fine-tuning on previously edited knowledge remains poorly understood.
arXiv Detail & Related papers (2025-07-14T15:51:19Z) - The Mirage of Model Editing: Revisiting Evaluation in the Wild [70.17413507444704]
We introduce QAEdit, a new benchmark aligned with widely used question answering (QA) datasets, and WILD, a task-agnostic evaluation framework. Our single editing experiments show that current editing methods perform substantially worse than previously reported.
arXiv Detail & Related papers (2025-02-16T15:57:55Z) - K-Edit: Language Model Editing with Contextual Knowledge Awareness [71.73747181407323]
Knowledge-based model editing enables precise modifications to the weights of large language models. We present K-Edit, an effective approach to generating contextually consistent knowledge edits.
arXiv Detail & Related papers (2025-02-15T01:35:13Z) - Related Knowledge Perturbation Matters: Rethinking Multiple Pieces of Knowledge Editing in Same-Subject [49.559994791305535]
Current state-of-the-art editing methods struggle when tasked with editing multiple related knowledge pieces for the same subject. We introduce the $\text{S}^2\text{RKE}$ (Same-Subject Related Knowledge Editing) benchmark. Our experiments reveal that only mainstream locate-then-edit methods, such as ROME and MEMIT, exhibit "related knowledge perturbation".
arXiv Detail & Related papers (2025-02-08T04:47:17Z) - Uncovering Overfitting in Large Language Model Editing [35.55260822503773]
We identify and investigate the phenomenon of Editing Overfit, where edited models assign disproportionately high probabilities to the edit target. We propose Learn the Inference (LTI), which introduces a Multi-stage Inference Constraint module to guide the edited models in recalling new knowledge.
arXiv Detail & Related papers (2024-10-10T11:09:00Z) - Editing the Mind of Giants: An In-Depth Exploration of Pitfalls of Knowledge Editing in Large Language Models [26.516571783335824]
Recent studies have identified side effects, such as knowledge distortion and the deterioration of general abilities, that have emerged after editing.
This survey presents a comprehensive study of these side effects, providing a unified perspective on the challenges of knowledge editing in large language models.
arXiv Detail & Related papers (2024-06-03T15:28:21Z) - EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries [69.72012539060731]
We introduce a theoretical framework for efficient knowledge editing (KE) in large language models (LLMs).
We propose a novel task of event-based knowledge editing that pairs facts with event descriptions.
We empirically demonstrate the superiority of event-based editing over the existing setting on resolving uncertainty in edited models.
arXiv Detail & Related papers (2024-02-17T16:34:50Z) - Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks [36.292901021210575]
We introduce a novel reasoning-based benchmark -- ReCoE (Reasoning-based Counterfactual Editing dataset).
We conduct a thorough analysis of existing knowledge editing techniques, including input augmentation, finetuning, and locate-and-edit.
All model editing methods show notably low performance on this dataset, especially in certain reasoning schemes.
arXiv Detail & Related papers (2024-01-31T04:12:59Z) - DocTER: Evaluating Document-based Knowledge Editing [53.14000724633775]
We explore knowledge editing using easily accessible documents instead of manually labeled factual triples. A comprehensive four-perspective evaluation is introduced: Edit Success, Locality, Reasoning, and Cross-lingual Transfer. Experiments on popular knowledge editing methods demonstrate that editing with documents presents significantly greater challenges than using triples.
arXiv Detail & Related papers (2023-08-19T09:17:19Z) - EditEval: An Instruction-Based Benchmark for Text Improvements [73.5918084416016]
This work presents EditEval: an instruction-based benchmark and evaluation suite for automatic evaluation of editing capabilities.
Our evaluation of several pre-trained models shows that InstructGPT and PEER perform best, but most baselines fall below the supervised SOTA.
Our analysis shows that commonly used metrics for editing tasks do not always correlate well, and that optimization for prompts with the highest performance does not necessarily entail the strongest robustness to different models.
arXiv Detail & Related papers (2022-09-27T12:26:05Z)