DocMEdit: Towards Document-Level Model Editing
- URL: http://arxiv.org/abs/2505.19572v1
- Date: Mon, 26 May 2025 06:37:24 GMT
- Title: DocMEdit: Towards Document-Level Model Editing
- Authors: Li Zeng, Zeming Liu, Chong Feng, Heyan Huang, Yuhang Guo,
- Abstract summary: We introduce DocMEdit, a dataset focused on document-level model editing. Results show that the difficulties of document-level model editing pose challenges for existing model editing methods.
- Score: 38.97953188421146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model editing aims to correct errors and outdated knowledge in large language models (LLMs) at minimal cost. Prior research has proposed a variety of datasets to assess the effectiveness of model editing methods. However, most existing datasets only require models to output short phrases or sentences, overlooking the widespread presence of document-level tasks in the real world and raising doubts about their practical usability. To address this limitation and promote the application of model editing in real-world scenarios, we propose the task of document-level model editing. To tackle such challenges and enhance model capabilities in practical settings, we introduce DocMEdit, a dataset focused on document-level model editing, characterized by document-level inputs and outputs, extrapolative edits, and multiple facts within a single edit. We propose a series of evaluation metrics and experiments. The results show that the difficulties of document-level model editing pose challenges for existing model editing methods.
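Since the abstract characterizes each edit as injecting multiple facts and requiring document-level outputs, a minimal sketch may help make the setup concrete. The Python snippet below is illustrative only: the `DocumentEdit` fields and the crude substring-based `fact_coverage` metric are assumptions made for exposition, not the schema or evaluation metrics actually defined by DocMEdit.

```python
# Hypothetical sketch of a document-level edit instance and a simple
# fact-coverage check. Field names and the metric are illustrative
# assumptions, not the schema or metrics defined by DocMEdit.
from dataclasses import dataclass
from typing import List


@dataclass
class DocumentEdit:
    """One document-level edit: several new facts plus a document-length target."""
    subject: str                 # entity the edit is about
    new_facts: List[str]         # multiple facts injected by a single edit
    prompt: str                  # document-level input given to the edited model
    reference_document: str      # document-level output the edit should support


def fact_coverage(generated_document: str, edit: DocumentEdit) -> float:
    """Fraction of the edit's facts that surface in the generated document.

    A crude substring check; a real metric would use entailment or QA-based scoring.
    """
    if not edit.new_facts:
        return 0.0
    hits = sum(fact.lower() in generated_document.lower() for fact in edit.new_facts)
    return hits / len(edit.new_facts)


if __name__ == "__main__":
    edit = DocumentEdit(
        subject="ExampleCorp",
        new_facts=["ExampleCorp was acquired in 2024", "its headquarters moved to Oslo"],
        prompt="Write a short company profile of ExampleCorp.",
        reference_document="ExampleCorp, acquired in 2024, now operates from Oslo...",
    )
    generated = "ExampleCorp was acquired in 2024 and relocated its offices."
    print(f"fact coverage: {fact_coverage(generated, edit):.2f}")  # prints 0.50
```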
Related papers
- The Mirage of Model Editing: Revisiting Evaluation in the Wild [70.17413507444704]
We study the effectiveness of model editing in question answering applications. Our single editing experiments indicate that current editing methods perform substantially worse than previously reported. Our analysis provides a fundamental reexamination of both the real-world applicability of existing model editing methods and their evaluation practices.
arXiv Detail & Related papers (2025-02-16T15:57:55Z)
- Reasons and Solutions for the Decline in Model Performance after Editing [17.756172082400163]
This paper explores the reasons for the performance decline of edited models and optimizes the editing method.
The performance of the edited model is mainly affected by the diversity of editing targets and sequence length.
To improve the performance of the edited model, this paper proposes a Dump for Sequence (D4S) method.
arXiv Detail & Related papers (2024-10-31T11:49:44Z)
- FAME: Towards Factual Multi-Task Model Editing [4.858226284963096]
Large language models (LLMs) embed extensive knowledge and utilize it to perform exceptionally well across various tasks.
We present FAME, a factual, comprehensive, and multi-task dataset designed to enhance the practicality of model editing.
We then propose SKEME, a model editing method that uses a novel caching mechanism to ensure synchronization with the real world.
arXiv Detail & Related papers (2024-10-07T13:46:06Z)
- Neuron-Level Sequential Editing for Large Language Models [19.324852774144752]
We introduce Neuron-level Sequential Editing (NSE) to support sequential model editing.
Specifically, we optimize the target layer's hidden states using the model's original weights to prevent model failure.
Our experiments demonstrate that NSE significantly outperforms current parameter-modifying model editing methods.
arXiv Detail & Related papers (2024-10-05T05:52:22Z)
- Fundamental Problems With Model Editing: How Should Rational Belief Revision Work in LLMs? [61.68363765350178]
This paper critiques the standard formulation of the model editing problem and proposes a formal testbed for model editing research.
We first describe 12 open problems with model editing, based on challenges with (1) defining the problem, (2) developing benchmarks, and (3) assuming LLMs have editable beliefs in the first place.
Next, we introduce a semi-synthetic dataset for model editing based on Wikidata, where we can evaluate edits against labels given by an idealized Bayesian agent.
arXiv Detail & Related papers (2024-06-27T17:33:03Z)
- The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse [58.0132400208411]
Even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.
Benchmarking large language models after each edit is impractically time-consuming and resource-intensive.
We have utilized GPT-3.5 to develop a new dataset, HardEdit, based on hard cases.
arXiv Detail & Related papers (2024-02-15T01:50:38Z)
- Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue [122.20016030723043]
We evaluate the side effects of model editing on large language models (LLMs).
Our analysis reveals that the side effects are caused by model editing altering the original model weights excessively.
To mitigate this, a method named RECT is proposed to regularize the edit update weights.
arXiv Detail & Related papers (2024-01-09T18:03:15Z)
- DUnE: Dataset for Unified Editing [3.7346004746366384]
We introduce DUnE, an editing benchmark where edits are natural language sentences.
We show that retrieval-augmented language modeling can outperform specialized editing techniques.
arXiv Detail & Related papers (2023-11-27T18:56:14Z)
- Memory-Based Model Editing at Scale [102.28475739907498]
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC).
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
arXiv Detail & Related papers (2022-06-13T23:40:34Z)
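The last entry above (SERAC) describes memory-based editing: edits are kept in an explicit memory, and the base model's predictions are modulated only when a query falls within the scope of a stored edit. The sketch below illustrates that routing idea under loose assumptions; the `MemoryBasedEditor` class, the heuristic scope check, and the placeholder models are inventions for exposition, not the authors' trained scope classifier or counterfactual model.

```python
# Illustrative sketch of SERAC-style memory-based editing: edits live in an
# explicit memory; a scope check decides whether a query is covered by a stored
# edit and routes it to a counterfactual answer, otherwise the base model answers.
# The scope check and both models here are stand-ins, not trained components.
import string
from typing import Callable, List, Optional


def _content_words(text: str) -> set:
    """Lowercased words with punctuation stripped; short function words dropped."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return {word for word in cleaned.split() if len(word) > 3}


def heuristic_in_scope(query: str, edit: str) -> bool:
    """Stand-in scope classifier: does the query share a content word with the edit?"""
    return bool(_content_words(query) & _content_words(edit))


class MemoryBasedEditor:
    def __init__(
        self,
        base_model: Callable[[str], str],
        counterfactual_model: Callable[[str, str], str],
        in_scope: Callable[[str, str], bool] = heuristic_in_scope,
    ) -> None:
        self.base_model = base_model                      # frozen, unedited model
        self.counterfactual_model = counterfactual_model  # answers conditioned on an edit
        self.in_scope = in_scope                          # scope classifier (here a heuristic)
        self.memory: List[str] = []                       # explicit store of edits

    def add_edit(self, edit: str) -> None:
        self.memory.append(edit)

    def _retrieve(self, query: str) -> Optional[str]:
        # Return the first stored edit judged relevant to the query, if any.
        for edit in self.memory:
            if self.in_scope(query, edit):
                return edit
        return None

    def __call__(self, query: str) -> str:
        edit = self._retrieve(query)
        if edit is not None:
            return self.counterfactual_model(query, edit)  # prediction modulated by the edit
        return self.base_model(query)                      # out of scope: base model unchanged


if __name__ == "__main__":
    editor = MemoryBasedEditor(
        base_model=lambda query: "base-model answer",
        counterfactual_model=lambda query, edit: f"answer derived from edit: {edit}",
    )
    editor.add_edit("The CEO of ExampleCorp is Dana Smith")
    print(editor("Who is the CEO of ExampleCorp?"))   # routed through the stored edit
    print(editor("What is the capital of France?"))   # unrelated, answered by the base model
```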
This list is automatically generated from the titles and abstracts of the papers on this site.