Learning to Model Editing Processes
- URL: http://arxiv.org/abs/2205.12374v1
- Date: Tue, 24 May 2022 21:32:52 GMT
- Title: Learning to Model Editing Processes
- Authors: Machel Reid and Graham Neubig
- Abstract summary: We propose modeling editing processes: the whole process of iteratively generating sequences.
We form a conceptual framework to describe the likelihood of multi-step edits, and describe neural models that can learn a generative model of sequences based on these multi-step edits.
- Score: 98.11448946134894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing sequence generation models produce outputs in a single pass,
usually left-to-right. However, this contrasts with the more natural approach
humans take when generating content: iterative refinement and editing. Recent
work has introduced edit-based models for various tasks (such as neural machine
translation and text style transfer), but these generally model a single edit
step. In this work, we propose modeling editing processes: the whole process of
iteratively generating sequences. We form a conceptual framework to describe
the likelihood of multi-step edits, and describe neural models that can learn a
generative model of sequences based on these multi-step edits. We introduce
baseline results and metrics on this task, finding that modeling editing
processes improves performance along a variety of axes on both our proposed
task and related downstream tasks, compared to previous single-step models of
edits.
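To make the framework concrete, here is a minimal sketch of how such a likelihood can factorize, assuming a Markov chain over successive drafts x^0, ..., x^T; the paper's exact decomposition may differ.
```latex
% Minimal sketch, assuming a Markov editing process over drafts x^0 .. x^T:
% the editing-process likelihood factorizes into single-step edit models.
p\left(x^{1}, \dots, x^{T} \mid x^{0}\right)
  = \prod_{t=1}^{T} p\left(x^{t} \mid x^{t-1}\right)
```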
Related papers
- Neuron-Level Sequential Editing for Large Language Models [19.324852774144752]
We introduce Neuron-level Sequential Editing (NSE) to support sequential model editing.
Specifically, we optimize the target layer's hidden states using the model's original weights to prevent model failure.
Our experiments demonstrate that NSE significantly outperforms current parameter-modifying model editing methods.
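A rough sketch of the sequential-editing pattern the summary describes, with hidden states anchored on the frozen original weights; all helper names (compute_target_hidden, solve_weight_update) and the request fields are hypothetical, not the paper's API.
```python
# Hedged sketch: recompute the target layer's hidden state from the frozen
# ORIGINAL weights for every new edit, so later edits do not compound on
# drift introduced by earlier edits.
def sequential_edit(edited_weight, original_weights, requests,
                    compute_target_hidden, solve_weight_update):
    """edited_weight: the target layer's weight tensor being edited."""
    for req in requests:
        # Anchor on the pre-editing model, not the already-edited one.
        h = compute_target_hidden(original_weights, req.prompt)
        # Solve for an update mapping that hidden state to the new fact.
        edited_weight = edited_weight + solve_weight_update(h, req.target)
    return edited_weight
```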
arXiv Detail & Related papers (2024-10-05T05:52:22Z)
- Consecutive Batch Model Editing with HooK Layers [59.673084839708224]
CoachHooK is a model editing method that simultaneously supports sequential and batch editing.
It is memory-friendly, as it needs only a small amount of memory to store several hook layers whose size remains unchanged over time.
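A minimal sketch of the hook-layer idea as the summary describes it: a fixed-size module attached to a layer adjusts its output, so edits accumulate in the small hook rather than in the base weights. The class and its zero-initialized correction are illustrative assumptions, not the paper's design.
```python
import torch
import torch.nn as nn

class HookLayer(nn.Module):
    """Fixed-size correction whose memory footprint does not grow with edits."""
    def __init__(self, dim: int):
        super().__init__()
        self.delta = nn.Linear(dim, dim, bias=False)
        nn.init.zeros_(self.delta.weight)  # starts as an identity hook

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return hidden + self.delta(hidden)

base = nn.Linear(16, 16)  # stand-in for a layer of the edited model
hook = HookLayer(16)
# Returning a tensor from a forward hook replaces the layer's output.
base.register_forward_hook(lambda mod, inp, out: hook(out))
y = base(torch.randn(2, 16))  # output now passes through the hook layer
```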
arXiv Detail & Related papers (2024-03-08T14:07:44Z)
- Model Editing at Scale leads to Gradual and Catastrophic Forgetting [2.569159339315845]
We evaluate current model editing methods at scale, focusing on two state-of-the-art methods: ROME and MEMIT.
We find that as the model is edited sequentially with multiple facts, it progressively forgets previously edited facts and loses the ability to perform downstream tasks.
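The kind of evaluation loop this finding implies can be sketched as follows; `apply_edit` and `edit_succeeds` are hypothetical stand-ins for a concrete editing method (e.g. ROME or MEMIT) and a recall probe.
```python
# Hedged sketch: edit facts one at a time, and after each edit re-test all
# earlier edits to measure gradual forgetting.
def sequential_forgetting_eval(model, facts, apply_edit, edit_succeeds):
    retention = []
    for i, fact in enumerate(facts):
        model = apply_edit(model, fact)
        # Fraction of all facts edited so far that the model still recalls.
        recalled = sum(edit_succeeds(model, f) for f in facts[: i + 1])
        retention.append(recalled / (i + 1))
    return retention  # a declining curve indicates gradual forgetting
```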
arXiv Detail & Related papers (2024-01-15T03:57:15Z)
- Edit at your own risk: evaluating the robustness of edited models to distribution shifts [0.0]
We investigate how model editing affects the general robustness of a model, as well as the robustness of the specific behavior targeted by the edit.
We find that edits tend to reduce general robustness, but that the degree of degradation depends on the editing algorithm and layers chosen.
Motivated by these observations, we introduce a new model editing algorithm, 1-layer interpolation (1-LI), which uses weight-space interpolation to navigate the trade-off between editing task accuracy and general robustness.
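A minimal sketch of weight-space interpolation in this spirit: blend the edited weights back toward the originals, trading editing accuracy against general robustness. The single knob `alpha` is our assumption, not the paper's exact parameterization.
```python
def interpolate_weights(original, edited, alpha):
    """alpha = 0.0 recovers the original model; alpha = 1.0 keeps the edit."""
    return {name: (1.0 - alpha) * original[name] + alpha * edited[name]
            for name in original}
```
In practice one would sweep alpha and pick the value that balances edit success against held-out robustness.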
arXiv Detail & Related papers (2023-02-28T19:41:37Z)
- DiffusER: Discrete Diffusion via Edit-based Reconstruction [88.62707047517914]
DiffusER is an edit-based generative model for text based on denoising diffusion models.
It can rival autoregressive models on several tasks spanning machine translation, summarization, and style transfer.
It can also perform other varieties of generation that standard autoregressive models are not well-suited for.
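A rough sketch of edit-based iterative generation as the summary describes it: start from a corrupted or empty draft and repeatedly apply predicted edit operations until the draft converges. The `predict_edits` and `apply_edits` components are hypothetical stand-ins for the learned model.
```python
# Hedged sketch of a denoising-by-editing generation loop.
def generate_by_editing(draft, predict_edits, apply_edits, max_steps=10):
    for _ in range(max_steps):
        edits = predict_edits(draft)  # one denoising/edit step
        if not edits:                 # no edits proposed: converged
            return draft
        draft = apply_edits(draft, edits)
    return draft
```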
arXiv Detail & Related papers (2022-10-30T16:55:23Z)
- Text Generation with Text-Editing Models [78.03750739936956]
This tutorial provides a comprehensive overview of text-editing models and current state-of-the-art approaches.
We discuss challenges related to productionization and how these models can be used to mitigate hallucination and bias.
arXiv Detail & Related papers (2022-06-14T17:58:17Z)
- Memory-Based Model Editing at Scale [102.28475739907498]
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC).
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
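The routing the abstract describes can be sketched as follows; the component names here are illustrative, not SERAC's actual interfaces.
```python
# Hedged sketch of SERAC-style prediction: edits live in an explicit memory;
# a scope classifier decides whether a query falls under a stored edit, and
# if so a counterfactual model answers instead of the frozen base model.
def serac_predict(query, memory, in_scope, counterfactual_model, base_model):
    for edit in memory:               # explicit, inspectable edit store
        if in_scope(query, edit):     # learned scope classifier
            return counterfactual_model(query, edit)
    return base_model(query)          # out of scope: base model unchanged
```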
arXiv Detail & Related papers (2022-06-13T23:40:34Z)
- Fast Model Editing at Scale [77.69220974621425]
We propose Model Editor Networks with Gradient Decomposition (MEND).
MEND is a collection of small auxiliary editing networks that use a single desired input-output pair to make fast, local edits to a pre-trained model.
MEND can be trained on a single GPU in less than a day even for 10 billion+ parameter models.
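The gradient decomposition behind this can be sketched briefly: for a linear layer, the gradient from a single example is a rank-1 outer product of the layer's input and the backpropagated output gradient, so an editor network can transform those two small factors instead of a full weight-sized gradient. The toy editor networks below are assumptions for illustration.
```python
import torch
import torch.nn as nn

d_in, d_out = 16, 16
edit_u = nn.Linear(d_in, d_in)        # transforms the input-side factor
edit_delta = nn.Linear(d_out, d_out)  # transforms the output-side factor

u = torch.randn(d_in)       # layer input for the edit example
delta = torch.randn(d_out)  # backpropagated output gradient
# The raw rank-1 gradient would be torch.outer(delta, u); a MEND-style
# editor maps the factors and rebuilds a low-rank weight update from them.
update = torch.outer(edit_delta(delta), edit_u(u))
```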
arXiv Detail & Related papers (2021-10-21T17:41:56Z)
- A Structural Model for Contextual Code Changes [20.185486717922615]
Given a code snippet that is partially edited, our goal is to predict a completion of the edit for the rest of the snippet.
Our model achieves a 28% relative gain over state-of-the-art sequential models and 2x higher accuracy than syntactic models that learn to generate the edited code.
arXiv Detail & Related papers (2020-05-27T07:16:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.