Fast Model Editing at Scale
- URL: http://arxiv.org/abs/2110.11309v1
- Date: Thu, 21 Oct 2021 17:41:56 GMT
- Title: Fast Model Editing at Scale
- Authors: Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn,
Christopher D. Manning
- Abstract summary: We propose Model Editor Networks with Gradient Decomposition (MEND).
MEND is a collection of small auxiliary editing networks that use a single desired input-output pair to make fast, local edits to a pre-trained model.
MEND can be trained on a single GPU in less than a day even for 10 billion+ parameter models.
- Score: 77.69220974621425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While large pre-trained models have enabled impressive results on a variety
of downstream tasks, the largest existing models still make errors, and even
accurate predictions may become outdated over time. Because detecting all such
failures at training time is impossible, enabling both developers and end users
of such models to correct inaccurate outputs while leaving the model otherwise
intact is desirable. However, the distributed, black-box nature of the
representations learned by large neural networks makes producing such targeted
edits difficult. If presented with only a single problematic input and new
desired output, fine-tuning approaches tend to overfit; other editing
algorithms are either computationally infeasible or simply ineffective when
applied to very large models. To enable easy post-hoc editing at scale, we
propose Model Editor Networks with Gradient Decomposition (MEND), a collection
of small auxiliary editing networks that use a single desired input-output pair
to make fast, local edits to a pre-trained model. MEND learns to transform the
gradient obtained by standard fine-tuning, using a low-rank decomposition of
the gradient to make the parameterization of this transformation tractable.
MEND can be trained on a single GPU in less than a day even for 10 billion+
parameter models; once trained, MEND enables rapid application of new edits to
the pre-trained model. Our experiments with T5, GPT, BERT, and BART models show
that MEND is the only approach to model editing that produces effective edits
for models with tens of millions to over 10 billion parameters. Implementation
available at https://sites.google.com/view/mend-editing.
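The mechanism in the abstract can be made concrete. For a single input-output edit pair, the fine-tuning gradient of a linear layer's weight is a rank-1 outer product of the layer's input and the gradient at its output, so an editor network only has to transform those two vectors rather than a full weight-sized gradient. The sketch below is a minimal illustration of that low-rank idea, not the released MEND implementation linked above; the layer sizes, editor architecture, and step size are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of editing via the rank-1 gradient factors.
import torch
import torch.nn as nn

d_in, d_out, hidden = 512, 512, 128  # assumed sizes for illustration

class GradientEditor(nn.Module):
    """Maps the rank-1 gradient factors (x, delta) to edited factors."""
    def __init__(self):
        super().__init__()
        # Hypothetical editor MLPs; MEND's actual parameterization differs.
        self.edit_x = nn.Sequential(
            nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_in))
        self.edit_delta = nn.Sequential(
            nn.Linear(d_out, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

    def forward(self, x, delta):
        # Residual form so an untrained editor stays close to the raw gradient.
        return x + self.edit_x(x), delta + self.edit_delta(delta)

# One edit step for a single linear layer from a single (input, error-signal) pair.
layer = nn.Linear(d_in, d_out, bias=False)
editor = GradientEditor()

x = torch.randn(d_in)       # layer input for the problematic example
delta = torch.randn(d_out)  # gradient of the edit loss w.r.t. the layer's output
x_t, delta_t = editor(x, delta)

edit_lr = 1e-4  # illustrative step size
with torch.no_grad():
    # The raw fine-tuning gradient would be torch.outer(delta, x); the editor only
    # ever touches the two factors, so its parameter count scales with d_in + d_out
    # rather than d_in * d_out, which is what keeps the parameterization tractable.
    layer.weight -= edit_lr * torch.outer(delta_t, x_t)
```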
Related papers
- Neuron-Level Sequential Editing for Large Language Models [19.324852774144752]
We introduce Neuron-level Sequential Editing (NSE) to support sequential model editing.
Specifically, we optimize the target layer's hidden states using the model's original weights to prevent model failure.
Our experiments demonstrate that NSE significantly outperforms current parameter-modifying model editing methods.
arXiv Detail & Related papers (2024-10-05T05:52:22Z)
- The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse [58.0132400208411]
Even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.
However, benchmarking Large Language Models after each edit is impractically time-consuming and resource-intensive.
We have utilized GPT-3.5 to develop a new dataset, HardEdit, based on hard cases.
arXiv Detail & Related papers (2024-02-15T01:50:38Z)
- Transformer-Patcher: One Mistake worth One Neuron [40.04159325505842]
In the deployment of AI services, there are ever-emerging mistakes, and the same mistake may recur if not corrected in time.
We introduce Transformer-Patcher, a novel model editor that can shift the behavior of transformer-based models by simply adding and training a few neurons.
Our method outperforms previous fine-tuning and HyperNetwork-based methods and achieves state-of-the-art performance for Sequential Model Editing (SME).
arXiv Detail & Related papers (2023-01-24T02:12:42Z)
- Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors [53.819805242367345]
We propose GRACE, a lifelong model editing method, which implements spot-fixes on streaming errors of a deployed model.
GRACE writes new mappings into a pre-trained model's latent space, creating a discrete, local codebook of edits without altering model weights (a minimal sketch of this idea appears after this list).
Our experiments on T5, BERT, and GPT models show GRACE's state-of-the-art performance in making and retaining edits, while generalizing to unseen inputs.
arXiv Detail & Related papers (2022-11-20T17:18:22Z)
- Memory-Based Model Editing at Scale [102.28475739907498]
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC).
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
arXiv Detail & Related papers (2022-06-13T23:40:34Z)
- Learning to Model Editing Processes [98.11448946134894]
We propose modeling editing processes, i.e., the whole process of iteratively generating sequences.
We form a conceptual framework to describe the likelihood of multi-step edits, and describe neural models that can learn a generative model of sequences based on these multi-step edits.
arXiv Detail & Related papers (2022-05-24T21:32:52Z)
- MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation [68.30497162547768]
We propose MoEBERT, which uses a Mixture-of-Experts structure to increase model capacity and inference speed.
We validate the efficiency and effectiveness of MoEBERT on natural language understanding and question answering tasks.
arXiv Detail & Related papers (2022-04-15T23:19:37Z)
- A Structural Model for Contextual Code Changes [20.185486717922615]
Given a code snippet that is partially edited, our goal is to predict a completion of the edit for the rest of the snippet.
Our model achieves a 28% relative gain over state-of-the-art sequential models and 2x higher accuracy than syntactic models that learn to generate the edited code.
arXiv Detail & Related papers (2020-05-27T07:16:19Z)
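Following the forward reference in the GRACE entry above, here is a minimal, assumption-laden sketch of a discrete key-value edit codebook attached to one layer's latent space: keys are hidden states of inputs whose outputs needed correction, values are replacement activations, and a deferral radius decides whether an incoming hidden state is close enough to a stored key for the edit to fire. This illustrates the idea described in the abstract summary, not the GRACE codebase; the hidden size, radius, and replacement value below are hypothetical.

```python
# Minimal sketch of a discrete key-value edit codebook for one layer's hidden states.
import torch

class EditCodebook:
    def __init__(self, radius: float = 1.0):
        self.keys, self.values, self.radius = [], [], radius

    def add_edit(self, key: torch.Tensor, value: torch.Tensor):
        # key: hidden state of the input to be corrected; value: replacement activation.
        self.keys.append(key)
        self.values.append(value)

    def __call__(self, hidden: torch.Tensor) -> torch.Tensor:
        if not self.keys:
            return hidden
        dists = torch.stack([torch.dist(hidden, k) for k in self.keys])
        i = int(torch.argmin(dists))
        # Inside the deferral radius: apply the stored local edit; otherwise leave
        # the pre-trained model's activation untouched, so unrelated inputs are unaffected.
        return self.values[i] if dists[i] <= self.radius else hidden

codebook = EditCodebook(radius=0.5)
h_err = torch.randn(768)               # hidden state that led to the wrong output
codebook.add_edit(h_err, h_err + 0.1)  # hypothetical corrected activation
edited = codebook(h_err)               # matches a stored key, so the edit fires
far = codebook(torch.randn(768) * 10)  # far from every key, passes through unchanged
```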
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.