Aging with GRACE: Lifelong Model Editing with Discrete Key-Value
Adaptors
- URL: http://arxiv.org/abs/2211.11031v5
- Date: Wed, 18 Oct 2023 01:05:05 GMT
- Title: Aging with GRACE: Lifelong Model Editing with Discrete Key-Value
Adaptors
- Authors: Thomas Hartvigsen, Swami Sankaranarayanan, Hamid Palangi, Yoon Kim,
Marzyeh Ghassemi
- Abstract summary: We propose GRACE, a lifelong model editing method, which implements spot-fixes on streaming errors of a deployed model.
GRACE writes new mappings into a pre-trained model's latent space, creating a discrete, local codebook of edits without altering model weights.
Our experiments on T5, BERT, and GPT models show GRACE's state-of-the-art performance in making and retaining edits, while generalizing to unseen inputs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deployed language models decay over time due to shifting inputs, changing
user needs, or emergent world-knowledge gaps. When such problems are
identified, we want to make targeted edits while avoiding expensive retraining.
However, current model editors, which modify such behaviors of pre-trained
models, degrade model performance quickly across multiple, sequential edits. We
propose GRACE, a lifelong model editing method, which implements spot-fixes on
streaming errors of a deployed model, ensuring minimal impact on unrelated
inputs. GRACE writes new mappings into a pre-trained model's latent space,
creating a discrete, local codebook of edits without altering model weights.
This is the first method enabling thousands of sequential edits using only
streaming errors. Our experiments on T5, BERT, and GPT models show GRACE's
state-of-the-art performance in making and retaining edits, while generalizing
to unseen inputs. Our code is available at
https://www.github.com/thartvigsen/grace.
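The abstract describes the mechanism only at a high level, so here is a minimal sketch of a GRACE-style discrete key-value adaptor in PyTorch. It is a simplified illustration under stated assumptions (single activation vectors, Euclidean distance, a fixed initial deferral radius, and corrective values supplied by the caller), not the authors' released implementation; see the repository linked above for that.

```python
import torch
import torch.nn as nn


class GraceStyleAdaptor(nn.Module):
    """Wraps one frozen layer and overrides its output whenever the incoming
    activation falls within the deferral radius of a stored edit key.
    The base model's weights are never modified."""

    def __init__(self, layer: nn.Module, init_radius: float = 1.0):
        super().__init__()
        self.layer = layer                # frozen pre-trained layer
        self.init_radius = init_radius    # radius assigned to newly added keys
        self.keys, self.values, self.radii = [], [], []

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: a single activation vector, kept 1-D for simplicity in this sketch.
        out = self.layer(h)
        if not self.keys:
            return out                        # no edits yet: identical to the base model
        dists = torch.stack([torch.norm(h - k) for k in self.keys])
        nearest = int(torch.argmin(dists))
        if dists[nearest] <= self.radii[nearest]:
            return self.values[nearest]       # spot-fix: emit the stored corrective activation
        return out                            # unrelated input: leave behavior untouched

    @torch.no_grad()
    def add_edit(self, h: torch.Tensor, corrective_value: torch.Tensor) -> None:
        # Cache the error input's activation as a key. In the full method the value
        # would be optimized so downstream layers produce the desired output.
        self.keys.append(h.detach().clone())
        self.values.append(corrective_value.detach().clone())
        self.radii.append(torch.tensor(self.init_radius))
```

The full method also manages deferral radii as new, possibly conflicting edits stream in so that each edit stays local; that bookkeeping is omitted here.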
Related papers
- Neuron-Level Sequential Editing for Large Language Models [19.324852774144752]
We introduce Neuron-level Sequential Editing (NSE) to support sequential model editing.
Specifically, we optimize the target layer's hidden states using the model's original weights to prevent model failure.
Our experiments demonstrate that NSE significantly outperforms current parameter-modifying model editing methods.
arXiv Detail & Related papers (2024-10-05T05:52:22Z) - Better Call SAUL: Fluent and Consistent Language Model Editing with Generation Regularization [48.07144492109635]
Large language models need to be updated regularly.
Model editing is challenging as it might also affect knowledge that is unrelated to the new data.
We propose SAUL, a streamlined model editing method that uses sentence concatenation with augmented random facts for generation regularization.
arXiv Detail & Related papers (2024-10-03T12:28:13Z) - The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse [58.0132400208411]
Even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.
However, benchmarking large language models after each edit is impractically time-consuming and resource-intensive.
We use GPT-3.5 to develop HardEdit, a new dataset built from such hard editing cases.
arXiv Detail & Related papers (2024-02-15T01:50:38Z) - Transformer-Patcher: One Mistake worth One Neuron [40.04159325505842]
In the deployment of AI services, there are ever-emerging mistakes, and the same mistake may recur if not corrected in time.
We introduce Transformer-Patcher, a novel model editor that can shift the behavior of transformer-based models by simply adding and training a few neurons.
Our method outperforms previous fine-tuning and HyperNetwork-based methods and achieves state-of-the-art performance for Sequential Model Editing (SME).
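Below is a minimal, hedged sketch of the one-mistake-one-neuron idea: a single extra key-value neuron appended to a frozen feed-forward block, gated so it fires near the mistaken input and shifts the output there. The class name, gating bias, and omitted training objective are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn


class PatchedFFN(nn.Module):
    """A frozen two-layer FFN plus one trainable patch neuron (key, value, bias)."""

    def __init__(self, ffn_in: nn.Linear, ffn_out: nn.Linear):
        super().__init__()
        self.ffn_in, self.ffn_out = ffn_in, ffn_out
        for p in list(ffn_in.parameters()) + list(ffn_out.parameters()):
            p.requires_grad_(False)                            # only the patch is trained
        d_model = ffn_in.in_features
        self.patch_key = nn.Parameter(torch.zeros(d_model))    # learns to match the error input
        self.patch_value = nn.Parameter(torch.zeros(d_model))  # learns the corrective shift
        self.patch_bias = nn.Parameter(torch.tensor(-1.0))     # keeps the neuron silent elsewhere

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        base = self.ffn_out(torch.relu(self.ffn_in(h)))           # original (frozen) computation
        gate = torch.relu(h @ self.patch_key + self.patch_bias)   # ideally nonzero only near the mistake
        return base + gate.unsqueeze(-1) * self.patch_value
```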
arXiv Detail & Related papers (2023-01-24T02:12:42Z) - Does Localization Inform Editing? Surprising Differences in
Causality-Based Localization vs. Knowledge Editing in Language Models [68.03946716358335]
We find that we can change how a fact is stored in a model by editing weights that are in a different location than where existing methods suggest that the fact is stored.
This is surprising because we would expect that localizing facts to specific model parameters would tell us where to manipulate knowledge in models.
Our results suggest, counterintuitively, that better mechanistic understanding of how pretrained language models work may not always translate to insights about how to best change their behavior.
arXiv Detail & Related papers (2023-01-10T21:26:08Z) - Memory-Based Model Editing at Scale [102.28475739907498]
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC).
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
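A minimal sketch of that semi-parametric routing follows, with heavy hedging: the scope classifier and counterfactual model are stand-in callables rather than the trained components from the paper, and the threshold is an assumed hyperparameter.

```python
from typing import Callable, List


class SeracStyleRouter:
    """Route queries: in-scope ones go to a counterfactual model conditioned on a
    stored edit; everything else falls through to the unmodified base model."""

    def __init__(self,
                 base_model: Callable[[str], str],
                 scope_score: Callable[[str, str], float],    # similarity(query, stored edit)
                 counterfactual: Callable[[str, str], str],   # answer(query, relevant edit)
                 threshold: float = 0.5):
        self.base_model = base_model
        self.scope_score = scope_score
        self.counterfactual = counterfactual
        self.threshold = threshold
        self.memory: List[str] = []                           # edits kept as explicit text entries

    def add_edit(self, edit: str) -> None:
        self.memory.append(edit)

    def __call__(self, query: str) -> str:
        if self.memory:
            best = max(self.memory, key=lambda e: self.scope_score(query, e))
            if self.scope_score(query, best) >= self.threshold:
                return self.counterfactual(query, best)       # in scope: modulated prediction
        return self.base_model(query)                         # out of scope: base model unchanged
```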
arXiv Detail & Related papers (2022-06-13T23:40:34Z) - Fast Model Editing at Scale [77.69220974621425]
We propose Model Editor Networks with Gradient Decomposition (MEND).
MEND is a collection of small auxiliary editing networks that use a single desired input-output pair to make fast, local edits to a pre-trained model.
MEND can be trained on a single GPU in less than a day even for 10 billion+ parameter models.
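A minimal, hedged sketch of the gradient-decomposition idea: for a linear layer, the fine-tuning gradient from one example is a rank-one outer product of the layer input and the output-side gradient, so small editor networks can transform those two factors and apply a cheap low-rank update. The editor architectures, shapes, and step size below are illustrative assumptions, and the meta-training of the editors is omitted.

```python
import torch
import torch.nn as nn


def mend_style_edit(weight: nn.Parameter, u: torch.Tensor, delta: torch.Tensor,
                    edit_u: nn.Module, edit_delta: nn.Module, lr: float = 1.0) -> None:
    """Apply a low-rank edit W <- W - lr * g(delta) h(u)^T to one weight matrix,
    where u is the layer input and delta is the gradient at the layer output."""
    u_t = edit_u(u)            # transformed input factor, shape [d_in]
    d_t = edit_delta(delta)    # transformed output-gradient factor, shape [d_out]
    with torch.no_grad():
        weight -= lr * torch.outer(d_t, u_t)


# Illustrative wiring with small MLP editors (shapes are arbitrary):
d_in, d_out = 16, 8
layer = nn.Linear(d_in, d_out)
edit_u = nn.Sequential(nn.Linear(d_in, d_in), nn.ReLU(), nn.Linear(d_in, d_in))
edit_delta = nn.Sequential(nn.Linear(d_out, d_out), nn.ReLU(), nn.Linear(d_out, d_out))
mend_style_edit(layer.weight, torch.randn(d_in), torch.randn(d_out), edit_u, edit_delta)
```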
arXiv Detail & Related papers (2021-10-21T17:41:56Z) - A Structural Model for Contextual Code Changes [20.185486717922615]
Given a code snippet that is partially edited, our goal is to predict a completion of the edit for the rest of the snippet.
Our model achieves a 28% relative gain over state-of-the-art sequential models and 2x higher accuracy than syntactic models that learn to generate the edited code.
arXiv Detail & Related papers (2020-05-27T07:16:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.