A Structural Model for Contextual Code Changes
- URL: http://arxiv.org/abs/2005.13209v2
- Date: Mon, 12 Oct 2020 17:52:10 GMT
- Title: A Structural Model for Contextual Code Changes
- Authors: Shaked Brody, Uri Alon and Eran Yahav
- Abstract summary: Given a code snippet that is partially edited, our goal is to predict a completion of the edit for the rest of the snippet.
Our model achieves a 28% relative gain over state-of-the-art sequential models and 2x higher accuracy than syntactic models that learn to generate the edited code.
- Score: 20.185486717922615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of predicting edit completions based on a learned
model that was trained on past edits. Given a code snippet that is partially
edited, our goal is to predict a completion of the edit for the rest of the
snippet. We refer to this task as the EditCompletion task and present a novel
approach for tackling it. The main idea is to directly represent structural
edits. This allows us to model the likelihood of the edit itself, rather than
learning the likelihood of the edited code. We represent an edit operation as a
path in the program's Abstract Syntax Tree (AST), originating from the source
of the edit to the target of the edit. Using this representation, we present a
powerful and lightweight neural model for the EditCompletion task.
We conduct a thorough evaluation, comparing our approach to a variety of
representation and modeling approaches that are driven by multiple strong
models such as LSTMs, Transformers, and neural CRFs. Our experiments show that
our model achieves a 28% relative gain over state-of-the-art sequential models
and 2x higher accuracy than syntactic models that learn to generate the edited
code, as opposed to modeling the edits directly.
Our code, dataset, and trained models are publicly available at
https://github.com/tech-srl/c3po/ .
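Below is a minimal sketch of the paper's central representation idea: an edit is described by the AST path from the edit's source node to its target node. It uses Python's built-in ast module rather than the C# ASTs of the paper's dataset, and the path-extraction details (node selection, path format) are illustrative assumptions, not the authors' implementation.

```python
import ast

def root_path(tree, target):
    """Chain of AST nodes from the root down to `target`."""
    path = [tree]
    while path[-1] is not target:
        for child in ast.iter_child_nodes(path[-1]):
            if any(target is n for n in ast.walk(child)):
                path.append(child)
                break
        else:
            return None  # target is not in this tree
    return path

def edit_path(tree, src_node, tgt_node):
    """Sequence of node types from the edit's source node, up to the
    lowest common ancestor, then down to the edit's target node."""
    up, down = root_path(tree, src_node), root_path(tree, tgt_node)
    lca = 0
    while lca < min(len(up), len(down)) - 1 and up[lca + 1] is down[lca + 1]:
        lca += 1
    upward = [type(n).__name__ for n in reversed(up[lca:])]
    downward = [type(n).__name__ for n in down[lca + 1:]]
    return upward + downward

tree = ast.parse("x = foo(a)\ny = bar(b)")
calls = [n for n in ast.walk(tree) if isinstance(n, ast.Call)]
print(edit_path(tree, calls[0], calls[1]))
# ['Call', 'Assign', 'Module', 'Assign', 'Call']
```

Representing an edit as such a path lets a model score the edit itself rather than regenerate the surrounding code, which is the distinction the abstract draws between modeling edits and modeling edited code.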
Related papers
- Neuron-Level Sequential Editing for Large Language Models [19.324852774144752]
We introduce Neuron-level Sequential Editing (NSE) to support sequential model editing.
Specifically, we optimize the target layer's hidden states using the model's original weights to prevent model failure.
Our experiments demonstrate that NSE significantly outperforms current parameter-modifying model editing methods.
arXiv Detail & Related papers (2024-10-05T05:52:22Z) - The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse [58.0132400208411]
Even a single edit can trigger model collapse, manifesting as significant performance degradation in various benchmark tasks.
However, benchmarking Large Language Models after each edit is impractically time-consuming and resource-intensive.
We have utilized GPT-3.5 to develop a new dataset, HardEdit, based on hard cases.
arXiv Detail & Related papers (2024-02-15T01:50:38Z) - Model Editing at Scale leads to Gradual and Catastrophic Forgetting [2.569159339315845]
We evaluate the current model editing methods at scale, focusing on two state-of-the-art methods: ROME and MEMIT.
We find that as the model is edited sequentially with multiple facts, it continually forgets previously edited facts and the ability to perform downstream tasks.
arXiv Detail & Related papers (2024-01-15T03:57:15Z) - Edit at your own risk: evaluating the robustness of edited models to
distribution shifts [0.0]
We investigate how model editing affects the general robustness of a model, as well as the robustness of the specific behavior targeted by the edit.
We find that edits tend to reduce general robustness, but that the degree of degradation depends on the editing algorithm and layers chosen.
Motivated by these observations, we introduce a new model editing algorithm, 1-layer interpolation (1-LI), which uses weight-space interpolation to navigate the trade-off between editing task accuracy and general robustness.
arXiv Detail & Related papers (2023-02-28T19:41:37Z) - Aging with GRACE: Lifelong Model Editing with Discrete Key-Value
- Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors [53.819805242367345]
We propose GRACE, a lifelong model editing method, which implements spot-fixes on streaming errors of a deployed model.
GRACE writes new mappings into a pre-trained model's latent space, creating a discrete, local codebook of edits without altering model weights.
Our experiments on T5, BERT, and GPT models show GRACE's state-of-the-art performance in making and retaining edits, while generalizing to unseen inputs.
arXiv Detail & Related papers (2022-11-20T17:18:22Z) - Memory-Based Model Editing at Scale [102.28475739907498]
- Memory-Based Model Editing at Scale [102.28475739907498]
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC).
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
arXiv Detail & Related papers (2022-06-13T23:40:34Z) - Learning to Model Editing Processes [98.11448946134894]
- Learning to Model Editing Processes [98.11448946134894]
We propose modeling editing processes, i.e., the whole process of iteratively generating sequences.
We form a conceptual framework to describe the likelihood of multi-step edits, and describe neural models that can learn a generative model of sequences based on these multi-step edits.
arXiv Detail & Related papers (2022-05-24T21:32:52Z) - Fast Model Editing at Scale [77.69220974621425]
We propose Model Editor Networks with Gradient Decomposition (MEND).
MEND is a collection of small auxiliary editing networks that use a single desired input-output pair to make fast, local edits to a pre-trained model.
MEND can be trained on a single GPU in less than a day even for 10 billion+ parameter models.
arXiv Detail & Related papers (2021-10-21T17:41:56Z) - Learning Structural Edits via Incremental Tree Transformations [102.64394890816178]
- Learning Structural Edits via Incremental Tree Transformations [102.64394890816178]
We present a generic model for incremental editing of structured data (i.e., "structural edits").
Our editor learns to iteratively generate tree edits (e.g., deleting or adding a subtree) and applies them to the partially edited data.
We evaluate our proposed editor on two source code edit datasets, where results show that, with the proposed edit encoder, our editor significantly improves accuracy over previous approaches.
arXiv Detail & Related papers (2021-01-28T16:11:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and accepts no responsibility for any consequences of its use.