Learning Structural Edits via Incremental Tree Transformations
- URL: http://arxiv.org/abs/2101.12087v2
- Date: Fri, 5 Mar 2021 00:46:18 GMT
- Title: Learning Structural Edits via Incremental Tree Transformations
- Authors: Ziyu Yao, Frank F. Xu, Pengcheng Yin, Huan Sun, Graham Neubig
- Abstract summary: We present a generic model for incremental editing of structured data (i.e., "structural edits").
Our editor learns to iteratively generate tree edits (e.g., deleting or adding a subtree) and applies them to the partially edited data.
We evaluate our proposed editor on two source code edit datasets, where results show that, with the proposed edit encoder, our editor significantly improves accuracy over previous approaches.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While most neural generative models generate outputs in a single pass, the
human creative process is usually one of iterative building and refinement.
Recent work has proposed models of editing processes, but these mostly focus on
editing sequential data and/or only model a single editing pass. In this paper,
we present a generic model for incremental editing of structured data (i.e.,
"structural edits"). Particularly, we focus on tree-structured data, taking
abstract syntax trees of computer programs as our canonical example. Our editor
learns to iteratively generate tree edits (e.g., deleting or adding a subtree)
and applies them to the partially edited data, so that the entire editing
process can be formulated as consecutive, incremental tree transformations. To
show the unique benefits of modeling tree edits directly, we further propose a
novel edit encoder for learning to represent edits, as well as an imitation
learning method that allows the editor to be more robust. We evaluate our
proposed editor on two source code edit datasets, where results show that, with
the proposed edit encoder, our editor significantly improves accuracy over
previous approaches that generate the edited program directly in one pass.
Finally, we demonstrate that training our editor to imitate experts and correct
its mistakes dynamically can further improve its performance.
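To make the formulation concrete, below is a minimal Python sketch of editing as consecutive, incremental tree transformations. The `TreeNode`, `Delete`, and `Add` classes and the `apply_edit_sequence` helper are illustrative assumptions for this digest, not the paper's actual operator set or implementation (the paper's editor, for instance, predicts edits with a neural model rather than taking them as given).

```python
# Minimal illustrative sketch (not the paper's code): a program is a tree,
# and each edit transforms the current partially edited tree, so the whole
# editing process is a sequence of tree states g_0 -> g_1 -> ... -> g_T.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class TreeNode:
    label: str
    children: List["TreeNode"] = field(default_factory=list)

@dataclass
class Delete:
    """Remove the subtree at position `index` under `parent`."""
    parent: TreeNode
    index: int

@dataclass
class Add:
    """Insert `subtree` at position `index` under `parent`."""
    parent: TreeNode
    index: int
    subtree: TreeNode

Edit = Union[Delete, Add]

def apply_edit(edit: Edit) -> None:
    # Apply one tree edit in place, yielding the next partial tree.
    if isinstance(edit, Delete):
        edit.parent.children.pop(edit.index)
    else:
        edit.parent.children.insert(edit.index, edit.subtree)

def apply_edit_sequence(root: TreeNode, edits: List[Edit]) -> TreeNode:
    # The entire editing process as consecutive, incremental transformations.
    for e in edits:
        apply_edit(e)
    return root

# Example: turn the call f(x) into f(x, y) with a single Add edit.
call = TreeNode("Call", [TreeNode("f"), TreeNode("x")])
apply_edit_sequence(call, [Add(call, 2, TreeNode("y"))])
assert [c.label for c in call.children] == ["f", "x", "y"]
```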
Related papers
- CoEdPilot: Recommending Code Edits with Learned Prior Edit Relevance, Project-wise Awareness, and Interactive Nature
We propose CoEdPilot, an LLM-driven solution that recommends code edits by discriminating which edits are relevant.
CoEdPilot orchestrates multiple neural transformers to identify what to edit and how, covering both edit location and edit content.
Our experiments show that CoEdPilot predicts edits well, locating them with 70.8%-85.3% accuracy and generating edit content with a 41.8% exact-match rate and a BLEU4 score of 60.7.
arXiv Detail & Related papers (2024-08-03T10:23:05Z)
- Rebuilding ROME: Resolving Model Collapse during Sequential Model Editing
We show that disabling edits are an artifact of irregularities in the implementation of Rank-One Model Editing (ROME).
We provide a more stable implementation of ROME, which we call r-ROME, and show that model collapse is no longer observed when making large-scale sequential edits with r-ROME.
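For context on what a rank-one edit is, here is a simplified numpy sketch of the closed-form update from the original ROME paper as I understand it: a weight matrix W is edited so that a key vector k* (representing a subject) maps to a new value vector v*, with a key second-moment matrix C limiting interference with other keys. This is an illustration, not the r-ROME implementation discussed here.

```python
# Simplified sketch of a ROME-style rank-one weight edit (illustrative).
import numpy as np

def rank_one_edit(W: np.ndarray, C: np.ndarray,
                  k_star: np.ndarray, v_star: np.ndarray) -> np.ndarray:
    """Return W' = W + lam (C^-1 k*)^T, chosen so that W' k* = v*."""
    c_inv_k = np.linalg.solve(C, k_star)               # C^{-1} k*
    lam = (v_star - W @ k_star) / (c_inv_k @ k_star)   # scaled residual
    return W + np.outer(lam, c_inv_k)                  # rank-one update

# Tiny check that the edited matrix maps k* to the new value v*.
rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))
K = rng.normal(size=(d, 100))                 # sample keys
C = K @ K.T / 100                             # key second-moment estimate
k_star, v_star = rng.normal(size=d), rng.normal(size=d)
assert np.allclose(rank_one_edit(W, C, k_star, v_star) @ k_star, v_star)
```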
arXiv Detail & Related papers (2024-03-11T21:33:05Z)
- InstructEdit: Instruction-based Knowledge Editing for Large Language Models
We develop an instruction-based editing technique, termed InstructEdit, which enables the editor to adapt to various tasks simultaneously using simple instructions.
Experiments on held-out unseen tasks show that InstructEdit consistently surpasses strong previous baselines.
arXiv Detail & Related papers (2024-02-25T15:46:33Z)
- Object-aware Inversion and Reassembly for Image Editing
We propose Object-aware Inversion and Reassembly (OIR) to enable object-level fine-grained editing.
We design a search metric that finds the optimal inversion step for each editing pair when editing an image.
Our method achieves superior performance in editing object shapes, colors, materials, categories, etc., especially in multi-object editing scenarios.
arXiv Detail & Related papers (2023-10-18T17:59:02Z)
- Memory-Based Model Editing at Scale
Existing model editors struggle to accurately model an edit's intended scope.
We propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC).
SERAC stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed.
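A rough sketch of the control flow this summary describes, using the three components it names (an explicit edit memory, a scope classifier, and a counterfactual model); the class and method names below are assumptions for illustration, not the released SERAC API.

```python
# Hedged sketch of memory-based editing in the SERAC style (illustrative,
# not the released implementation): edits live in an explicit memory, a
# scope score decides whether an input falls under any stored edit, and a
# counterfactual model answers in-scope inputs while the base model stays
# untouched for everything else.
from typing import Callable, List

class SeracStyleEditor:
    def __init__(self,
                 base_model: Callable[[str], str],
                 counterfactual_model: Callable[[str, str], str],
                 scope_score: Callable[[str, str], float],
                 threshold: float = 0.5):
        self.base_model = base_model
        self.counterfactual_model = counterfactual_model
        self.scope_score = scope_score          # input-vs-edit relevance
        self.threshold = threshold
        self.edit_memory: List[str] = []        # explicit store of edits

    def add_edit(self, edit: str) -> None:
        # "Editing" the model is just appending to the memory.
        self.edit_memory.append(edit)

    def predict(self, x: str) -> str:
        if self.edit_memory:
            best_edit, best = max(
                ((e, self.scope_score(x, e)) for e in self.edit_memory),
                key=lambda pair: pair[1])
            if best >= self.threshold:          # x is in some edit's scope
                return self.counterfactual_model(x, best_edit)
        return self.base_model(x)               # out of scope: base model

# Toy usage with stand-in components.
editor = SeracStyleEditor(
    base_model=lambda x: "Paris",
    counterfactual_model=lambda x, e: e.split("->")[-1].strip(),
    scope_score=lambda x, e: 1.0 if "capital" in x and "capital" in e else 0.0)
editor.add_edit("capital of France -> Lyon")
assert editor.predict("What is the capital of France?") == "Lyon"
```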
arXiv Detail & Related papers (2022-06-13T23:40:34Z)
- Learning to Model Editing Processes
We propose modeling editing processes: the whole process of iteratively generating sequences through multiple revisions.
We form a conceptual framework to describe the likelihood of multi-step edits, and describe neural models that can learn a generative model of sequences based on these multi-step edits.
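As a rough illustration (the notation here is mine, not necessarily the paper's), such a multi-step likelihood can be written by marginalizing over edit trajectories from an initial draft to the final sequence:

```latex
% Illustrative multi-step edit likelihood: the final sequence x_T arises
% from a draft x_0 refined by edits e_1, ..., e_T, with
% x_t = apply(x_{t-1}, e_t); the marginal sums over all trajectories.
\[
  p(x_T) \;=\; \sum_{x_0,\; e_{1:T}} p(x_0)
      \prod_{t=1}^{T} p\!\left(e_t \mid x_{t-1}\right),
  \qquad x_t = \mathrm{apply}(x_{t-1}, e_t).
\]
```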
arXiv Detail & Related papers (2022-05-24T21:32:52Z)
- Understanding Iterative Revision from Human-Written Text
IteraTeR is the first large-scale, multi-domain, edit-intention annotated corpus of iteratively revised text.
With IteraTeR, we better understand the text revision process, making vital connections between edit intentions and writing quality.
arXiv Detail & Related papers (2022-03-08T01:47:42Z)
- EditGAN: High-Precision Semantic Image Editing
EditGAN is a novel method for high-quality, high-precision semantic image editing.
We show that EditGAN can manipulate images with an unprecedented level of detail and freedom.
We can also easily combine multiple edits and perform plausible edits beyond EditGAN training data.
arXiv Detail & Related papers (2021-11-04T22:36:33Z)
- Text Editing by Command
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.
We address this limitation with an interactive text generation setting in which the user interacts with the system by issuing commands to edit existing text.
We show that our Interactive Editor, a transformer-based model trained on a dataset of such editing commands, outperforms baselines and obtains positive results in both automatic and human evaluations.
arXiv Detail & Related papers (2020-10-24T08:00:30Z)
- A Structural Model for Contextual Code Changes
Given a code snippet that is partially edited, our goal is to predict a completion of the edit for the rest of the snippet.
Our model achieves a 28% relative gain over state-of-the-art sequential models and 2x higher accuracy than syntactic models that learn to generate the edited code.
arXiv Detail & Related papers (2020-05-27T07:16:19Z)