GrACE: Generation using Associated Code Edits
- URL: http://arxiv.org/abs/2305.14129v3
- Date: Wed, 20 Sep 2023 19:46:10 GMT
- Title: GrACE: Generation using Associated Code Edits
- Authors: Priyanshu Gupta, Avishree Khare, Yasharth Bajpai, Saikat Chakraborty,
Sumit Gulwani, Aditya Kanade, Arjun Radhakrishna, Gustavo Soares, Ashish
Tiwari
- Abstract summary: We endow pre-trained large language models (LLMs) of code with the knowledge of prior, relevant edits.
The generative capability of the LLMs helps address the diversity of code changes, and conditioning code generation on prior edits helps capture the latent developer intent.
We evaluate two well-known LLMs, Codex and CodeT5, in zero-shot and fine-tuning settings respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developers expend a significant amount of time in editing code for a variety
of reasons such as bug fixing or adding new features. Designing effective
methods to predict code edits has been an active yet challenging area of
research due to the diversity of code edits and the difficulty of capturing the
developer intent. In this work, we address these challenges by endowing
pre-trained large language models (LLMs) of code with the knowledge of prior,
relevant edits. The generative capability of the LLMs helps address the
diversity in code changes and conditioning code generation on prior edits helps
capture the latent developer intent. We evaluate two well-known LLMs, Codex and
CodeT5, in zero-shot and fine-tuning settings respectively. In our experiments
with two datasets, the knowledge of prior edits boosts the performance of the
LLMs significantly and enables them to generate 29% and 54% more correctly
edited code in top-1 suggestions relative to the current state-of-the-art
symbolic and neural approaches, respectively.
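The core idea lends itself to a simple prompting scheme. The sketch below is illustrative only: the abstract does not specify GrACE's actual prompt format, and `build_edit_prompt` plus the example edits are hypothetical stand-ins for serializing prior associated edits into the context of a completion-style code LLM (e.g., Codex in the paper's zero-shot setting).

```python
# Illustrative sketch (not GrACE's actual prompt format): condition an LLM's
# edit prediction on prior, associated edits by serializing them into the prompt.

def build_edit_prompt(prior_edits, target_before):
    """Serialize prior (before, after) edit pairs plus the code to be
    edited into a single completion prompt."""
    parts = []
    for i, (before, after) in enumerate(prior_edits, 1):
        parts.append(f"# Prior edit {i} - before:\n{before}")
        parts.append(f"# Prior edit {i} - after:\n{after}")
    parts.append(f"# Code to edit - before:\n{target_before}")
    parts.append("# Code to edit - after:")
    return "\n".join(parts)

# Hypothetical example: two nearby edits suggest an ongoing logger migration,
# hinting at the latent intent behind the edit to be predicted.
prior_edits = [
    ("log.info('starting')", "logger.info('starting')"),
    ("log.warn('retrying')", "logger.warning('retrying')"),
]
prompt = build_edit_prompt(prior_edits, "log.error('failed')")
print(prompt)  # the completion returned by the LLM is the predicted edited code
```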
Related papers
- Model Editing for LLMs4Code: How Far are We? (arXiv, 2024-11-11)
Large Language Models for Code (LLMs4Code) have been found to exhibit outstanding performance in the software engineering domain.
However, even the most advanced LLMs4Code can inevitably contain incorrect or outdated code knowledge.
Model editing is a new technical field for effectively and efficiently correcting erroneous knowledge in LLMs.
- What's Wrong with Your Code Generated by Large Language Models? An Extensive Study (arXiv, 2024-07-08)
Large language models (LLMs) produce code that is shorter yet more complicated than canonical solutions.
We develop a taxonomy of bugs for incorrect codes that includes three categories and 12 sub-categories, and analyze the root cause for common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback.
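As a rough illustration of such a loop (the paper's exact prompts, bug taxonomy, and feedback format are not given in the summary; `ask_llm` is a hypothetical stand-in for any model call, and Python's built-in `compile` serves as a minimal compiler-feedback source):

```python
# Generic sketch of a training-free critique-and-repair loop driven by
# compiler feedback; not the paper's actual method.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")

def generate_and_repair(task: str, max_rounds: int = 3) -> str:
    code = ask_llm(f"Write Python code for: {task}")
    for _ in range(max_rounds):
        try:
            compile(code, "<candidate>", "exec")  # syntax-level feedback only
            return code
        except SyntaxError as err:
            # Self-critique: ask the model to explain the bug, then fix it.
            critique = ask_llm(
                f"The code below fails to compile "
                f"({err.msg} at line {err.lineno}). Explain the bug:\n{code}"
            )
            code = ask_llm(
                f"Fix the code using this critique:\n{critique}\n\nCode:\n{code}"
            )
    return code
```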
- VersiCode: Towards Version-controllable Code Generation (arXiv, 2024-06-11)
Large Language Models (LLMs) have made tremendous strides in code generation, but existing research fails to account for the dynamic nature of software development.
We propose two novel tasks aimed at bridging this gap: version-specific code completion (VSCC) and version-aware code migration (VACM).
We conduct an extensive evaluation on VersiCode, which reveals that version-controllable code generation is indeed a significant challenge.
- AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data (arXiv, 2024-05-29)
We present AlchemistCoder, a series of Code LLMs with enhanced code generation and generalization capabilities fine-tuned on multi-source data.
We propose incorporating the data construction process into the fine-tuning data as code comprehension tasks, including instruction evolution, data filtering, and code review.
- CodeEditorBench: Evaluating Code Editing Capability of Large Language Models (arXiv, 2024-04-04)
Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability.
We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks.
We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks.
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions (arXiv, 2023-12-11)
We introduce a benchmark of code editing tasks and use it to evaluate several cutting-edge LLMs.
Our evaluation exposes a significant gap between the capabilities of state-of-the-art open and closed models.
We introduce a new, carefully curated, permissively licensed training dataset of code editing tasks coupled with natural language instructions.
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (arXiv, 2023-10-31)
We explore the use of Large Language Models (LLMs) to edit code based on user instructions.
InstructCoder is the first instruction-tuning dataset designed to adapt LLMs for general-purpose code editing.
Our findings reveal that open-source LLMs fine-tuned on InstructCoder can significantly enhance the accuracy of code edits.
- Coeditor: Leveraging Contextual Changes for Multi-round Code Auto-editing (arXiv, 2023-05-29)
In this work, we explore a multi-round code auto-editing setting, aiming to predict edits to a code region based on recent changes within the same codebase.
Our model, Coeditor, is a fine-tuned language model specifically designed for code editing tasks.
In a simplified single-round, single-edit task, Coeditor significantly outperforms GPT-3.5 and SOTA open-source code completion models.
- CodeT5+: Open Code Large Language Models for Code Understanding and Generation (arXiv, 2023-05-13)
Large language models (LLMs) pretrained on vast source code have achieved prominent progress in code intelligence.
CodeT5+ is a family of encoder-decoder LLMs for code in which component modules can be flexibly combined to suit a wide range of downstream code tasks.
We extensively evaluate CodeT5+ on over 20 code-related benchmarks in different settings, including zero-shot, finetuning, and instruction-tuning.
- Unsupervised Learning of General-Purpose Embeddings for Code Changes (arXiv, 2021-06-03)
We propose an approach for obtaining embeddings of code changes during pre-training.
We evaluate them on two different downstream tasks - applying changes to code and commit message generation.
Our model outperforms the model that uses full edit sequences by 5.9 percentage points in accuracy.
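One plausible input representation for such a code-change encoder is a compact diff rather than a full token-level edit sequence; the sketch below is an assumption for illustration, as the summary does not give the paper's actual pre-training format.

```python
# Sketch of serializing a code change as a unified diff, a compact
# alternative to feeding an encoder the full edit sequence.

import difflib

def change_as_diff(before: str, after: str) -> str:
    """Render a (before, after) code change as a unified diff string."""
    return "".join(
        difflib.unified_diff(
            before.splitlines(keepends=True),
            after.splitlines(keepends=True),
            fromfile="before", tofile="after",
        )
    )

before = "def area(r):\n    return 3.14 * r * r\n"
after = "import math\n\ndef area(r):\n    return math.pi * r * r\n"
print(change_as_diff(before, after))  # compact textual view of the change
```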
This list is automatically generated from the titles and abstracts of the papers in this site.