Less Training, More Repairing Please: Revisiting Automated Program Repair via Zero-shot Learning
- URL: http://arxiv.org/abs/2207.08281v3
- Date: Thu, 05 Dec 2024 00:53:15 GMT
- Title: Less Training, More Repairing Please: Revisiting Automated Program Repair via Zero-shot Learning
- Authors: Chunqiu Steven Xia, Lingming Zhang
- Abstract summary: We propose AlphaRepair as a practical multilingual APR tool based on the recent CodeBERT model.
Our results on the widely used Defects4J benchmark show that AlphaRepair can substantially outperform state-of-the-art APR tools.
- Score: 13.632199062382746
- Abstract: Due to the promising future of Automated Program Repair (APR), researchers have proposed various APR techniques, including heuristic-based, template-based, and constraint-based techniques. Among such classic APR techniques, template-based techniques have been widely recognized as the state of the art. However, such template-based techniques require predefined templates to perform repair, and their effectiveness is thus limited. To this end, researchers have leveraged recent advances in Deep Learning to further improve APR. Such learning-based techniques view APR as a Neural Machine Translation problem, using buggy/fixed code snippets as the source/target languages for translation. In this way, such techniques rely heavily on large numbers of high-quality bug-fixing commits, which can be extremely costly and challenging to construct. Furthermore, the edit variety of these learning-based techniques is limited to the bug fixes available in their training datasets. Therefore, in this paper, we revisit the learning-based APR problem and propose AlphaRepair, which leverages zero-shot learning directly on large pre-trained code models for APR. Our main insight is that, instead of modeling what a repair edit should look like, we can directly predict what the correct code is based on the context information. We have implemented AlphaRepair as a practical multilingual APR tool based on the recent CodeBERT model. Our results on the widely used Defects4J benchmark show that AlphaRepair can substantially outperform state-of-the-art APR tools. We also study the impact of different design choices and show that AlphaRepair performs even better on a newer version of Defects4J (2.0), producing 3.3X more fixes than the best-performing baseline, which indicates that AlphaRepair can potentially avoid the dataset-overfitting issue of existing learning-based techniques.
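To make the cloze-style formulation concrete, the sketch below masks a suspicious token and lets a masked code model rank replacements purely from the surrounding context. It assumes the public HuggingFace `microsoft/codebert-base-mlm` checkpoint; AlphaRepair's actual pipeline additionally generates multi-token mask templates, re-ranks whole candidate lines, and validates patches against the test suite.

```python
# Minimal sketch of cloze-style ("infilling") repair with a masked code model.
# Assumes the HuggingFace `microsoft/codebert-base-mlm` checkpoint; this is an
# illustration of the idea, not AlphaRepair's full implementation.
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base-mlm")
model = RobertaForMaskedLM.from_pretrained("microsoft/codebert-base-mlm")
model.eval()

# A suspicious token in the buggy line is replaced by the mask token,
# keeping its surrounding context intact.
code = "if (index <mask> values.length) { return values[index]; }"
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Rank candidate tokens for the masked position by model probability.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top5 = logits[0, mask_pos].topk(5).indices
print([tokenizer.decode(t).strip() for t in top5])  # e.g. "<", "<=", ...
```

A comparison operator such as `<` outranking `<=` here is exactly the kind of single-token fix the cloze formulation can recover without any bug-fix training data.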
Related papers
- Self-Improvement in Language Models: The Sharpening Mechanism [70.9248553790022]
We offer a new perspective on the capabilities of self-improvement through a lens we refer to as sharpening.
Motivated by the observation that language models are often better at verifying response quality than they are at generating correct responses, we formalize self-improvement as using the model itself as a verifier during post-training.
We analyze two natural families of self-improvement algorithms based on SFT and RLHF.
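Read concretely, sharpening amounts to best-of-N sampling with the model scoring its own outputs. Below is a minimal sketch with a small stand-in causal LM; the checkpoint and decoding settings are illustrative, not the paper's setup:

```python
# Hypothetical sketch of "sharpening" as best-of-N self-verification: the model
# generates candidates, then acts as its own verifier by scoring each one's
# sequence log-likelihood. "gpt2" is a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sequence_logprob(text: str) -> float:
    """Model's own log-probability of the full sequence (the 'verifier')."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    logp = torch.log_softmax(logits[:, :-1], dim=-1)
    return logp.gather(2, ids[:, 1:, None]).sum().item()

prompt = "Q: What is 2 + 2? A:"
ids = tok(prompt, return_tensors="pt").input_ids
candidates = lm.generate(ids, do_sample=True, num_return_sequences=4,
                         max_new_tokens=8, top_p=0.95,
                         pad_token_id=tok.eos_token_id)
texts = [tok.decode(c, skip_special_tokens=True) for c in candidates]
print(max(texts, key=sequence_logprob))  # keep the self-preferred response
```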
arXiv Detail & Related papers (2024-12-02T20:24:17Z)
- Repairs in a Block World: A New Benchmark for Handling User Corrections with Multi-Modal Language Models [48.42142115255159]
We release BlockWorld-Repairs: a dataset of multi-modal TPR sequences in an instruction-following manipulation task.
We evaluate several state-of-the-art Vision and Language Models (VLM) across multiple settings, focusing on their capability to process and accurately respond to TPRs.
Our results suggest that these models are not yet ready to be deployed in multi-modal collaborative settings.
arXiv Detail & Related papers (2024-09-21T21:06:25Z)
- NARRepair: Non-Autoregressive Code Generation Model for Automatic Program Repair [8.77021401961262]
Non-autoregressive (NAR) methods output target code in parallel, avoiding the large inference delays of token-by-token decoding.
We propose NARRepair, the first customized NAR code generation model for the APR task.
The NARRepair features three major novelties, including 1) using repair actions to alleviate the over-correction issue, 2) extracting dependency information from AST to alleviate the issue of lacking inter-word dependency information, and 3) employing two-stage decoding to alleviate the issue of lacking contextual information.
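The speed argument rests on decoding all target positions in a single parallel forward pass. A toy sketch of that core NAR step follows, with stand-in modules rather than NARRepair's actual architecture:

```python
# Toy sketch of the core non-autoregressive (NAR) idea: all target positions
# are predicted in one parallel forward pass instead of token-by-token.
# The encoder/decoder here are stand-ins, not NARRepair's architecture.
import torch
import torch.nn as nn

vocab, dim, tgt_len = 1000, 64, 16

encoder = nn.Embedding(vocab, dim)
decoder = nn.Linear(dim, vocab)  # per-position classifier over the vocabulary

src = torch.randint(0, vocab, (1, tgt_len))   # buggy-code token ids
hidden = encoder(src)                          # [1, tgt_len, dim]
logits = decoder(hidden)                       # [1, tgt_len, vocab]

# One argmax per position, all positions at once: no left-to-right dependency,
# which is what removes the autoregressive inference bottleneck.
fixed = logits.argmax(dim=-1)
print(fixed.shape)  # torch.Size([1, 16])
```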
arXiv Detail & Related papers (2024-06-24T11:04:28Z)
- How Far Can We Go with Practical Function-Level Program Repair? [11.71750828464698]
This paper investigates the effect of the few-shot learning mechanism and of auxiliary repair-relevant information on function-level APR.
We propose an LLM-based function-level APR technique, namely SRepair, which adopts a dual-LLM framework to leverage the power of the auxiliary repair-relevant information.
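The dual-LLM division of labor can be sketched schematically: one call distills the auxiliary information into a natural-language repair suggestion, and a second call produces the patched function. The `call_llm` helper below is a placeholder, not SRepair's actual prompts or models:

```python
# Schematic sketch of a dual-LLM function-level repair loop: a first model
# turns auxiliary information (failing tests, error messages) into a
# natural-language repair suggestion; a second model rewrites the function
# guided by it. `call_llm` is a placeholder for any chat-completion client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def function_level_repair(buggy_function: str, test_output: str) -> str:
    suggestion = call_llm(
        "The following function fails its tests.\n"
        f"Function:\n{buggy_function}\n"
        f"Test output:\n{test_output}\n"
        "Describe, in plain language, the likely root cause and a fix."
    )
    patched = call_llm(
        "Rewrite the function so it passes its tests.\n"
        f"Function:\n{buggy_function}\n"
        f"Repair suggestion:\n{suggestion}\n"
        "Return only the complete fixed function."
    )
    return patched
```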
arXiv Detail & Related papers (2024-04-19T12:14:09Z)
- A Novel Approach for Automatic Program Repair using Round-Trip Translation with Large Language Models [50.86686630756207]
Research shows that grammatical mistakes in a sentence can be corrected by translating it to another language and back.
Current generative models for Automatic Program Repair (APR) are pre-trained on source code and fine-tuned for repair.
This paper proposes bypassing the fine-tuning step and using Round-Trip Translation (RTT): translation of code from one programming language to another programming or natural language, and back.
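The round trip can be sketched with two prompts to any code-capable LLM; the `call_llm` helper below is a placeholder, not the paper's implementation:

```python
# Minimal sketch of Round-Trip Translation (RTT) repair: translate the buggy
# code to a natural-language description, then translate the description back
# to code, so the regeneration "translates away" the bug.
# `call_llm` is a placeholder for any code-capable LLM client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def round_trip_repair(buggy_code: str, language: str = "Java") -> str:
    description = call_llm(
        f"Describe precisely what this {language} code is intended to do:\n"
        f"{buggy_code}"
    )
    return call_llm(
        f"Write {language} code that does the following:\n{description}\n"
        "Return only the code."
    )
```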
arXiv Detail & Related papers (2024-01-15T22:36:31Z)
- RAP-Gen: Retrieval-Augmented Patch Generation with CodeT5 for Automatic Program Repair [75.40584530380589]
We propose RAP-Gen, a novel Retrieval-Augmented Patch Generation framework that explicitly leverages relevant fix patterns retrieved from a list of previous bug-fix pairs.
We evaluate RAP-Gen on three benchmarks in two programming languages, including the TFix benchmark in JavaScript, and Code Refinement and Defects4J benchmarks in Java.
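The retrieval-augmented step can be approximated as: find the most similar previous bug, then prepend its fix pattern to the generator's input. The lexical retriever below is a stand-in for RAP-Gen's hybrid retriever, and `generate_patch` for its CodeT5 generator:

```python
# Sketch of retrieval-augmented patch generation: retrieve the most similar
# previous bug-fix pair and prepend it to the generator's input as a fix
# pattern. The difflib-based retriever and the prompt format are stand-ins.
import difflib

bug_fix_pairs = [
    ("if (i <= a.length)", "if (i < a.length)"),
    ("return x == null;", "return x != null;"),
]

def retrieve(buggy: str) -> tuple[str, str]:
    """Return the stored bug-fix pair whose bug is lexically closest."""
    return max(bug_fix_pairs,
               key=lambda p: difflib.SequenceMatcher(None, buggy, p[0]).ratio())

def generate_patch(prompt: str) -> str:
    raise NotImplementedError("plug in a seq2seq patch generator here")

buggy = "if (idx <= items.length)"
prev_bug, prev_fix = retrieve(buggy)
prompt = f"bug: {prev_bug} fix: {prev_fix} </s> bug: {buggy} fix:"
```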
arXiv Detail & Related papers (2023-09-12T08:52:56Z)
- The Wisdom of Hindsight Makes Language Models Better Instruction Followers [84.9120606803906]
Reinforcement learning has seen wide success in finetuning large language models to better align with instructions via human feedback.
In this paper, we consider an alternative approach: converting feedback to instruction by relabeling the original one and training the model for better alignment in a supervised manner.
We propose Hindsight Instruction Relabeling (HIR), a novel algorithm for aligning language models with instructions.
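The core relabeling step can be illustrated with a toy sketch: failed rollouts are not discarded or penalized but paired with a hindsight instruction that their output does satisfy, yielding purely supervised training data. The relabeling rule below is a toy stand-in for the paper's task-specific relabelers:

```python
# Toy sketch of Hindsight Instruction Relabeling (HIR): instead of penalizing
# an output that missed the instruction, relabel the instruction to one the
# output *does* satisfy, and reuse the pair as supervised training data.
def relabel(instruction: str, output: str) -> str:
    # In hindsight, describe what the model actually produced (toy rule).
    return f"Produce the following text: {output}"

def make_sft_examples(
    rollouts: list[tuple[str, str, bool]]
) -> list[tuple[str, str]]:
    examples = []
    for instruction, output, satisfied in rollouts:
        goal = instruction if satisfied else relabel(instruction, output)
        examples.append((goal, output))  # always a "correct" pair after relabeling
    return examples
```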
arXiv Detail & Related papers (2023-02-10T12:16:38Z)
- A Survey of Learning-based Automated Program Repair [12.09968472868107]
Automated program repair (APR) aims to fix software bugs automatically and plays a crucial role in software development and maintenance.
With the recent advances in deep learning (DL), an increasing number of APR techniques have been proposed to leverage neural networks to learn bug-fixing patterns from massive open-source code repositories.
This paper provides a systematic survey to summarize the current state-of-the-art research in the learning-based APR community.
arXiv Detail & Related papers (2023-01-09T11:08:15Z)
- Improving Automated Program Repair with Domain Adaptation [0.0]
Automated Program Repair (APR) is defined as the process of fixing a bug/defect in the source code, by an automated tool.
APR tools have recently achieved promising results by leveraging state-of-the-art Natural Language Processing (NLP) techniques.
arXiv Detail & Related papers (2022-12-21T23:52:09Z)
- Lexically Aware Semi-Supervised Learning for OCR Post-Correction [90.54336622024299]
Much of the existing linguistic data in many languages of the world is locked away in non-digitized books and documents.
Previous work has demonstrated the utility of neural post-correction methods on recognition of less-well-resourced languages.
We present a semi-supervised learning method that makes it possible to utilize raw images to improve performance.
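The semi-supervised loop can be sketched as confidence-filtered self-training on raw, unlabeled OCR output; everything below, including the `model` interface, is a generic stand-in, since the paper's method additionally builds in lexical awareness:

```python
# Generic sketch of the semi-supervised idea: a post-correction model trained
# on a small labeled set produces pseudo-labels for raw (unlabeled) OCR output,
# which are filtered by confidence and added back as training data.
# `model` is a hypothetical seq2seq corrector, not the paper's system.
def self_training_round(model, labeled, unlabeled, threshold=0.9):
    pseudo = []
    for noisy_text in unlabeled:
        corrected, confidence = model.predict_with_confidence(noisy_text)
        if confidence >= threshold:          # keep only confident corrections
            pseudo.append((noisy_text, corrected))
    model.train(labeled + pseudo)            # retrain on the expanded set
    return model
```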
arXiv Detail & Related papers (2021-11-04T04:39:02Z)
- CURE: Code-Aware Neural Machine Translation for Automatic Program Repair [11.556110575946631]
We propose CURE, a new NMT-based APR technique with three major novelties.
First, CURE pre-trains a programming language (PL) model on a large software codebase to learn developer-like source code before the APR task.
Second, CURE designs a new code-aware search strategy that finds more correct fixes by focusing on compilable patches and patches that are close in length to the buggy code.
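The code-aware search component can be approximated by a filter-and-rank pass over candidate patches, as sketched below; the Python `ast` parse check stands in for the compilability signal (CURE itself targets Java and works at the subword level), and length closeness to the buggy code drives the ranking:

```python
# Sketch of a code-aware candidate filter in the spirit of CURE's search
# strategy: keep candidates that parse, then prefer those whose length is
# close to the buggy code's. `parses` is a stand-in compilability check.
import ast

def parses(patch: str) -> bool:
    try:
        ast.parse(patch)          # Python parse check as a stand-in
        return True
    except SyntaxError:
        return False

def rank_patches(buggy: str, candidates: list[str]) -> list[str]:
    valid = [p for p in candidates if parses(p)]
    return sorted(valid, key=lambda p: abs(len(p) - len(buggy)))

print(rank_patches("idx <= len(items)",
                   ["idx < len(items)", "idx < len(items"]))
# -> ['idx < len(items)']  (the unparseable candidate is filtered out)
```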
arXiv Detail & Related papers (2021-02-26T22:30:28Z)