NARRepair: Non-Autoregressive Code Generation Model for Automatic Program Repair
- URL: http://arxiv.org/abs/2406.16526v1
- Date: Mon, 24 Jun 2024 11:04:28 GMT
- Title: NARRepair: Non-Autoregressive Code Generation Model for Automatic Program Repair
- Authors: Zhenyu Yang, Zhen Yang, Zhongxing Yu
- Abstract summary: The Non-Autoregressive (NAR) method can output target code in parallel, avoiding large inference delays.
We propose NARRepair, the first customized NAR code generation model for the APR task.
NARRepair features three major novelties: 1) using repair actions to alleviate the over-correction issue, 2) extracting dependency information from the AST to alleviate the lack of inter-word dependency information, and 3) employing two-stage decoding to alleviate the lack of contextual information.
- Score: 8.77021401961262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advancement of deep learning techniques, the performance of Automatic Program Repair (APR) techniques has reached a new level. Previous deep learning-based APR techniques essentially modify program sentences in the Autoregressive (AR) manner, which predicts future values based on past values. Because of this word-by-word generation, AR-based APR techniques suffer from large inference delays, which hinders their widespread adoption in real-life software development. To address the issue, we aim to apply the Non-Autoregressive (NAR) method to the APR task, which outputs target code in parallel to avoid large inference delays. To effectively adapt the NAR manner to the APR task, in this paper we propose NARRepair, the first customized NAR code generation model for the APR task. NARRepair features three major novelties: 1) using repair actions to alleviate the over-correction issue, 2) extracting dependency information from the AST to alleviate the lack of inter-word dependency information, and 3) employing two-stage decoding to alleviate the lack of contextual information. We evaluated NARRepair on three widely used datasets in the APR community, and the results show that our technique significantly improves inference speed while maintaining high repair accuracy.
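To make the AR/NAR contrast concrete, here is a minimal, self-contained Python sketch of the two decoding regimes. It is not NARRepair's implementation: `toy_logits` is a hypothetical stand-in for a trained decoder, and a real NAR model would score all positions in one batched forward pass rather than a Python loop.

```python
# Minimal sketch contrasting AR and NAR decoding over a toy vocabulary.
from typing import List

VOCAB = ["<pad>", "if", "(", "x", "==", "null", ")", "return", ";"]

def toy_logits(prefix: List[int], position: int) -> List[float]:
    """Hypothetical scorer; a real system would run a neural decoder.
    Here the 'best' token depends only on the position, so AR and NAR agree."""
    return [1.0 if tok == (position + 1) % len(VOCAB) else 0.0
            for tok in range(len(VOCAB))]

def ar_decode(length: int) -> List[int]:
    """Autoregressive: token t waits on tokens 0..t-1, so n sequential steps."""
    out: List[int] = []
    for pos in range(length):
        scores = toy_logits(out, pos)
        out.append(max(range(len(scores)), key=scores.__getitem__))
    return out

def nar_decode(length: int) -> List[int]:
    """Non-autoregressive: every position is scored independently, so all
    positions can be computed in a single parallel step on real hardware."""
    return [max(range(len(VOCAB)), key=toy_logits([], pos).__getitem__)
            for pos in range(length)]

print([VOCAB[i] for i in nar_decode(5)])  # ['if', '(', 'x', '==', 'null']
```

The parallel regime is what buys the speedup, but because each position is predicted independently, a NAR model loses inter-word dependencies and left-to-right context; these are exactly the gaps that NARRepair's AST dependency extraction and two-stage decoding are designed to close.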
Related papers
- Failing Forward: Improving Generative Error Correction for ASR with Synthetic Data and Retrieval Augmentation [73.9145653659403]
We show that Generative Error Correction models struggle to generalize beyond the specific types of errors encountered during training.
We propose DARAG, a novel approach designed to improve GEC for ASR in in-domain (ID) and out-of-domain (OOD) scenarios.
Our approach is simple, scalable, and both domain- and language-agnostic.
arXiv Detail & Related papers (2024-10-17T04:00:29Z)
- The Impact of Program Reduction on Automated Program Repair [0.3277163122167433]
We describe a program repair approach that aims to improve the scalability of modern APR tools.
We investigate slicing's impact on all three phases of the repair process: fault localization, patch generation, and patch validation.
We conclude that program reduction can improve the performance of APR without degrading repair quality, but this improvement is not universal.
arXiv Detail & Related papers (2024-08-02T09:23:45Z)
- Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections [45.420679219101245]
We present Distillation and Retrieval of Online Corrections (DROC).
DROC is a large language model (LLM)-based system that can respond to arbitrary forms of language feedback.
We demonstrate that DROC effectively distills the relevant information from the sequence of online corrections in a knowledge base.
arXiv Detail & Related papers (2023-11-17T18:00:20Z)
- A Survey of Learning-based Automated Program Repair [12.09968472868107]
Automated program repair (APR) aims to fix software bugs automatically and plays a crucial role in software development and maintenance.
With the recent advances in deep learning (DL), an increasing number of APR techniques have been proposed to leverage neural networks to learn bug-fixing patterns from massive open-source code repositories.
This paper provides a systematic survey to summarize the current state-of-the-art research in the learning-based APR community.
arXiv Detail & Related papers (2023-01-09T11:08:15Z)
- Improving Automated Program Repair with Domain Adaptation [0.0]
Automated Program Repair (APR) is defined as the process of fixing a bug/defect in source code by an automated tool.
APR tools have recently achieved promising results by leveraging state-of-the-art Natural Language Processing (NLP) techniques.
arXiv Detail & Related papers (2022-12-21T23:52:09Z)
- Retrieval-Augmented Reinforcement Learning [63.32076191982944]
We train a network to map a dataset of past experiences to optimal behavior.
The retrieval process is trained to retrieve information from the dataset that may be useful in the current context.
We show that retrieval-augmented R2D2 learns significantly faster than the baseline R2D2 agent and achieves higher scores.
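The retrieval step can be illustrated with a simple nearest-neighbor lookup over stored experience embeddings. This is a hedged sketch, not the paper's learned retrieval process; `experience_bank` and `retrieve` are illustrative names.

```python
# Nearest-neighbor retrieval over a bank of past-experience embeddings.
import numpy as np

rng = np.random.default_rng(0)
experience_bank = rng.normal(size=(10_000, 64))  # embeddings of past experiences

def retrieve(context: np.ndarray, k: int = 8) -> np.ndarray:
    """Return the k stored experiences most similar to the current context."""
    # Cosine similarity between the context and every stored embedding.
    sims = experience_bank @ context / (
        np.linalg.norm(experience_bank, axis=1) * np.linalg.norm(context) + 1e-8)
    top = np.argsort(-sims)[:k]          # indices of the k best matches
    return experience_bank[top]          # passed to the agent as extra input

neighbors = retrieve(rng.normal(size=64))
print(neighbors.shape)                   # (8, 64)
```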
arXiv Detail & Related papers (2022-02-17T02:44:05Z)
- FastCorrect: Fast Error Correction with Edit Alignment for Automatic Speech Recognition [90.34177266618143]
We propose FastCorrect, a novel NAR error correction model based on edit alignment.
FastCorrect speeds up the inference by 6-9 times and maintains the accuracy (8-14% WER reduction) compared with the autoregressive correction model.
It outperforms popular NAR models adopted in neural machine translation by a large margin in accuracy.
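As a rough illustration of edit alignment, the sketch below uses Python's difflib to align a noisy token sequence with its correction and derive per-source-token "durations" (how many corrected tokens each source token should expand to), the kind of signal a NAR corrector can condition on. The alignment heuristics here are simplified assumptions, not FastCorrect's exact algorithm.

```python
# Edit alignment sketch: derive per-token durations from an edit-based alignment.
from difflib import SequenceMatcher

def edit_alignment(source, target):
    """For each source token, count how many target tokens it aligns to.
    These counts ('durations') tell a NAR decoder how many tokens to emit
    per input position."""
    durations = [0] * len(source)
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=source, b=target).get_opcodes():
        if tag == "insert":
            # Attach inserted target tokens to the preceding source token.
            durations[max(i1 - 1, 0)] += j2 - j1
        else:  # 'equal', 'replace', or 'delete'
            span, tgt = i2 - i1, j2 - j1
            for k in range(i1, i2):
                # Spread the target tokens roughly evenly over the source span.
                durations[k] += tgt // span + (1 if k - i1 < tgt % span else 0)
    return durations

src = "i eat a apple every days".split()
tgt = "i eat an apple every day".split()
print(edit_alignment(src, tgt))  # [1, 1, 1, 1, 1, 1]: one output token per input
```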
arXiv Detail & Related papers (2021-05-09T05:35:36Z)
- CURE: Code-Aware Neural Machine Translation for Automatic Program Repair [11.556110575946631]
We propose CURE, a new NMT-based APR technique with three major novelties.
First, CURE pre-trains a programming language (PL) model on a large software codebase to learn developer-like source code before the APR task.
Second, CURE designs a new code-aware search strategy that finds more correct fixes by focusing on compilable patches and patches that are close in length to the buggy code.
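A hedged sketch of what such code-aware reranking can look like: candidates get a bonus if they parse and a penalty proportional to how far their length drifts from the buggy line. Python's built-in compile() stands in for a real compiler, and the scoring weights are illustrative assumptions, not CURE's actual strategy.

```python
# Length- and compilability-aware reranking of candidate patches.
def compiles(code: str) -> bool:
    """Hypothetical compilability check; Python's compile() stands in for a
    real compiler such as javac in the original Java setting."""
    try:
        compile(code, "<patch>", "exec")
        return True
    except SyntaxError:
        return False

def rerank(candidates, buggy_line, alpha=0.1):
    """Score = model score + compilability bonus - length-distance penalty."""
    def score(cand):
        patch, model_score = cand
        length_gap = abs(len(patch.split()) - len(buggy_line.split()))
        return model_score + (1.0 if compiles(patch) else -1.0) - alpha * length_gap
    return sorted(candidates, key=score, reverse=True)

buggy = "total = total + i"
cands = [
    ("total = total - i", 0.4),   # compiles, same length as the buggy line
    ("total =- total i", 0.5),    # highest model score but does not parse
    ("total = total * i", 0.3),   # compiles, same length
]
print(rerank(cands, buggy)[0][0])  # 'total = total - i': the parseable fix wins
```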
arXiv Detail & Related papers (2021-02-26T22:30:28Z)
- FastLR: Non-Autoregressive Lipreading Model with Integrate-and-Fire [74.04394069262108]
We propose FastLR, a non-autoregressive (NAR) lipreading model which generates all target tokens simultaneously.
FastLR achieves a speedup of up to 10.97× compared with the state-of-the-art lipreading model.
arXiv Detail & Related papers (2020-08-06T08:28:56Z)
- A Study of Non-autoregressive Model for Sequence Generation [147.89525760170923]
Non-autoregressive (NAR) models generate all the tokens of a sequence in parallel.
We propose knowledge distillation and source-target alignment to bridge the gap between AR and NAR models.
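Knowledge distillation in the NAR literature typically means sequence-level distillation: the NAR student trains on the AR teacher's outputs instead of the raw references, which removes much of the multi-modality in the targets. Below is a minimal sketch under that assumption, where `ar_teacher` is a hypothetical stand-in for a trained AR model.

```python
# Sequence-level knowledge distillation sketch for NAR training data.
def ar_teacher(source: str) -> str:
    # Toy 'teacher': deterministically rewrites the source. A real teacher
    # would be a trained AR model decoding with beam search.
    return " ".join(reversed(source.split()))

raw_pairs = [
    ("a b c", "c b a"),      # human reference 1 for this source
    ("a b c", "c a b"),      # human reference 2: the targets are multi-modal
]

# Distillation replaces every reference with the teacher's single output, so
# the NAR student sees one consistent target per source.
distilled = [(src, ar_teacher(src)) for src, _ in raw_pairs]
print(distilled)  # [('a b c', 'c b a'), ('a b c', 'c b a')]
```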
arXiv Detail & Related papers (2020-04-22T09:16:09Z)
- Improving Readability for Automatic Speech Recognition Transcription [50.86019112545596]
We propose a novel NLP task called ASR post-processing for readability (APR).
APR aims to transform the noisy ASR output into a readable text for humans and downstream tasks while maintaining the semantic meaning of the speaker.
We compare fine-tuned models based on several open-sourced and adapted pre-trained models with the traditional pipeline method.
arXiv Detail & Related papers (2020-04-09T09:26:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.