Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT
- URL: http://arxiv.org/abs/2304.00385v1
- Date: Sat, 1 Apr 2023 20:57:33 GMT
- Title: Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT
- Authors: Chunqiu Steven Xia, Lingming Zhang
- Abstract summary: Automated Program Repair (APR) aims to automatically generate patches for buggy programs.
Recent APR work has been focused on leveraging modern Large Language Models (LLMs) to directly generate patches for APR.
We propose ChatRepair, the first fully automated conversation-driven APR approach.
- Score: 10.071615423169902
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated Program Repair (APR) aims to automatically generate patches for buggy programs. Recent APR work has focused on leveraging modern Large Language Models (LLMs) to directly generate patches. Such LLM-based APR tools work by first constructing an input prompt from the original buggy code and then querying the LLM to generate patches. While LLM-based APR tools achieve state-of-the-art results, they still follow the classic Generate-and-Validate repair paradigm of first generating many patches and then validating each one afterwards. This not only produces many repeated, incorrect patches but also misses the crucial information in test failures as well as in plausible patches.
To address these limitations, we propose ChatRepair, the first fully
automated conversation-driven APR approach that interleaves patch generation
with instant feedback to perform APR in a conversational style. ChatRepair first feeds the LLM the relevant test failure information, and then learns from both the failures and the successes of earlier patching attempts on the same bug for more powerful APR. For earlier patches that failed to pass all
tests, we combine the incorrect patches with their corresponding relevant test
failure information to construct a new prompt for the LLM to generate the next
patch. In this way, we can avoid making the same mistakes. For earlier patches
that passed all the tests, we further ask the LLM to generate alternative
variations of the original plausible patches. In this way, we can further build
on and learn from earlier successes to generate more plausible patches to
increase the chance of having correct patches. While our approach is general, we implement ChatRepair using the state-of-the-art dialogue-based LLM, ChatGPT. Based on the cost of accessing ChatGPT, ChatRepair fixes 162 out of 337 bugs for $0.42 each!
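The repair loop described above (start from test failure information, feed failures back on unsuccessful patches, and ask for alternative variations of plausible patches) can be illustrated with a minimal sketch. This is not the authors' implementation; `query_llm`, `run_tests`, and `extract_patch` are hypothetical placeholders standing in for a ChatGPT API call, the project's test harness, and response parsing.

```python
def query_llm(messages):
    """Placeholder for a chat-model API call (e.g., ChatGPT); returns the model's reply text."""
    raise NotImplementedError

def run_tests(patch):
    """Placeholder: apply the patch, run the test suite, return (passed, failure_info)."""
    raise NotImplementedError

def extract_patch(reply):
    """Placeholder: pull the candidate patch out of the model's reply."""
    raise NotImplementedError

def conversational_repair(buggy_code, failing_test, max_turns=5):
    """Interleave patch generation with validation feedback in one conversation."""
    history = [{
        "role": "user",
        "content": (
            "The following code is buggy:\n" + buggy_code +
            "\nIt fails this test:\n" + failing_test +
            "\nPlease provide a fixed version."
        ),
    }]
    plausible = []
    for _ in range(max_turns):
        reply = query_llm(history)               # one conversational turn
        patch = extract_patch(reply)
        passed, failure_info = run_tests(patch)  # validate immediately
        history.append({"role": "assistant", "content": reply})
        if passed:
            plausible.append(patch)
            # Learn from success: ask for alternative variations of the plausible patch.
            history.append({
                "role": "user",
                "content": "That patch passes all tests. Please generate an "
                           "alternative fix that differs from the previous one.",
            })
        else:
            # Learn from failure: feed the failure details back to avoid repeating mistakes.
            history.append({
                "role": "user",
                "content": "That patch still fails:\n" + failure_info +
                           "\nPlease try a different fix.",
            })
    return plausible
```

Because the whole exchange stays in one conversation, earlier incorrect patches and their failure messages remain in context, which is what distinguishes this style from repeatedly sampling patches from a fresh prompt.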
Related papers
- MQM-APE: Toward High-Quality Error Annotation Predictors with Automatic Post-Editing in LLM Translation Evaluators [53.91199933655421]
Large Language Models (LLMs) have shown significant potential as judges for Machine Translation (MT) quality assessment.
We introduce a universal and training-free framework, MQM-APE, to enhance the quality of error annotations predicted by LLM evaluators.
arXiv Detail & Related papers (2024-09-22T06:43:40Z)
- Hybrid Automated Program Repair by Combining Large Language Models and Program Analysis [12.7034916462208]
Automated Program Repair (APR) has garnered significant attention due to its potential to streamline the bug repair process for human developers.
This paper introduces an innovative APR approach called GIANTREPAIR.
GIANTREPAIR first constructs patch skeletons from LLM-generated patches to confine the patch space, and then generates high-quality patches tailored to specific programs.
arXiv Detail & Related papers (2024-06-03T05:05:12Z)
- ContrastRepair: Enhancing Conversation-Based Automated Program Repair via Contrastive Test Case Pairs [23.419180504723546]
ContrastRepair is a novel APR approach that augments conversation-driven APR by providing contrastive test pairs.
We evaluate ContrastRepair on multiple benchmark datasets, including Defects4j, QuixBugs, and HumanEval-Java.
arXiv Detail & Related papers (2024-03-04T12:15:28Z)
- A Novel Approach for Automatic Program Repair using Round-Trip Translation with Large Language Models [50.86686630756207]
Research shows that grammatical mistakes in a sentence can be corrected by translating it to another language and back.
Current generative models for Automatic Program Repair (APR) are pre-trained on source code and fine-tuned for repair.
This paper proposes bypassing the fine-tuning step and using Round-Trip Translation (RTT): translation of code from one programming language to another programming or natural language, and back.
arXiv Detail & Related papers (2024-01-15T22:36:31Z)
- The Earth is Flat? Unveiling Factual Errors in Large Language Models [89.94270049334479]
Large Language Models (LLMs) like ChatGPT are in various applications due to their extensive knowledge from pre-training and fine-tuning.
Despite this, they are prone to generating factual and commonsense errors, raising concerns in critical areas like healthcare, journalism, and education.
We introduce a novel, automatic testing framework, FactChecker, aimed at uncovering factual inaccuracies in LLMs.
arXiv Detail & Related papers (2024-01-01T14:02:27Z)
- SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks [99.23352758320945]
We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs).
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs.
arXiv Detail & Related papers (2023-10-05T17:01:53Z)
- RAP-Gen: Retrieval-Augmented Patch Generation with CodeT5 for Automatic Program Repair [75.40584530380589]
We propose a novel Retrieval-Augmented Patch Generation framework (RAP-Gen).
RAP-Gen explicitly leverages relevant fix patterns retrieved from a list of previous bug-fix pairs.
We evaluate RAP-Gen on three benchmarks in two programming languages, including the TFix benchmark in JavaScript, and Code Refinement and Defects4J benchmarks in Java.
arXiv Detail & Related papers (2023-09-12T08:52:56Z)
- Allies: Prompting Large Language Model with Beam Search [107.38790111856761]
In this work, we propose a novel method called ALLIES.
Given an input query, ALLIES leverages LLMs to iteratively generate new queries related to the original query.
By iteratively refining and expanding the scope of the original query, ALLIES captures and utilizes hidden knowledge that may not be directly obtainable through retrieval.
arXiv Detail & Related papers (2023-05-24T06:16:44Z)
- Conversational Automated Program Repair [10.071615423169902]
We propose a new paradigm for program repair that alternates between patch generation and validation in a conversational manner.
We leverage the long-term context window of Large Pre-Trained Language Models to not only avoid generating previously incorrect patches but also incorporate validation feedback to help the model understand the semantic meaning of the program under test.
arXiv Detail & Related papers (2023-01-30T19:22:36Z)
- Test-based Patch Clustering for Automatically-Generated Patches Assessment [21.051652050359852]
Overfitting happens when a patch is run and the test suite does not reveal any error, but the patch actually does not fix the underlying bug or it introduces a new defect that is not covered by the test suite.
Our work aims to minimize the number of plausible patches that programmers have to review, thereby reducing the time required to find a correct patch.
We introduce a novel lightweight test-based patch clustering approach called xTestCluster, which clusters patches based on their dynamic behavior.
arXiv Detail & Related papers (2022-07-22T13:39:27Z)
- Exploring Plausible Patches Using Source Code Embeddings in JavaScript [1.3327130030147563]
We trained a Doc2Vec model on an open-source JavaScript project and generated 465 patches for 10 bugs in it.
These plausible patches, along with the developer fix, are then ranked based on their similarity to the original program.
We analyzed these similarity lists and found that plain document embeddings may lead to misclassification.
arXiv Detail & Related papers (2021-03-31T06:57:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.