Better patching using LLM prompting, via Self-Consistency
- URL: http://arxiv.org/abs/2306.00108v2
- Date: Wed, 16 Aug 2023 21:28:58 GMT
- Title: Better patching using LLM prompting, via Self-Consistency
- Authors: Toufique Ahmed, Premkumar Devanbu
- Abstract summary: Self-consistency (S-C) is a prompting technique that samples many explanation-solution pairs for a problem and keeps the most frequently occurring solution.
This paper describes an application of the S-C approach to program repair, using the commit log on the fix as the explanation.
We achieve state-of-the-art results, beating previous approaches to prompting-based program repair on the MODIT dataset.
- Score: 5.892272127970584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) can be induced to solve non-trivial problems
with "few-shot" prompts that include illustrative problem-solution examples. If
the few-shots also include "chain of thought" (CoT) explanations, which are of
the form problem-explanation-solution, LLMs will generate an "explained"
solution and perform even better. Recently an exciting, substantially better
technique, self-consistency [1] (S-C), has emerged, based on the intuition that
there are many plausible explanations for the right solution; when the LLM is
sampled repeatedly to generate a pool of explanation-solution pairs for a given
problem, the most frequently occurring solutions in the pool (ignoring the
explanations) tend to be even more likely to be correct. Unfortunately, the use
of this highly performant S-C (or even CoT) approach in software engineering
settings is hampered by the lack of explanations: most software datasets lack
them. In this paper, we describe an application of the S-C approach to program
repair, using the commit log on the fix as the explanation, only in the
illustrative few-shots. We achieve state-of-the-art results, beating previous
approaches to prompting-based program repair on the MODIT dataset; we also find
evidence suggesting that the correct commit messages are helping the LLM learn
to produce better patches.
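To make the recipe concrete, here is a minimal Python sketch of S-C patching as the abstract describes it, assuming a generic `sample(prompt)` callable standing in for one temperature-sampled LLM completion; the few-shot examples and the `build_prompt`/`extract_patch` helpers are illustrative assumptions, not the paper's actual prompt format or code.

```python
import collections
from typing import Callable

# Hypothetical few-shot fixes: (buggy code, commit message, fixed code).
# The commit log on each fix plays the role of the CoT "explanation";
# these examples are made-up stand-ins, not drawn from MODIT.
FEW_SHOTS = [
    ("if (x = 0) return;",
     "Fix accidental assignment in equality check",
     "if (x == 0) return;"),
    ("for (int i = 0; i <= n; i++) sum += a[i];",
     "Fix off-by-one loop bound causing out-of-bounds read",
     "for (int i = 0; i < n; i++) sum += a[i];"),
]


def build_prompt(buggy_code: str) -> str:
    """Assemble a problem-explanation-solution prompt from the few-shots,
    ending with the buggy code whose patch we want."""
    shots = [
        f"Buggy code:\n{bug}\nCommit message: {msg}\nFixed code:\n{fix}\n"
        for bug, msg, fix in FEW_SHOTS
    ]
    return "\n".join(shots + [f"Buggy code:\n{buggy_code}\nCommit message:"])


def extract_patch(completion: str) -> str:
    """Drop the generated explanation and keep only the proposed fix,
    lightly normalizing whitespace so identical patches vote together."""
    _, _, patch = completion.partition("Fixed code:")
    return " ".join(patch.split())


def self_consistent_patch(buggy_code: str,
                          sample: Callable[[str], str],
                          n: int = 30) -> str:
    """Sample n explanation-patch completions at temperature > 0, ignore
    the explanations, and return the most frequent patch (majority vote)."""
    prompt = build_prompt(buggy_code)
    votes = collections.Counter(
        extract_patch(sample(prompt)) for _ in range(n)
    )
    return votes.most_common(1)[0][0]
```

With a real LLM client, `sample` would wrap a single temperature-sampled completion call; `n = 30` is an arbitrary illustrative pool size, not a figure from the paper. The key S-C move is the final `Counter` vote: diverse explanations are welcomed during sampling and then deliberately discarded at selection time.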
Related papers
- Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification [52.095460362197336]
Large language models (LLMs) struggle with consistent and accurate reasoning.
LLMs are trained primarily on correct solutions, reducing their ability to detect and learn from errors.
We propose a novel collaborative method integrating Chain-of-Thought (CoT) and Program-of-Thought (PoT) solutions for verification.
arXiv Detail & Related papers (2024-10-05T05:21:48Z)
- From Distributional to Overton Pluralism: Investigating Large Language Model Alignment [82.99849359892112]
We re-examine previously reported reductions in response diversity post-alignment.
Our analysis suggests that an apparent drop in the diversity of responses is largely explained by quality control and information aggregation.
Findings indicate that current alignment techniques capture but do not extend the useful subset of assistant-like base LLM behavior.
arXiv Detail & Related papers (2024-06-25T16:32:33Z)
- Distilling Algorithmic Reasoning from LLMs via Explaining Solution Programs [2.3020018305241337]
Distilling explicit chain-of-thought reasoning paths has emerged as an effective method for improving the reasoning abilities of large language models.
We propose a novel approach to distill reasoning abilities from LLMs by leveraging their capacity to explain solutions.
Our experiments demonstrate that learning from explanations enables the Reasoner to more effectively guide program implementation by a Coder.
arXiv Detail & Related papers (2024-04-11T22:19:50Z)
- Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning [41.03267013352519]
Large Language Models (LLMs) prompted to generate chain-of-thought exhibit impressive reasoning capabilities.
We introduce DaSLaM, which uses a decomposition generator to decompose complex problems into subproblems that require fewer reasoning steps.
We show that DaSLaM is not limited by the solver's capabilities as a function of scale.
arXiv Detail & Related papers (2023-10-21T15:23:20Z)
- Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models [62.96551299003463]
We propose Thought Propagation (TP) to enhance the complex reasoning ability of Large Language Models.
TP first prompts LLMs to propose and solve a set of analogous problems that are related to the input one.
TP reuses the results of analogous problems to directly yield a new solution or derive a knowledge-intensive plan for execution to amend the initial solution obtained from scratch.
arXiv Detail & Related papers (2023-10-06T01:40:09Z)
- GRACE: Discriminator-Guided Chain-of-Thought Reasoning [75.35436025709049]
We propose Guiding chain-of-thought ReAsoning with a CorrectnEss Discriminator (GRACE) to steer the decoding process towards producing correct reasoning steps.
GRACE employs a discriminator trained with a contrastive loss over correct and incorrect steps, which is used during decoding to score next-step candidates.
arXiv Detail & Related papers (2023-05-24T09:16:51Z)
- RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought [56.558892336235914]
Reversing Chain-of-Thought (RCoT) is a novel method to improve large language models' reasoning abilities.
RCoT automatically detects and rectifies factual inconsistencies in generated solutions.
We show that manually written fine-grained feedback can dramatically improve LLMs' reasoning abilities.
arXiv Detail & Related papers (2023-05-19T08:02:52Z)
- SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SatLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
arXiv Detail & Related papers (2023-05-16T17:55:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.