Natural Language to Code Translation with Execution
- URL: http://arxiv.org/abs/2204.11454v1
- Date: Mon, 25 Apr 2022 06:06:08 GMT
- Title: Natural Language to Code Translation with Execution
- Authors: Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang
- Abstract summary: Execution result-based minimum Bayes risk decoding (MBR-EXEC) for program selection.
We show that it improves the few-shot performance of pretrained code models on natural-language-to-code tasks.
- Score: 82.52142893010563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative models of code, pretrained on large corpora of programs, have
shown great success in translating natural language to code (Chen et al., 2021;
Austin et al., 2021; Li et al., 2022, inter alia). While these models do not
explicitly incorporate program semantics (i.e., execution results) during
training, they are able to generate correct solutions for many problems.
However, choosing a single correct program from among a generated set for each
problem remains challenging. In this work, we introduce execution result-based
minimum Bayes risk decoding (MBR-EXEC) for program selection and show that it
improves the few-shot performance of pretrained code models on
natural-language-to-code tasks. We select output programs from a generated
candidate set by marginalizing over program implementations that share the same
semantics. Because exact equivalence is intractable, we execute each program on
a small number of test inputs to approximate semantic equivalence. Across
datasets, execution or simulated execution significantly outperforms the
methods that do not involve program semantics. We find that MBR-EXEC
consistently improves over all execution-unaware selection methods, suggesting
it as an effective approach for natural language to code translation.
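The selection rule described above can be made concrete with a short sketch. The code below is a minimal, hypothetical illustration rather than the authors' released implementation: it assumes every candidate program defines a function named `solve`, executes each candidate on a few test inputs, groups candidates by identical outputs as an approximation of semantic equivalence, and returns a program from the group with the most agreement, which corresponds to MBR selection under a 0/1 execution-match loss.

```python
import collections
from typing import Hashable, List, Sequence, Tuple


def run_on_inputs(program: str, test_inputs: Sequence) -> Tuple[Hashable, ...]:
    """Execute a candidate program on a few test inputs and return its outputs.

    Hypothetical stand-in: a real pipeline would sandbox execution and handle
    timeouts; any exception is collapsed into a sentinel signature here.
    """
    namespace: dict = {}
    try:
        exec(program, namespace)  # assumes the candidate defines `solve`
        solve = namespace["solve"]
        return tuple(solve(x) for x in test_inputs)
    except Exception:
        return ("<execution-error>",)


def mbr_exec_select(candidates: List[str], test_inputs: Sequence) -> str:
    """Pick the candidate whose execution results agree with the most others.

    Grouping candidates by identical outputs approximates marginalizing over
    semantically equivalent programs; maximizing agreement is equivalent to
    minimizing Bayes risk under a 0/1 execution-match loss.
    """
    signatures = [run_on_inputs(p, test_inputs) for p in candidates]
    counts = collections.Counter(signatures)
    best_idx = max(range(len(candidates)), key=lambda i: counts[signatures[i]])
    return candidates[best_idx]


# Toy usage: the first two candidates agree on the test inputs, so one of them wins.
cands = [
    "def solve(x):\n    return x * 2",
    "def solve(x):\n    return x + x",
    "def solve(x):\n    return x ** 2",
]
print(mbr_exec_select(cands, test_inputs=[1, 2, 3]))
```

Grouping by exact equality of outputs on a small test-input set is the same approximation of semantic equivalence described in the abstract; exact program equivalence itself is intractable.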
Related papers
- Learning to Reason via Program Generation, Emulation, and Search [33.11955431589091]
Program synthesis with language models (LMs) has unlocked a large set of reasoning abilities.
Not all reasoning tasks are easily expressible as code, e.g. tasks involving commonsense reasoning, moral decision-making, and sarcasm understanding.
We propose Code Generation and Emulated EXecution (CoGEX) to extend an LM's program synthesis skills to such tasks.
arXiv Detail & Related papers (2024-05-25T19:40:50Z) - NExT: Teaching Large Language Models to Reason about Code Execution [50.93581376646064]
Large language models (LLMs) of code are typically trained on the surface textual form of programs.
We propose NExT, a method to teach LLMs to inspect the execution traces of programs and reason about their run-time behavior.
arXiv Detail & Related papers (2024-04-23T01:46:32Z) - Understanding Programs by Exploiting (Fuzzing) Test Cases [26.8259045248779]
We propose to incorporate the relationship between inputs and possible outputs/behaviors into learning, for achieving a deeper semantic understanding of programs.
To obtain inputs that are representative enough to trigger the execution of most part of the code, we resort to fuzz testing and propose fuzz tuning.
The effectiveness of the proposed method is verified on two program understanding tasks, code clone detection and code classification, where it outperforms the current state of the art by large margins.
arXiv Detail & Related papers (2023-05-23T01:51:46Z) - LEVER: Learning to Verify Language-to-Code Generation with Execution [64.36459105535]
We propose LEVER, a simple approach to improve language-to-code generation by learning to verify the generated programs with their execution results.
Specifically, we train verifiers to determine whether a program sampled from the LLMs is correct or not based on the natural language input, the program itself and its execution results.
LEVER consistently improves over the base code LLMs (4.6% to 10.9% with code-davinci) and achieves new state-of-the-art results on all of them (see the verifier-reranking sketch after this list).
arXiv Detail & Related papers (2023-02-16T18:23:22Z) - Interactive Code Generation via Test-Driven User-Intent Formalization [60.90035204567797]
Large language models (LLMs) produce code from informal natural language (NL) intent.
It is hard to define a notion of correctness since natural language can be ambiguous and lacks a formal semantics.
We describe a language-agnostic abstract algorithm and a concrete implementation TiCoder.
arXiv Detail & Related papers (2022-08-11T17:41:08Z) - Fault-Aware Neural Code Rankers [64.41888054066861]
We propose fault-aware neural code rankers that can predict the correctness of a sampled program without executing it.
Our fault-aware rankers can significantly increase the pass@1 accuracy of various code generation models.
arXiv Detail & Related papers (2022-06-04T22:01:05Z) - AVATAR: A Parallel Corpus for Java-Python Program Translation [77.86173793901139]
Program translation refers to migrating source code from one language to another.
We present AVATAR, a collection of 9,515 programming problems and their solutions written in two popular languages, Java and Python.
arXiv Detail & Related papers (2021-08-26T05:44:20Z)
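As a complement to MBR-EXEC, the LEVER-style selection summarized above can be sketched as a reranking step. The snippet below is a hypothetical illustration, not LEVER's released code: `verifier`, `execute`, and the score combination are assumed interfaces, and the score simply adds the generator's log-probability to the log of the verifier's estimated correctness probability given the natural-language input, the program, and its execution result.

```python
import math
from typing import Callable, List, Tuple


def rerank_with_verifier(
    candidates: List[Tuple[str, float]],
    nl_input: str,
    execute: Callable[[str], str],
    verifier: Callable[[str, str, str], float],
) -> str:
    """Pick the candidate with the highest combined generator + verifier score.

    `candidates` holds (program, LM log-probability) pairs; `execute` returns a
    printable execution result for a program; `verifier` estimates the
    probability that the program is correct given the natural-language input,
    the program text, and its execution result. All three are assumed
    interfaces for this sketch, not an actual released API.
    """
    def score(program: str, lm_logprob: float) -> float:
        exec_result = execute(program)
        p_correct = max(verifier(nl_input, program, exec_result), 1e-9)
        return lm_logprob + math.log(p_correct)

    best_program, _ = max(candidates, key=lambda c: score(c[0], c[1]))
    return best_program


# Toy usage with stub components standing in for a real LLM executor and verifier.
cands = [
    ("def f(x):\n    return x + 1", -1.2),
    ("def f(x):\n    return x - 1", -0.9),
]
run = lambda prog: "ok"                                          # stub executor
check = lambda nl, prog, res: 0.9 if "+ 1" in prog else 0.1      # stub verifier
print(rerank_with_verifier(cands, "increment x", run, check))
```

In practice the verifier would be a trained model conditioned on all three inputs; here it is passed in as an opaque callable so the sketch stays self-contained.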
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.