Synchromesh: Reliable code generation from pre-trained language models
- URL: http://arxiv.org/abs/2201.11227v1
- Date: Wed, 26 Jan 2022 22:57:44 GMT
- Title: Synchromesh: Reliable code generation from pre-trained language models
- Authors: Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Tiwari, Gustavo
Soares, Christopher Meek, Sumit Gulwani
- Abstract summary: We propose Synchromesh: a framework for substantially improving the reliability of pre-trained models for code generation.
First, it retrieves few-shot examples from a training bank using Target Similarity Tuning (TST), a novel method for semantic example selection.
Then, Synchromesh feeds the examples to a pre-trained language model and samples programs using Constrained Semantic Decoding (CSD), a general framework for constraining the output to a set of valid programs in the target language.
- Score: 38.15391794443022
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large pre-trained language models have been used to generate code, providing a
flexible interface for synthesizing programs from natural language
specifications. However, they often violate syntactic and semantic rules of
their output language, limiting their practical usability. In this paper, we
propose Synchromesh: a framework for substantially improving the reliability of
pre-trained models for code generation. Synchromesh comprises two components.
First, it retrieves few-shot examples from a training bank using Target
Similarity Tuning (TST), a novel method for semantic example selection. TST
learns to recognize utterances that describe similar target programs despite
differences in surface natural language features. Then, Synchromesh feeds the
examples to a pre-trained language model and samples programs using Constrained
Semantic Decoding (CSD): a general framework for constraining the output to a
set of valid programs in the target language. CSD leverages constraints on
partial outputs to sample complete correct programs, and needs neither
re-training nor fine-tuning of the language model. We evaluate our methods by
synthesizing code from natural language descriptions using GPT-3 and Codex in
three real-world languages: SQL queries, Vega-Lite visualizations and SMCalFlow
programs. These domains showcase rich constraints that CSD is able to enforce,
including syntax, scope, typing rules, and contextual logic. We observe
substantial complementary gains from CSD and TST in prediction accuracy and in
effectively preventing run-time errors.
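
To make the two components concrete, here is a minimal sketch of TST-style few-shot example selection. In the paper, TST fine-tunes a sentence encoder so that utterances describing similar target programs embed close together; in this illustrative sketch a simple bag-of-words cosine similarity stands in for that learned encoder, and the training-bank entries are made-up placeholders rather than data from the paper.

```python
# Sketch of Target Similarity Tuning (TST)-style example selection.
# A bag-of-words cosine similarity stands in for the learned utterance encoder;
# the training bank below is a hypothetical set of (utterance, program) pairs.
from collections import Counter
from math import sqrt

def embed(utterance: str) -> Counter:
    """Stand-in for a learned utterance encoder: bag-of-words counts."""
    return Counter(utterance.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_few_shot(query: str, bank: list[tuple[str, str]], k: int = 2):
    """Return the k (utterance, program) pairs whose utterances are most
    similar to the query; these become few-shot examples in the prompt."""
    q = embed(query)
    return sorted(bank, key=lambda ex: cosine(q, embed(ex[0])), reverse=True)[:k]

bank = [
    ("count the orders placed in 2021", "SELECT COUNT(*) FROM orders WHERE year = 2021"),
    ("list customer names sorted by age", "SELECT name FROM customers ORDER BY age"),
    ("how many orders were placed last year", "SELECT COUNT(*) FROM orders WHERE year = 2023"),
]
print(select_few_shot("how many orders came in during 2020", bank))
```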
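The second component, Constrained Semantic Decoding, restricts sampling at each step to tokens that a completion engine accepts as valid extensions of the partial program, with no re-training of the language model. The sketch below illustrates that loop with a toy completion engine for a SELECT-style fragment and a stub scoring function in place of a real pre-trained model; the grammar, vocabulary, and scores are assumptions for illustration only.

```python
# Sketch of a Constrained Semantic Decoding (CSD)-style loop.
# valid_next_tokens plays the role of the completion engine (syntax, scope);
# model_scores is a stub for the pre-trained LM's next-token distribution.
import random

VOCAB = ["SELECT", "name", "age", "FROM", "customers", "orders", "<eos>"]

def valid_next_tokens(partial: list[str]) -> set[str]:
    """Toy completion engine enforcing SELECT <column> FROM <table>."""
    n = len(partial)
    if n == 0:
        return {"SELECT"}
    if n == 1:
        return {"name", "age"}          # columns in scope
    if n == 2:
        return {"FROM"}
    if n == 3:
        return {"customers", "orders"}  # tables in scope
    return {"<eos>"}

def model_scores(partial: list[str]) -> dict[str, float]:
    """Stub for the language model's next-token scores (random here)."""
    return {tok: random.random() for tok in VOCAB}

def constrained_decode(max_len: int = 8) -> list[str]:
    program: list[str] = []
    while len(program) < max_len:
        allowed = valid_next_tokens(program)
        scores = model_scores(program)
        # Keep only tokens the completion engine accepts, then take the best.
        tok = max(allowed, key=lambda t: scores.get(t, float("-inf")))
        if tok == "<eos>":
            break
        program.append(tok)
    return program

print(" ".join(constrained_decode()))  # e.g. "SELECT age FROM orders"
```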