Optimal Neural Program Synthesis from Multimodal Specifications
- URL: http://arxiv.org/abs/2010.01678v2
- Date: Tue, 14 Sep 2021 18:07:13 GMT
- Title: Optimal Neural Program Synthesis from Multimodal Specifications
- Authors: Xi Ye, Qiaochu Chen, Isil Dillig, Greg Durrett
- Abstract summary: Multimodal program synthesis is an attractive way to scale program synthesis to challenging settings.
This paper proposes an optimal neural synthesis approach where the goal is to find a program that satisfies user-provided constraints while also maximizing the program's score with respect to a neural model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multimodal program synthesis, which leverages different types of user input
to synthesize a desired program, is an attractive way to scale program
synthesis to challenging settings; however, it requires integrating noisy
signals from the user, like natural language, with hard constraints on the
program's behavior. This paper proposes an optimal neural synthesis approach
where the goal is to find a program that satisfies user-provided constraints
while also maximizing the program's score with respect to a neural model.
Specifically, we focus on multimodal synthesis tasks in which the user intent
is expressed using a combination of natural language (NL) and input-output
examples. At the core of our method is a top-down recurrent neural model that
places distributions over abstract syntax trees conditioned on the NL input.
This model not only allows for efficient search over the space of syntactically
valid programs, but also lets us leverage automated program analysis
techniques to prune the search space based on the infeasibility of partial
programs with respect to the user's constraints. The experimental results on a
multimodal synthesis dataset (StructuredRegex) show that our method
substantially outperforms prior state-of-the-art techniques in terms of
accuracy and efficiency, and finds model-optimal programs more frequently.
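The approach described above can be sketched as best-first search over partial abstract syntax trees: candidates are ordered by the model's score, and program analysis prunes any partial program that no completion could make consistent with the examples. The Python sketch below is a minimal caricature under invented assumptions: the "neural" model is a table of fixed per-production log-probabilities, and the DSL is a toy grammar of non-negative arithmetic (x, 1, +, *) rather than the regex language the paper targets.

```python
import heapq
import itertools

# Stub "neural model": fixed log-probabilities per DSL production.
PRODUCTIONS = {"x": -0.5, "1": -1.0, "+": -1.5, "*": -2.0}
HOLE = "?"

def expand(tree):
    """Return all trees obtained by filling the leftmost hole."""
    if tree == HOLE:
        return ["x", "1", ("+", HOLE, HOLE), ("*", HOLE, HOLE)]
    if isinstance(tree, tuple):
        op, left, right = tree
        filled = expand(left)
        if filled:
            return [(op, l, right) for l in filled]
        return [(op, left, r) for r in expand(right)]
    return []  # complete leaf: nothing to fill

def score(tree):
    """Log-prob of the productions used; holes cost 0 (an optimistic bound)."""
    if tree == HOLE:
        return 0.0
    if isinstance(tree, tuple):
        return PRODUCTIONS[tree[0]] + score(tree[1]) + score(tree[2])
    return PRODUCTIONS[tree]

def evaluate(tree, x):
    if isinstance(tree, tuple):
        op, l, r = tree
        a, b = evaluate(l, x), evaluate(r, x)
        return a + b if op == "+" else a * b
    return x if tree == "x" else 1

def lower_bound(tree, x):
    """Least value any completion can produce on input x >= 0; sound
    because every leaf is >= min(x, 1) and +, * are monotone here."""
    if tree == HOLE:
        return min(x, 1)
    if isinstance(tree, tuple):
        op, l, r = tree
        a, b = lower_bound(l, x), lower_bound(r, x)
        return a + b if op == "+" else a * b
    return x if tree == "x" else 1

def is_complete(tree):
    if tree == HOLE:
        return False
    if isinstance(tree, tuple):
        return is_complete(tree[1]) and is_complete(tree[2])
    return True

def synthesize(examples, budget=100_000):
    tie = itertools.count()  # breaks score ties in the heap
    frontier = [(0.0, next(tie), HOLE)]
    for _ in range(budget):
        if not frontier:
            break
        _, _, tree = heapq.heappop(frontier)
        if is_complete(tree):
            if all(evaluate(tree, x) == y for x, y in examples):
                return tree  # first satisfying hit is model-optimal
            continue
        for child in expand(tree):
            # Infeasibility pruning: if even the least reachable value
            # overshoots some target output, no completion can succeed.
            if any(lower_bound(child, x) > y for x, y in examples):
                continue
            heapq.heappush(frontier, (-score(child), next(tie), child))
    return None

print(synthesize([(0, 1), (1, 2), (2, 5)]))
# finds an AST computing x*x + 1, e.g. ('+', ('*', 'x', 'x'), '1')
```

Because a hole contributes zero cost, a partial program's score upper-bounds the score of every completion, so the first satisfying complete program popped from the heap maximizes the model's score; that is the sense in which the search is "optimal".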
Related papers
- HYSYNTH: Context-Free LLM Approximation for Guiding Program Synthesis (arXiv, 2024-05-24)
Large language models (LLMs) often fail to produce fully correct programs in unfamiliar DSLs.
Motivated by these limitations, we introduce a hybrid approach, where LLM completions for a given task are used to learn a task-specific, context-free surrogate model.
We evaluate this hybrid approach on three domains, and show that it outperforms both unguided search and direct sampling from LLMs, as well as existing program synthesizers.
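A minimal sketch of the surrogate-model idea, assuming the surrogate is a probabilistic context-free grammar (PCFG) whose production weights are fit to LLM completions; the DSL productions and the sample completions below are invented placeholders:

```python
from collections import Counter

def fit_pcfg(samples, productions):
    """Estimate production probabilities by counting how often each
    grammar production appears in LLM completions, with add-one
    smoothing so unseen productions keep nonzero mass."""
    counts = Counter()
    for program_tokens in samples:
        for tok in program_tokens:
            if tok in productions:
                counts[tok] += 1
    total = sum(counts[p] + 1 for p in productions)
    return {p: (counts[p] + 1) / total for p in productions}

# Hypothetical DSL productions and LLM completions (as token lists):
PRODUCTIONS = ["concat", "repeat", "optional", "digit", "letter"]
llm_samples = [
    ["concat", "digit", "repeat", "letter"],
    ["concat", "digit", "digit"],
    ["repeat", "digit"],
]
weights = fit_pcfg(llm_samples, PRODUCTIONS)
# A guided search would expand high-weight productions first:
for p in sorted(weights, key=weights.get, reverse=True):
    print(p, round(weights[p], 3))
```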
- A Conversational Paradigm for Program Synthesis (arXiv, 2022-03-25)
We propose a conversational program synthesis approach via large language models.
We train a family of large language models, called CodeGen, on natural language and programming language data.
Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm.
- Latent Execution for Neural Program Synthesis Beyond Domain-Specific Languages (arXiv, 2021-06-29)
We take the first step to synthesize C programs from input-output examples.
In particular, we propose LaSynth, which learns a latent representation to approximate the execution of partially generated programs.
We show that training on these synthesized programs further improves the prediction performance for both Karel and C program synthesis.
- Representing Partial Programs with Blended Abstract Semantics (arXiv, 2020-12-23)
We introduce a technique for representing partially written programs in a program synthesis engine.
We learn an approximate execution model implemented as a modular neural network.
We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs.
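The core mechanism can be illustrated without the neural component: evaluate the concrete subtrees of a partial program exactly, and propagate an abstract value through the parts that touch a hole. In the paper the abstract part is a learned modular network; the interval domain below is a hand-written stand-in so the sketch stays runnable, and the tiny arithmetic DSL is likewise invented:

```python
def blend(tree, x):
    """Evaluate a partial program: concrete subtrees exactly, holes as
    non-negative intervals that propagate through + and *."""
    if tree == "?":
        return (0, float("inf"))       # a hole could become anything >= 0
    if isinstance(tree, tuple):
        op, l, r = tree
        a, b = blend(l, x), blend(r, x)
        if isinstance(a, int) and isinstance(b, int):
            return a + b if op == "+" else a * b   # fully concrete part
        lo_a, hi_a = (a, a) if isinstance(a, int) else a
        lo_b, hi_b = (b, b) if isinstance(b, int) else b
        if op == "+":
            return (lo_a + lo_b, hi_a + hi_b)
        return (lo_a * lo_b, hi_a * hi_b)          # sound for non-negatives
    return x if tree == "x" else 1

# On input x = 2, the partial program (x * ?) + 1 is already known to
# evaluate to at least 1, even though a hole remains:
print(blend(("+", ("*", "x", "?"), "1"), 2))       # -> (1, inf)
```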
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration (arXiv, 2020-07-28)
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
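The bottom-up loop the summary describes can be sketched concretely: maintain a pool of expressions keyed by their value vectors on the examples, compose pool entries into new candidates, and let a model score decide which new values survive. Below, a crude character-overlap heuristic stands in for BUSTLE's learned ranker, and a single string-concatenation operator is an invented miniature of its DSL:

```python
def model_score(values, outputs):
    """Stub for the learned ranker: prefer intermediate values whose
    characters overlap with the target outputs (a crude proxy)."""
    return sum(len(set(v) & set(o)) for v, o in zip(values, outputs))

def bustle(inputs, outputs, rounds=3, beam=50):
    # Pool maps a value vector (one value per example) to the first
    # expression found that produces it.
    pool = {tuple(inputs): "x"}
    for const in ["-", ".", " "]:
        pool[tuple(const for _ in inputs)] = repr(const)
    target = tuple(outputs)
    for _ in range(rounds):
        candidates = {}
        for va, ea in list(pool.items()):
            for vb, eb in list(pool.items()):
                vals = tuple(a + b for a, b in zip(va, vb))  # concat op
                expr = f"concat({ea}, {eb})"
                if vals == target:
                    return expr
                candidates.setdefault(vals, expr)
        # Learning-guided exploration: keep only the most promising
        # newly built values for the next round.
        ranked = sorted(candidates, key=lambda v: model_score(v, outputs),
                        reverse=True)
        for v in ranked[:beam]:
            pool.setdefault(v, candidates[v])
    return None

print(bustle(["ab", "cd"], ["ab-ab", "cd-cd"]))
# -> concat(concat(x, '-'), x)
```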
- Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis (arXiv, 2020-07-16)
We propose SED, a neural program generation framework that incorporates synthesis, execution, and debug stages.
SED first produces initial programs using the neural program synthesizer component, then utilizes a neural program debugger to iteratively repair the generated programs.
On Karel, a challenging input-output program synthesis benchmark, SED reduces the error rate of the neural program synthesizer itself by a considerable margin, and outperforms the standard beam search for decoding.
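The stage structure is simple enough to sketch end to end: propose a program, execute it against the examples, hand any failures to a repair step, and repeat. The synthesizer and debugger below are toy stand-ins (SED's are neural models trained for Karel):

```python
def sed(examples, synthesizer, debugger, max_repairs=5):
    """Synthesize, execute on the examples, repair until all pass."""
    program = synthesizer(examples)
    for _ in range(max_repairs):
        failures = [(x, y) for x, y in examples if program(x) != y]
        if not failures:
            return program                 # execution found no errors
        program = debugger(program, failures)
    return None

# Toy instantiation: "programs" are unary functions, the synthesizer
# guesses the identity, and the debugger re-fits a line to failures.
def toy_synthesizer(examples):
    return lambda x: x

def toy_debugger(program, failures):
    (x1, y1), (x2, y2) = failures[0], failures[-1]
    slope = (y2 - y1) / (x2 - x1) if x2 != x1 else 1.0
    return lambda x: slope * (x - x1) + y1

repaired = sed([(0, 1), (2, 5)], toy_synthesizer, toy_debugger)
print(repaired(3))                         # -> 7.0
```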
- Program Synthesis with Pragmatic Communication (arXiv, 2020-07-09)
This work introduces a new inductive bias derived by modeling the program synthesis task as rational communication.
A user study finds that end-user participants communicate more effectively with the pragmatic program synthesizer than with a non-pragmatic one.
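One way to make the rational-communication bias concrete is a Rational Speech Acts style listener: among programs consistent with the examples, prefer those for which a cooperative user would plausibly have chosen exactly these examples. The candidate programs, example pool, and uniform speaker model below are all invented for illustration, not taken from the paper:

```python
def pragmatic_rank(examples, programs, example_pool):
    """Rank consistent programs by how likely a cooperative user would
    pick exactly these examples to single the program out."""
    ranked = []
    for prog in programs:
        if not all(prog(x) == y for x, y in examples):
            continue                       # literal consistency first
        # Speaker model: examples drawn uniformly from the pairs the
        # program satisfies, so a program satisfied by fewer pairs makes
        # the observed examples more probable.
        satisfied = [e for e in example_pool if prog(e[0]) == e[1]]
        likelihood = (1.0 / len(satisfied)) ** len(examples)
        ranked.append((likelihood, prog.__name__))
    return sorted(ranked, reverse=True)

def double(x): return 2 * x
def add_two(x): return x + 2

pool = [(0, 0), (1, 2), (2, 4), (1, 3), (2, 5)]
# Both programs map 2 -> 4, but (2, 4) is stronger evidence for add_two
# here because add_two satisfies fewer pool pairs overall:
print(pragmatic_rank([(2, 4)], [double, add_two], pool))
```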