Neural Program Synthesis with a Differentiable Fixer
- URL: http://arxiv.org/abs/2006.10924v1
- Date: Fri, 19 Jun 2020 01:49:46 GMT
- Title: Neural Program Synthesis with a Differentiable Fixer
- Authors: Matej Balog, Rishabh Singh, Petros Maniatis, Charles Sutton
- Abstract summary: We present a new program synthesis approach that combines an encoder-decoder based synthesis architecture with a differentiable program fixer.
We train our architecture end-to-end on the RobustFill domain, and show that the addition of the fixer module leads to a significant improvement in synthesis accuracy.
- Score: 44.48509453344902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new program synthesis approach that combines an encoder-decoder
based synthesis architecture with a differentiable program fixer. Our approach
is inspired by the observation that human developers seldom get a program
correct on the first attempt, instead performing iterative, testing-based
fixing to reach the desired functionality. Similarly, our approach
first learns a distribution over programs conditioned on an encoding of a set
of input-output examples, and then iteratively performs fix operations using
the differentiable fixer. The fixer takes as input the original examples and
the current program's outputs on example inputs, and generates a new
distribution over the programs with the goal of reducing the discrepancies
between the current program outputs and the desired example outputs. We train
our architecture end-to-end on the RobustFill domain, and show that the
addition of the fixer module leads to a significant improvement in synthesis
accuracy compared to using beam search.
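As a rough illustration of the synthesize-then-fix loop described above, here is a toy sketch in which the paper's learned, differentiable fixer is replaced by a greedy discrete repair step over a tiny string-transformation DSL (the DSL, the function names, and the repair rule are all illustrative assumptions, not from the paper):

```python
# Toy sketch of synthesize-then-fix. The real fixer is a neural module that
# reshapes a distribution over programs; here a greedy edit search stands in.

OPS = {
    "lower": str.lower,
    "upper": str.upper,
    "strip": str.strip,
    "reverse": lambda s: s[::-1],
}

def run(program, x):
    # A program is a sequence of primitive op names, applied left to right.
    for op in program:
        x = OPS[op](x)
    return x

def discrepancy(program, examples):
    # Number of input-output examples the current program gets wrong.
    return sum(run(program, i) != o for i, o in examples)

def fix_step(program, examples):
    # "Fixer": try every single-op insertion or replacement and keep the
    # candidate that most reduces the discrepancy (shorter wins on ties).
    candidates = [program]
    for pos in range(len(program) + 1):
        for op in OPS:
            candidates.append(program[:pos] + [op] + program[pos:])
    for pos in range(len(program)):
        for op in OPS:
            candidates.append(program[:pos] + [op] + program[pos + 1:])
    return min(candidates, key=lambda p: (discrepancy(p, examples), len(p)))

def synthesize(examples, initial, max_iters=5):
    # Start from an initial guess (the synthesizer's output) and iterate fixes.
    program = list(initial)
    for _ in range(max_iters):
        if discrepancy(program, examples) == 0:
            return program
        program = fix_step(program, examples)
    return program
```

For example, given the hypothetical examples `[(" Hello ", "hello"), (" WORLD", "world")]` and the initial guess `["lower"]`, a single fix step repairs the program to `["strip", "lower"]`, which satisfies both examples.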
Related papers
- Hierarchical Neural Program Synthesis [19.94176152035497]
Program synthesis aims to automatically construct human-readable programs that satisfy given task specifications.
We present a scalable program synthesis framework that instead synthesizes a program by hierarchically composing programs.
We extensively evaluate our proposed framework in a string transformation domain with input/output pairs.
arXiv Detail & Related papers (2023-03-09T18:20:07Z)
- Learning from Self-Sampled Correct and Partially-Correct Programs [96.66452896657991]
We propose to let the model perform sampling during training and learn from both self-sampled fully-correct programs and partially-correct programs.
We show that our use of self-sampled correct and partially-correct programs can benefit learning and help guide the sampling process.
Our proposed method improves the pass@k performance by 3.1% to 12.3% compared to learning from a single reference program with MLE.
arXiv Detail & Related papers (2022-05-28T03:31:07Z)
- Recent Developments in Program Synthesis with Evolutionary Algorithms [1.8047694351309207]
We identify the relevant evolutionary program synthesis approaches and provide an in-depth analysis of their performance.
The most influential approaches we identify are stack-based, grammar-guided, as well as linear genetic programming.
For future work, we encourage researchers to assess the quality of a solution not only by a program's output but also by the path taken towards that solution.
arXiv Detail & Related papers (2021-08-27T11:38:27Z)
- Latent Execution for Neural Program Synthesis Beyond Domain-Specific Languages [97.58968222942173]
We take the first step to synthesize C programs from input-output examples.
In particular, we propose LaSynth, which learns a latent representation to approximate the execution of partially generated programs.
We show that training on these synthesized programs further improves the prediction performance for both Karel and C program synthesis.
arXiv Detail & Related papers (2021-06-29T02:21:32Z)
- Learning to Combine Per-Example Solutions for Neural Program Synthesis [35.0204840620086]
Most learning-based approaches try to find a program that satisfies all examples at once.
Our work considers an approach that breaks the problem into two stages: (a) find programs that satisfy only one example, and (b) leverage these per-example solutions to yield a program that satisfies all examples.
arXiv Detail & Related papers (2021-06-14T05:48:12Z)
- Latent Programmer: Discrete Latent Codes for Program Synthesis [56.37993487589351]
In many sequence learning tasks, such as program synthesis and document summarization, a key problem is searching over a large space of possible output sequences.
We propose to learn representations of the outputs that are specifically meant for search: rich enough to specify the desired output but compact enough to make search more efficient.
We introduce the Latent Programmer, a program synthesis method that first predicts a discrete latent code from input/output examples, and then generates the program in the target language.
arXiv Detail & Related papers (2020-12-01T10:11:35Z)
- Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis [81.54148730967394]
We propose SED, a neural program generation framework that incorporates synthesis, execution, and debug stages.
SED first produces initial programs using the neural program synthesizer component, then utilizes a neural program debugger to iteratively repair the generated programs.
On Karel, a challenging input-output program synthesis benchmark, SED reduces the error rate of the neural program synthesizer itself by a considerable margin, and outperforms the standard beam search for decoding.
arXiv Detail & Related papers (2020-07-16T04:15:47Z)
- Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)
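Both the main paper and several of the entries above use beam search as the decoding baseline that iterative repair is compared against. A generic, self-contained sketch of beam search over token sequences (the scoring function in the usage example is a made-up stand-in for a learned decoder, not from any of the papers):

```python
import math

def beam_search(score_next, start, width, steps):
    # Each beam entry is (cumulative log-probability, token sequence).
    beam = [(0.0, [start])]
    for _ in range(steps):
        expanded = []
        for logp, seq in beam:
            # score_next returns (token, log-prob) continuations for a prefix.
            for tok, tok_logp in score_next(seq):
                expanded.append((logp + tok_logp, seq + [tok]))
        # Keep only the `width` highest-scoring partial sequences.
        beam = sorted(expanded, reverse=True)[:width]
    return beam
```

With a stand-in scorer that always assigns probability 0.6 to token "a" and 0.4 to token "b", the top beam entry after three steps is the all-"a" continuation; a real synthesizer would score continuations with a decoder conditioned on the input-output examples.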
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.