Learning to Combine Per-Example Solutions for Neural Program Synthesis
- URL: http://arxiv.org/abs/2106.07175v1
- Date: Mon, 14 Jun 2021 05:48:12 GMT
- Title: Learning to Combine Per-Example Solutions for Neural Program Synthesis
- Authors: Disha Shrivastava, Hugo Larochelle, Daniel Tarlow
- Abstract summary: Most learning-based approaches try to find a program that satisfies all examples at once.
Our work considers an approach that breaks the problem into two stages: (a) find programs that satisfy only one example, and (b) leverage these per-example solutions to yield a program that satisfies all examples.
- Score: 35.0204840620086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of program synthesis from examples is to find a computer program
that is consistent with a given set of input-output examples. Most
learning-based approaches try to find a program that satisfies all examples at
once. Our work, by contrast, considers an approach that breaks the problem into
two stages: (a) find programs that satisfy only one example, and (b) leverage
these per-example solutions to yield a program that satisfies all examples. We
introduce the Cross Aggregator neural network module based on a multi-head
attention mechanism that learns to combine the cues present in these
per-example solutions to synthesize a global solution. Evaluation across
programs of different lengths and under two different experimental settings
reveals that, given the same time budget, our technique significantly
improves the success rate over PCCoder (arXiv:1809.04682) and other
ablation baselines. The code, data and trained models for our work can be found
at https://github.com/shrivastavadisha/N-PEPS.
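The abstract describes the Cross Aggregator only at a high level. As a rough illustration of the underlying idea, the NumPy sketch below shows how multi-head attention can mix cues from several per-example solution embeddings into a single global vector. The shapes, head count, and function names here are our assumptions for illustration, not the paper's actual module.

```python
import numpy as np

def multi_head_attention(query, keys, values, num_heads=4):
    """Minimal scaled dot-product multi-head attention (NumPy sketch).

    query:  (d,)   -- embedding of the global synthesis state
    keys:   (n, d) -- embeddings of n per-example solutions
    values: (n, d)
    Returns a (d,) vector mixing cues from the per-example solutions.
    """
    d = query.shape[-1]
    assert d % num_heads == 0
    dh = d // num_heads
    out = np.zeros(d)
    for h in range(num_heads):
        sl = slice(h * dh, (h + 1) * dh)
        q, k, v = query[sl], keys[:, sl], values[:, sl]
        scores = k @ q / np.sqrt(dh)        # (n,) attention logits
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()            # softmax over solutions
        out[sl] = weights @ v               # weighted mix of cues
    return out

rng = np.random.default_rng(0)
state = rng.normal(size=16)             # hypothetical global program state
per_example = rng.normal(size=(5, 16))  # 5 per-example solution embeddings
combined = multi_head_attention(state, per_example, per_example)
print(combined.shape)  # (16,)
```

In the paper's setting, such a combined vector would condition the prediction of the next statement of the global program; here it simply demonstrates the attention-based aggregation pattern.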
Related papers
- Generating Pragmatic Examples to Train Neural Program Synthesizers [20.819451354452085]
A good synthesizer must choose the intended program from the many that are consistent with the given set of examples.
We propose a novel way to amortize this search with neural networks.
arXiv Detail & Related papers (2023-11-09T20:53:00Z) - CORE: Common Random Reconstruction for Distributed Optimization with
Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and scaling up the number of machines.
We propose CORE (Common Random Reconstruction), which compresses the information transmitted between machines.
arXiv Detail & Related papers (2023-09-23T08:45:27Z) - Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates
of Programs [14.940174578659603]
We present a methodology for sampling datasets to train neural-network-based surrogates of programs.
We first characterize the proportion of data to sample from each region of a program's input space based on the complexity of learning a surrogate of the corresponding execution path.
We evaluate these results on a range of real-world programs, demonstrating that complexity-guided sampling results in empirical improvements in accuracy.
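As a loose sketch of the sampling idea above, the snippet below allocates a total sampling budget across input regions in proportion to an estimated learning complexity per region. The proportional rule and the names are illustrative assumptions, not Turaco's actual formula.

```python
def sample_budget(complexities, total):
    """Hypothetical allocation rule: sample from each input region in
    proportion to the estimated complexity of learning a surrogate of
    that region's execution path."""
    z = sum(complexities)
    return [round(total * c / z) for c in complexities]

# Three regions with increasing estimated complexity, 100 samples total.
print(sample_budget([1.0, 3.0, 6.0], 100))  # [10, 30, 60]
```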
arXiv Detail & Related papers (2023-09-21T01:59:20Z) - Learning from Self-Sampled Correct and Partially-Correct Programs [96.66452896657991]
We propose to let the model perform sampling during training and learn from both self-sampled fully-correct programs and partially-correct programs.
We show that our use of self-sampled correct and partially-correct programs can benefit learning and help guide the sampling process.
Our proposed method improves the pass@k performance by 3.1% to 12.3% compared to learning from a single reference program with MLE.
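A minimal sketch of the filtering step this training scheme implies, assuming self-sampled candidate programs arrive as source strings defining a function `f(x)`: execute each sample on the test cases and bucket it as fully or partially correct. The harness and names are illustrative, not the paper's actual training setup.

```python
def partition_samples(samples, tests):
    """Split self-sampled candidate programs into fully-correct and
    partially-correct buckets by executing them on the test cases."""
    fully, partial = [], []
    for src in samples:
        env = {}
        exec(src, env)                 # compile the candidate program
        f = env["f"]
        passed = sum(1 for x, y in tests if f(x) == y)
        if passed == len(tests):
            fully.append(src)
        elif passed > 0:
            partial.append(src)        # partial credit still carries signal
    return fully, partial

tests = [(1, 2), (2, 4), (3, 6)]       # target: f(x) = 2 * x
samples = [
    "def f(x): return 2 * x",          # fully correct
    "def f(x): return x + 1",          # correct only on x == 1
    "def f(x): return x - 1",          # never correct, discarded
]
fully, partial = partition_samples(samples, tests)
print(len(fully), len(partial))  # 1 1
```

Both buckets would then be added to the training data, whereas plain MLE training would use only the single reference program.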
arXiv Detail & Related papers (2022-05-28T03:31:07Z) - Recent Developments in Program Synthesis with Evolutionary Algorithms [1.8047694351309207]
We identify the relevant evolutionary program synthesis approaches and provide an in-depth analysis of their performance.
The most influential approaches we identify are stack-based, grammar-guided, as well as linear genetic programming.
For future work, we encourage researchers to assess the quality of a solution not only by a program's output but also by the path taken towards that solution.
arXiv Detail & Related papers (2021-08-27T11:38:27Z) - Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z) - BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
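A toy illustration of learning-guided bottom-up search: enumerate compositions of operations best-first, ranking intermediate values with a scoring function. Here a simple character-overlap heuristic stands in for BUSTLE's learned model, and the string DSL and names are our assumptions for illustration.

```python
import heapq

# Toy string DSL: each program maps an input string to an output string.
OPS = {
    "upper":  lambda s: s.upper(),
    "first":  lambda s: s[:1],
    "double": lambda s: s + s,
}

def score(values, outputs):
    """Hypothetical stand-in for a learned model: count characters the
    intermediate values already share with the target outputs."""
    return sum(len(set(v) & set(o)) for v, o in zip(values, outputs))

def bottom_up_search(inputs, outputs, budget=200):
    """Best-first bottom-up enumeration: grow a table of (values, program)
    pairs, expanding the most promising intermediate values first."""
    start = tuple(inputs)
    heap = [(-score(start, outputs), start, "x")]  # max-heap via negation
    seen = {start}
    for _ in range(budget):
        if not heap:
            break
        _, values, prog = heapq.heappop(heap)
        if list(values) == outputs:
            return prog
        for name, fn in OPS.items():
            new = tuple(fn(v) for v in values)
            if new not in seen:
                seen.add(new)
                heapq.heappush(
                    heap, (-score(new, outputs), new, f"{name}({prog})"))
    return None

# Find a program mapping "ab" -> "ABAB" and "cd" -> "CDCD".
prog = bottom_up_search(["ab", "cd"], ["ABAB", "CDCD"])
print(prog)  # double(upper(x))
```

The scoring function steers the enumeration toward values that resemble the targets, which is the key effect of conditioning the search on the input-output examples.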
arXiv Detail & Related papers (2020-07-28T17:46:18Z) - Neural Program Synthesis with a Differentiable Fixer [44.48509453344902]
We present a new program synthesis approach that combines an encoder-decoder based synthesis architecture with a differentiable program fixer.
We train our architecture end-to-end on the RobustFill domain, and show that the addition of the fixer module leads to a significant improvement on synthesis accuracy.
arXiv Detail & Related papers (2020-06-19T01:49:46Z) - Creating Synthetic Datasets via Evolution for Neural Program Synthesis [77.34726150561087]
We show that some program synthesis approaches generalize poorly to data distributions different from that of the randomly generated examples.
We propose a new, adversarial approach to control the bias of synthetic data distributions and show that it outperforms current approaches.
arXiv Detail & Related papers (2020-03-23T18:34:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.