BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration
- URL: http://arxiv.org/abs/2007.14381v3
- Date: Thu, 30 Sep 2021 13:50:43 GMT
- Title: BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration
- Authors: Augustus Odena, Kensen Shi, David Bieber, Rishabh Singh, Charles
Sutton, Hanjun Dai
- Abstract summary: We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
- Score: 72.88493072196094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Program synthesis is challenging largely because of the difficulty of search
in a large space of programs. Human programmers routinely tackle the task of
writing complex programs by writing sub-programs and then analyzing their
intermediate results to compose them in appropriate ways. Motivated by this
intuition, we present a new synthesis approach that leverages learning to guide
a bottom-up search over programs. In particular, we train a model to prioritize
compositions of intermediate values during search conditioned on a given set of
input-output examples. This is a powerful combination because of several
emergent properties. First, in bottom-up search, intermediate programs can be
executed, providing semantic information to the neural network. Second, given
the concrete values from those executions, we can exploit rich features based
on recent work on property signatures. Finally, bottom-up search allows the
system substantial flexibility in what order to generate the solution, allowing
the synthesizer to build up a program from multiple smaller sub-programs.
Overall, our empirical evaluation finds that the combination of learning and
bottom-up search is remarkably effective, even with simple supervised learning
approaches. We demonstrate the effectiveness of our technique on two datasets,
one from the SyGuS competition and one of our own creation.
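To make the search strategy concrete, the sketch below illustrates learning-guided bottom-up enumeration in the spirit of the abstract: every candidate program is executed on the examples, and a scoring function decides which intermediate values to keep for further composition. The two-operation string DSL, the hand-written `score` function standing in for the trained model (and for its property-signature features), and all names here are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal illustrative sketch of learning-guided bottom-up enumeration.
# The DSL (concat/upper), the hand-written `score` heuristic standing in for
# the learned model, and all names are assumptions for illustration only.
from itertools import product

def run(expr, inp):
    """Evaluate a small expression tree on a single input string."""
    op, args = expr
    if op == "input":
        return inp
    if op == "lit":
        return args[0]
    if op == "concat":
        return run(args[0], inp) + run(args[1], inp)
    if op == "upper":
        return run(args[0], inp).upper()
    raise ValueError(op)

def score(values, outputs):
    """Stand-in for the learned model: favor intermediate values that share
    characters with the target outputs (a crude, purely illustrative signal)."""
    return sum(len(set(v) & set(o)) for v, o in zip(values, outputs))

def synthesize(inputs, outputs, literals=("_",), max_iters=3, beam=50):
    target = tuple(outputs)
    # The pool maps a tuple of concrete values (one per example) to an expression
    # producing it; executing every candidate is what makes the search bottom-up.
    pool = {tuple(inputs): ("input", ())}
    for lit in literals:
        pool[tuple(lit for _ in inputs)] = ("lit", (lit,))
    for _ in range(max_iters):
        candidates = {}
        for (va, ea), (vb, eb) in product(pool.items(), repeat=2):
            vals = tuple(a + b for a, b in zip(va, vb))
            if vals not in pool:
                candidates.setdefault(vals, ("concat", (ea, eb)))
        for va, ea in pool.items():
            vals = tuple(v.upper() for v in va)
            if vals not in pool:
                candidates.setdefault(vals, ("upper", (ea,)))
        if target in candidates:
            return candidates[target]
        # Keep only the highest-scoring new values, mimicking model-guided pruning.
        ranked = sorted(candidates.items(),
                        key=lambda kv: score(kv[0], outputs), reverse=True)[:beam]
        pool.update(ranked)
    return None

prog = synthesize(["foo", "bar"], ["FOO_foo", "BAR_bar"])
print(prog)
print([run(prog, s) for s in ["foo", "bar"]])  # -> ['FOO_foo', 'BAR_bar']
```

Because the pool is keyed on concrete output values rather than syntax, syntactically different sub-programs with identical behavior collapse into a single entry, which is part of what makes bottom-up enumeration with learned pruning effective.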
Related papers
- IPSynth: Interprocedural Program Synthesis for Software Security Implementation [3.1119394814248253]
We introduce IPSynth, a novel inter-procedural program synthesis approach that automatically learns the specification of the tactic.
Our results show that our approach can accurately locate corresponding spots in the program, synthesize needed code snippets, add them to the program, and outperform ChatGPT in inter-procedural tactic synthesis tasks.
arXiv Detail & Related papers (2024-03-16T07:12:24Z)
- ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis [54.18659323181771]
We characterize several different forms of compositional generalization that are desirable in program synthesis.
We propose ExeDec, a novel decomposition-based strategy that predicts execution subgoals to solve problems step-by-step informed by program execution at each step.
arXiv Detail & Related papers (2023-07-26T01:07:52Z)
- A Divide-Align-Conquer Strategy for Program Synthesis [8.595181704811889]
We show that compositional segmentation can be applied in the programming by examples setting to divide the search for large programs across multiple smaller program synthesis problems.
A structural alignment of the constituent parts in the input and output leads to pairwise correspondences used to guide the program search.
arXiv Detail & Related papers (2023-01-08T19:10:55Z)
- CrossBeam: Learning to Search in Bottom-Up Program Synthesis [51.37514793318815]
We propose training a neural model to learn a hands-on search policy for bottom-up synthesis.
Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs.
We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.
arXiv Detail & Related papers (2022-03-20T04:41:05Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
- Representing Partial Programs with Blended Abstract Semantics [62.20775388513027]
We introduce a technique for representing partially written programs in a program synthesis engine.
We learn an approximate execution model implemented as a modular neural network.
We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs.
arXiv Detail & Related papers (2020-12-23T20:40:18Z)
- Latent Programmer: Discrete Latent Codes for Program Synthesis [56.37993487589351]
In many sequence learning tasks, such as program synthesis and document summarization, a key problem is searching over a large space of possible output sequences.
We propose to learn representations of the outputs that are specifically meant for search: rich enough to specify the desired output but compact enough to make search more efficient.
We introduce the Latent Programmer, a program synthesis method that first predicts a discrete latent code from input/output examples, and then generates the program in the target language.
arXiv Detail & Related papers (2020-12-01T10:11:35Z)
- Learning Differentiable Programs with Admissible Neural Heuristics [43.54820901841979]
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
arXiv Detail & Related papers (2020-07-23T16:07:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.