Efficient Bottom-Up Synthesis for Programs with Local Variables
- URL: http://arxiv.org/abs/2311.03705v1
- Date: Tue, 7 Nov 2023 04:02:52 GMT
- Title: Efficient Bottom-Up Synthesis for Programs with Local Variables
- Authors: Xiang Li and Xiangyu Zhou and Rui Dong and Yihong Zhang and Xinyu Wang
- Abstract summary: Our algorithm can efficiently search programs with local variables.
Lifted interpretation provides a mechanism to enumerate all binding contexts for local variables.
Our ideas are instantiated in the domain of web automation.
- Score: 7.387053183440393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new synthesis algorithm that can efficiently search programs
with local variables (e.g., those introduced by lambdas). Prior bottom-up
synthesis algorithms are not able to evaluate programs with free local
variables, and therefore cannot effectively reduce the search space of such
programs (e.g., using standard observational equivalence reduction techniques),
making synthesis slow. Our algorithm can reduce the space of programs with
local variables. The key idea, dubbed lifted interpretation, is to lift up the
program interpretation process, from evaluating one program at a time to
simultaneously evaluating all programs from a grammar. Lifted interpretation
provides a mechanism to systematically enumerate all binding contexts for local
variables, thereby enabling us to evaluate and reduce the space of programs
with local variables. Our ideas are instantiated in the domain of web
automation. The resulting tool, Arborist, can automate a significantly broader
range of challenging tasks more efficiently than state-of-the-art techniques
including WebRobot and Helena.
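The mechanics are easiest to see next to standard observational equivalence (OE). The sketch below is a toy illustration, not Arborist's implementation: a bottom-up enumerator for lambda bodies in a `map(lambda x: ...)` DSL, where each body is deduplicated by its outputs under every binding context the enclosing `map` can induce. The DSL and names are hypothetical; the point is that once all binding contexts are enumerated, bodies with a free `x` become evaluable, so OE pruning applies to them.

```python
# A minimal sketch (toy DSL, illustrative names; not Arborist's implementation)
# of observational-equivalence reduction for lambda bodies. Plain OE cannot
# evaluate a body containing the free variable `x`; here each body is keyed by
# its outputs under every binding context that `map` can induce, which is the
# observation that lifted interpretation systematizes.
from itertools import product

INPUTS = [[1, 2, 3], [4, 5]]                 # one input list per I/O example

# Every value the local variable `x` can be bound to when a lambda body
# runs inside `map` over the example inputs.
contexts = [x for example in INPUTS for x in example]

def enumerate_bodies(depth):
    bodies = [("x", lambda x: x)]            # base case: the variable itself
    seen = {tuple(contexts)}                 # OE signature of the base body
    for _ in range(depth):
        grown = []
        for (e1, f1), (e2, f2) in product(bodies, repeat=2):
            f = lambda x, f1=f1, f2=f2: f1(x) + f2(x)
            sig = tuple(f(x) for x in contexts)   # outputs under ALL contexts
            if sig not in seen:              # prune observationally equal bodies
                seen.add(sig)
                grown.append((f"({e1} + {e2})", f))
        bodies += grown
    return bodies

for expr, f in enumerate_bodies(depth=2):
    print(f"map(lambda x: {expr}) ->", [[f(x) for x in ex] for ex in INPUTS])
```

Without the per-context signatures, a body such as `x + x` could not be evaluated at all, since `x` is unbound in the top-level environment; enumerating binding contexts is what restores the pruning.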
Related papers
- Searching Latent Program Spaces [0.0]
We propose an algorithm for program induction that learns a distribution over latent programs in a continuous space, enabling efficient search and test-time adaptation.
We show that it can generalize beyond its training distribution and adapt to unseen tasks by utilizing test-time adaptation mechanisms.
arXiv Detail & Related papers (2024-11-13T15:50:32Z)
- Guess & Sketch: Language Model Guided Transpilation [59.02147255276078]
Learned transpilation offers an alternative to manual rewriting and engineering efforts.
Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness.
Guess & Sketch extracts alignment and confidence information from features of the LM and then passes it to a symbolic solver to resolve semantic equivalence.
arXiv Detail & Related papers (2023-09-25T15:42:18Z)
- Improved Tree Search for Automatic Program Synthesis [91.3755431537592]
A key element is being able to perform an efficient search in the space of valid programs.
Here, we suggest a variant of Monte Carlo tree search (MCTS) that leads to state-of-the-art results on two vastly different DSLs.
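As a point of reference for readers unfamiliar with MCTS-based synthesis, here is a generic sketch, not the paper's variant: the grammar, reward, and constants are illustrative. States are partial prefix-notation programs over a toy arithmetic DSL, rollouts complete them at random, and the reward is the fraction of input-output examples the completion satisfies.

```python
# A generic MCTS-for-synthesis sketch, NOT the paper's variant: states are
# partial prefix-notation derivations of a toy arithmetic grammar.
import math, random

random.seed(1)
SPEC = [(1, 3), (2, 5)]                 # (x, f(x)) pairs; target f(x) = 2*x + 1
TOKENS = ["+", "*", "x", "1"]           # prefix notation: "+ x + x 1" is x+(x+1)
MAX_LEN = 7
N, W = {}, {}                           # visit counts and total reward per state

def need(state):                        # subtrees still required to finish
    return 1 + sum(1 if t in "+*" else -1 for t in state)

def legal(state):                       # operators only if they can still fit
    room = MAX_LEN - len(state)
    return [t for t in TOKENS if t in "x1" or need(state) + 1 <= room - 1]

def evaluate(prog, x):                  # recursive-descent prefix evaluator
    it = iter(prog)
    def go():
        t = next(it)
        if t in "+*":
            a, b = go(), go()
            return a + b if t == "+" else a * b
        return x if t == "x" else 1
    return go()

def rollout(state):                     # random completion, then score it
    while need(state) > 0:
        state += (random.choice(legal(state)),)
    return state, sum(evaluate(state, x) == y for x, y in SPEC) / len(SPEC)

def ucb(state, a, c=1.4):               # standard UCT score of a child
    child = state + (a,)
    return W[child] / N[child] + c * math.sqrt(math.log(N[state]) / N[child])

def mcts_iter(root=()):
    path, state = [root], root
    while need(state) > 0 and all(state + (a,) in N for a in legal(state)):
        state += (max(legal(state), key=lambda a: ucb(state, a)),)  # selection
        path.append(state)
    if need(state) > 0:                 # expansion of one unvisited child
        state += (random.choice([a for a in legal(state) if state + (a,) not in N]),)
        path.append(state)
    prog, r = rollout(state)            # simulation
    for s in path:                      # backpropagation
        N[s], W[s] = N.get(s, 0) + 1, W.get(s, 0.0) + r
    return prog, r

best, best_r = None, -1.0
for _ in range(3000):
    prog, r = mcts_iter()
    if r > best_r:
        best, best_r = prog, r
print("best:", " ".join(best), "reward:", best_r)
```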
arXiv Detail & Related papers (2023-03-13T15:09:52Z)
- A Divide-Align-Conquer Strategy for Program Synthesis [8.595181704811889]
We show that compositional segmentation can be applied in the programming-by-example setting to divide the search for large programs across multiple smaller program synthesis problems.
A structural alignment of the constituent parts in the input and output leads to pairwise correspondences used to guide the program search.
arXiv Detail & Related papers (2023-01-08T19:10:55Z)
- Iterative Genetic Improvement: Scaling Stochastic Program Synthesis [11.195558777385259]
Program synthesis aims to automatically find programs in an underlying programming language that satisfy a given specification.
Existing program synthesis techniques do not meet this expectation well, as they suffer from scalability issues.
Here we propose a new framework for program synthesis, called iterative genetic improvement, to overcome this problem.
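For intuition, the following toy loop is a much-simplified stand-in for such a framework: a (1+1)-style mutate-and-select climb over a tiny expression DSL. The DSL, mutation operator, and fitness function are illustrative choices, not the paper's.

```python
# A toy mutate-and-select loop standing in for iterative genetic improvement;
# the DSL, mutation operator, and fitness function are simplifications.
import random

random.seed(0)
SPEC = [({"x": 1, "y": 2}, 5), ({"x": 3, "y": 4}, 11)]   # target: x + 2*y
LEAVES = ["x", "y", 1, 2]

def evaluate(p, env):
    if isinstance(p, tuple):                  # ("+" or "*", left, right)
        op, a, b = p
        return evaluate(a, env) + evaluate(b, env) if op == "+" \
            else evaluate(a, env) * evaluate(b, env)
    return env[p] if isinstance(p, str) else p

def random_program(depth):
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    return (random.choice("+*"), random_program(depth - 1), random_program(depth - 1))

def mutate(p):
    if not isinstance(p, tuple) or random.random() < 0.3:
        return random_program(2)              # replace a random subtree
    op, a, b = p
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

def fitness(p):                               # number of satisfied examples
    return sum(evaluate(p, env) == out for env, out in SPEC)

best = random_program(2)
for step in range(20000):
    cand = mutate(best)
    if fitness(cand) >= fitness(best):        # accept ties to keep moving
        best = cand
    if fitness(best) == len(SPEC):
        break
print("best:", best, "satisfies", fitness(best), "of", len(SPEC), "examples")
```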
arXiv Detail & Related papers (2022-02-26T02:00:35Z)
- Searching for More Efficient Dynamic Programs [61.79535031840558]
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
arXiv Detail & Related papers (2021-09-14T20:52:55Z)
- Latent Execution for Neural Program Synthesis Beyond Domain-Specific Languages [97.58968222942173]
We take the first step to synthesize C programs from input-output examples.
In particular, we propose LaSynth, which learns a latent representation to approximate the execution of partially generated programs.
We show that training on these synthesized programs further improves the prediction performance for both Karel and C program synthesis.
arXiv Detail & Related papers (2021-06-29T02:21:32Z)
- Representing Partial Programs with Blended Abstract Semantics [62.20775388513027]
We introduce a technique for representing partially written programs in a program synthesis engine.
We learn an approximate execution model implemented as a modular neural network.
We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs.
arXiv Detail & Related papers (2020-12-23T20:40:18Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
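The pattern this describes can be compressed into a few lines. In the sketch below a hand-written heuristic stands in for BUSTLE's learned model, and the search is simplified to unary string operations; real bottom-up search also composes multi-argument intermediate values.

```python
# A compressed sketch of learning-guided bottom-up search: candidate value
# vectors are expanded in priority order, with a hand-written heuristic
# standing in for BUSTLE's learned model.
import heapq

EXAMPLES = [("hello", "HELLO!"), ("hi", "HI!")]        # toy input -> output spec

def score(values):
    # stand-in for the learned prioritizer: prefer intermediate values that
    # share characters with the target outputs (lower is better for heapq)
    return -sum(len(set(v) & set(out)) for v, (_, out) in zip(values, EXAMPLES))

OPS = {"upper": lambda s: s.upper(), "excl": lambda s: s + "!"}

start = tuple(i for i, _ in EXAMPLES)
target = tuple(o for _, o in EXAMPLES)
frontier = [(score(start), start, "input")]
seen = {start}                                 # OE-style deduplication of values
while frontier:
    _, values, expr = heapq.heappop(frontier)
    if values == target:
        print("found:", expr)
        break
    for name, f in OPS.items():
        new = tuple(f(v) for v in values)
        if new not in seen:
            seen.add(new)
            heapq.heappush(frontier, (score(new), new, f"{name}({expr})"))
```

On this spec the heuristic pulls `upper(input)` ahead of `excl(input)` and the search reports `excl(upper(input))` after expanding only a handful of value vectors.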
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
- Learning Differentiable Programs with Admissible Neural Heuristics [43.54820901841979]
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
arXiv Detail & Related papers (2020-07-23T16:07:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.