CrossBeam: Learning to Search in Bottom-Up Program Synthesis
- URL: http://arxiv.org/abs/2203.10452v1
- Date: Sun, 20 Mar 2022 04:41:05 GMT
- Title: CrossBeam: Learning to Search in Bottom-Up Program Synthesis
- Authors: Kensen Shi, Hanjun Dai, Kevin Ellis, Charles Sutton
- Abstract summary: We propose training a neural model to learn a hands-on search policy for bottom-up synthesis.
Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs.
We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.
- Score: 51.37514793318815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many approaches to program synthesis perform a search within an enormous
space of programs to find one that satisfies a given specification. Prior works
have used neural models to guide combinatorial search algorithms, but such
approaches still explore a huge portion of the search space and quickly become
intractable as the size of the desired program increases. To tame the search
space blowup, we propose training a neural model to learn a hands-on search
policy for bottom-up synthesis, instead of relying on a combinatorial search
algorithm. Our approach, called CrossBeam, uses the neural model to choose how
to combine previously-explored programs into new programs, taking into account
the search history and partial program executions. Motivated by work in
structured prediction on learning to search, CrossBeam is trained on-policy
using data extracted from its own bottom-up searches on training tasks. We
evaluate CrossBeam in two very different domains, string manipulation and logic
programming. We observe that CrossBeam learns to search efficiently, exploring
much smaller portions of the program space compared to the state-of-the-art.
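To make the search loop concrete, the following minimal Python sketch shows bottom-up synthesis in which a policy chooses which previously explored values to combine next, guided by their executions. The toy string DSL, the `toy_policy` heuristic, and all names are hypothetical stand-ins for illustration; CrossBeam's actual neural model, domains, and on-policy training are described in the paper.

```python
import itertools

# Toy string operations; each maps concrete argument values to a new value.
OPS = {
    "concat": (2, lambda a, b: a + b),
    "upper": (1, lambda a: a.upper()),
}

def common_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def toy_policy(candidates, target):
    """Stand-in for the learned model: prefer the candidate whose output shares the
    longest prefix with the target output (a crude execution-guided heuristic)."""
    return max(candidates, key=lambda c: common_prefix_len(c[1], target))

def bottom_up_search(inputs, target, max_iters=100):
    # The pool of explored values starts with the task inputs: (expression, output) pairs.
    pool = [(name, value) for name, value in inputs.items()]
    explored = {expr for expr, _ in pool}
    for _ in range(max_iters):
        candidates = []
        for op_name, (arity, fn) in OPS.items():
            for args in itertools.product(pool, repeat=arity):
                expr = f"{op_name}({', '.join(e for e, _ in args)})"
                if expr not in explored:
                    candidates.append((expr, fn(*(v for _, v in args))))
        if not candidates:
            return None
        # The policy chooses which combination of explored values to add next,
        # conditioned on the search state and the target output.
        expr, output = toy_policy(candidates, target)
        if output == target:
            return expr
        pool.append((expr, output))
        explored.add(expr)
    return None

if __name__ == "__main__":
    # Example task: from input x = "ab", produce the output "ABab".
    print(bottom_up_search({"x": "ab"}, "ABab"))  # e.g. concat(upper(x), x)
```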
Related papers
- LiteSearch: Efficacious Tree Search for LLM [70.29796112457662]
This study introduces a novel guided tree search algorithm with dynamic node selection and node-level exploration budget.
Experiments conducted on the GSM8K and TabMWP datasets demonstrate that our approach enjoys significantly lower computational costs compared to baseline methods.
arXiv Detail & Related papers (2024-06-29T05:14:04Z)
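Below is a rough, hypothetical sketch of the general idea of guided tree search with dynamic node selection and a node-level exploration budget. The toy arithmetic domain and the `value_estimate` function are illustrative assumptions, not LiteSearch's actual value model or budget rule.

```python
import heapq

def value_estimate(state, target):
    """Stand-in value function: states closer to the target look more promising (in (0, 1])."""
    return 1.0 / (1.0 + abs(target - state))

def guided_tree_search(start, target, max_children=4, max_expansions=1000):
    # Max-heap on the value estimate (heapq is a min-heap, so the score is negated).
    frontier = [(-value_estimate(start, target), start, [])]
    expansions = 0
    while frontier and expansions < max_expansions:
        neg_value, state, path = heapq.heappop(frontier)  # dynamic node selection
        if state == target:
            return path
        expansions += 1
        # Node-level budget: more promising nodes are allowed more child expansions.
        budget = max(1, round(-neg_value * max_children))
        children = [("+1", state + 1), ("*2", state * 2), ("-3", state - 3)]
        children.sort(key=lambda c: value_estimate(c[1], target), reverse=True)
        for op, child in children[:budget]:
            heapq.heappush(frontier, (-value_estimate(child, target), child, path + [op]))
    return None

if __name__ == "__main__":
    # Reach 10 from 1 using the operations +1, *2, -3.
    print(guided_tree_search(1, 10))
```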
- Bayesian Program Learning by Decompiling Amortized Knowledge [50.960612835957875]
We present a novel approach for library learning that directly leverages the neural search policy, effectively "decompiling" its amortized knowledge to extract relevant program components.
This provides stronger amortized inference: the amortized knowledge learnt to reduce search breadth is now also used to reduce search depth.
arXiv Detail & Related papers (2023-06-13T15:35:01Z)
- RF-Next: Efficient Receptive Field Search for Convolutional Neural Networks [86.6139619721343]
We propose to find better receptive field combinations through a global-to-local search scheme.
Our search scheme exploits both global search to find the coarse combinations and local search to get the refined receptive field combinations.
Our RF-Next models, which plug receptive field search into various models, boost performance on many tasks.
arXiv Detail & Related papers (2022-06-14T06:56:26Z)
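The following toy sketch illustrates a global-to-local search scheme of the kind the RF-Next summary describes: a coarse global sweep over dilation-rate combinations followed by local refinement around the best one. The `proxy_score` objective and all parameters are hypothetical stand-ins for actually training and evaluating a network.

```python
import itertools

def proxy_score(rates, target=(3, 7, 5)):
    """Stand-in evaluation: pretend the best per-layer dilation rates are `target`."""
    return -sum((r - t) ** 2 for r, t in zip(rates, target))

def global_to_local_search(num_layers=3, coarse=(1, 4, 8, 16), radius=2, rounds=3):
    # Global stage: sweep a sparse grid of receptive field combinations.
    best = max(itertools.product(coarse, repeat=num_layers), key=proxy_score)
    # Local stage: repeatedly refine each layer's rate within a small neighborhood.
    for _ in range(rounds):
        for layer in range(num_layers):
            candidates = []
            for delta in range(-radius, radius + 1):
                rates = list(best)
                rates[layer] = max(1, rates[layer] + delta)
                candidates.append(tuple(rates))
            best = max(candidates, key=proxy_score)
    return best

if __name__ == "__main__":
    print(global_to_local_search())  # converges to (3, 7, 5) under the toy objective
```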
- Exploring Complicated Search Spaces with Interleaving-Free Sampling [127.07551427957362]
In this paper, we build the search algorithm upon a complicated search space with long-distance connections.
We present a simple yet effective algorithm named IF-NAS, where we perform a periodic sampling strategy to construct different sub-networks.
In the proposed search space, IF-NAS outperforms both random sampling and previous weight-sharing search algorithms by a significant margin.
arXiv Detail & Related papers (2021-12-05T06:42:48Z)
- Fast Line Search for Multi-Task Learning [0.0]
We propose a novel idea for line search algorithms in multi-task learning.
The idea is to use latent representation space instead of parameter space for finding step size.
We compare this idea with classical backtracking and gradient methods with a constant learning rate on the MNIST, CIFAR-10, and Cityscapes tasks.
arXiv Detail & Related papers (2021-10-02T21:02:29Z)
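Here is a loose numpy sketch of the latent-space line search idea: the step size is accepted based on how far the shared representation moves, rather than the parameters. The toy two-task regression model and the acceptance rule are assumptions made for illustration; the paper's exact criterion may differ.

```python
import numpy as np

def latent(w, X):
    return X @ w  # shared "representation" of the inputs under parameters w

def loss(w, X, y1, y2):
    z = latent(w, X)
    return np.mean((z - y1) ** 2) + np.mean((z - y2) ** 2)  # two-task objective

def latent_line_search(w, grad, X, y1, y2, alpha=1.0, shrink=0.5, max_latent_step=1.0):
    """Backtrack on alpha until the loss decreases and the induced movement of the
    latent representation stays within a budget (step size judged in latent space)."""
    base = loss(w, X, y1, y2)
    z0 = latent(w, X)
    for _ in range(30):
        w_new = w - alpha * grad
        latent_step = np.linalg.norm(latent(w_new, X) - z0)
        if loss(w_new, X, y1, y2) < base and latent_step <= max_latent_step:
            return alpha
        alpha *= shrink
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 4))
    w = rng.normal(size=4)
    y1, y2 = X @ np.ones(4), X @ np.full(4, 0.5)  # two related regression tasks
    for _ in range(20):
        z = X @ w
        grad = 2 * X.T @ (z - y1) / len(X) + 2 * X.T @ (z - y2) / len(X)
        w -= latent_line_search(w, grad, X, y1, y2) * grad
    print(round(loss(w, X, y1, y2), 4))
```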
- Task-Aware Neural Architecture Search [33.11791812491669]
We propose a novel framework for neural architecture search, utilizing a dictionary of models of base tasks and the similarity between the target task and the atoms of the dictionary.
By introducing a gradient-based search algorithm, we can evaluate and discover the best architecture in the search space without fully training the networks.
arXiv Detail & Related papers (2020-10-27T00:10:40Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
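Below is a rough sketch of learning-guided bottom-up enumeration in the spirit of the BUSTLE summary: intermediate values are enumerated by weight, and a stand-in for the learned model discounts the weight of values that look useful for the input-output examples, so promising compositions are recombined earlier. The toy arithmetic DSL and the `usefulness_discount` heuristic are hypothetical, not the paper's model or DSL.

```python
import itertools

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def usefulness_discount(value, output):
    """Stand-in for the learned prioritizer: intermediate values that evenly divide the
    target output are treated as more promising and get a weight discount of 1."""
    return 1 if value != 0 and output % value == 0 else 0

def bustle_style_search(inputs, output, max_weight=8):
    # values_by_weight[w] maps expression -> concrete value for every value of weight w.
    values_by_weight = {1: dict(inputs)}
    values_by_weight[1].update({"1": 1, "2": 2})  # small integer literals
    for weight in range(2, max_weight + 1):
        values_by_weight.setdefault(weight, {})
        # Split the weight between the operator (cost 1) and the two argument subtrees.
        for w1 in range(1, weight - 1):
            w2 = weight - 1 - w1
            pairs = itertools.product(values_by_weight.get(w1, {}).items(),
                                      values_by_weight.get(w2, {}).items())
            for (e1, v1), (e2, v2) in pairs:
                for op_name, fn in OPS.items():
                    expr, value = f"{op_name}({e1}, {e2})", fn(v1, v2)
                    if value == output:
                        return expr
                    # Reweighting step: promising values are filed under a cheaper
                    # weight, so they are recombined earlier in later iterations.
                    w = weight - usefulness_discount(value, output)
                    values_by_weight.setdefault(w, {}).setdefault(expr, value)
    return None

if __name__ == "__main__":
    # Example: from the input x = 3, reach the output 12, e.g. mul(x, add(x, 1)).
    print(bustle_style_search({"x": 3}, 12))
```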
- Learning Differentiable Programs with Admissible Neural Heuristics [43.54820901841979]
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
arXiv Detail & Related papers (2020-07-23T16:07:39Z)
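The graph-search framing above can be illustrated with an A*-style search over top-down derivations of a tiny expression grammar, where the path cost counts tokens placed and the heuristic is a structural lower bound (at least one more token per remaining hole). The paper derives its admissible heuristics from neural relaxations of partial programs; the hole-count bound here is only a simple stand-in, and the grammar and examples are hypothetical.

```python
import heapq

TERMINALS = ["x", "1"]
OPERATORS = ["add", "mul"]

def evaluate(tokens, x):
    """Evaluate a complete prefix-notation program; returns (value, remaining tokens)."""
    head, rest = tokens[0], tokens[1:]
    if head == "x":
        return x, rest
    if head == "1":
        return 1, rest
    a, rest = evaluate(rest, x)
    b, rest = evaluate(rest, x)
    return (a + b if head == "add" else a * b), rest

def fits(tokens, examples):
    return all(evaluate(tokens, x)[0] == y for x, y in examples)

def astar_synthesize(examples, max_tokens=9):
    # Nodes are partial programs (prefix token tuples with "?" holes); the path cost g
    # counts tokens already placed, and the heuristic h counts remaining holes, an
    # admissible lower bound since every hole needs at least one more token.
    frontier = [(1, 0, ("?",))]  # (g + h, g, partial program)
    while frontier:
        _, g, prog = heapq.heappop(frontier)
        if "?" not in prog:
            if fits(prog, examples):
                return prog
            continue
        if g >= max_tokens:
            continue
        i = prog.index("?")  # expand the leftmost hole (one derivation per program)
        children = [prog[:i] + (t,) + prog[i + 1:] for t in TERMINALS]
        children += [prog[:i] + (op, "?", "?") + prog[i + 1:] for op in OPERATORS]
        for child in children:
            h = child.count("?")
            heapq.heappush(frontier, (g + 1 + h, g + 1, child))
    return None

if __name__ == "__main__":
    # Find a smallest program consistent with f(x) = x * (x + 1) on a few examples.
    examples = [(1, 2), (2, 6), (3, 12)]
    print(astar_synthesize(examples))
```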