Scaling Neural Program Synthesis with Distribution-based Search
- URL: http://arxiv.org/abs/2110.12485v1
- Date: Sun, 24 Oct 2021 16:46:01 GMT
- Title: Scaling Neural Program Synthesis with Distribution-based Search
- Authors: Nathanaël Fijalkow, Guillaume Lagarde, Théo Matricon, Kevin Ellis, Pierre Ohlmann, Akarsh Potta
- Abstract summary: We introduce two new search algorithms: Heap Search and SQRT Sampling.
We show how they integrate with probabilistic and neural techniques, and demonstrate how they can operate at scale across parallel compute environments.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of automatically constructing computer programs from
input-output examples. We investigate how to augment probabilistic and neural
program synthesis methods with new search algorithms, proposing a framework
called distribution-based search. Within this framework, we introduce two new
search algorithms: Heap Search, an enumerative method, and SQRT Sampling, a
probabilistic method. We prove certain optimality guarantees for both methods,
show how they integrate with probabilistic and neural techniques, and
demonstrate how they can operate at scale across parallel compute environments.
Collectively these findings offer theoretical and applied studies of search
algorithms for program synthesis that integrate with recent developments in
machine-learned program synthesizers.
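The abstract names the two algorithms without detail, so here is a minimal sketch of both ideas over a toy weighted grammar. Everything below (the grammar RULES, the function names, the rule probabilities) is an illustrative assumption: the enumerator only conveys the best-first ordering idea behind Heap Search, not its exact data structure, and the sampler applies the square-root reweighting behind SQRT Sampling rule by rule rather than via the paper's construction.

```python
import heapq
import math
import random

# Toy probabilistic grammar: nonterminal -> [(probability, expansion)].
RULES = {"S": [(0.6, ("x",)), (0.3, ("1",)), (0.1, ("(", "S", "+", "S", ")"))]}

def heap_enumerate(rules, start="S", limit=5):
    """Best-first enumeration of complete programs in non-increasing
    probability order via a priority queue keyed by -log probability."""
    heap = [(0.0, (start,))]
    emitted = 0
    while heap and emitted < limit:
        neg_logp, sent = heapq.heappop(heap)
        idx = next((i for i, s in enumerate(sent) if s in rules), None)
        if idx is None:  # no nonterminal left: the program is complete
            yield math.exp(-neg_logp), " ".join(sent)
            emitted += 1
            continue
        for prob, expansion in rules[sent[idx]]:
            child = sent[:idx] + expansion + sent[idx + 1:]
            heapq.heappush(heap, (neg_logp - math.log(prob), child))

def sqrt_sample(rules, start="S"):
    """Sample one program after reweighting each rule probability to be
    proportional to its square root -- the transformation behind SQRT
    Sampling, applied here independently at every derivation step."""
    sent = [start]
    while any(s in rules for s in sent):
        idx = next(i for i, s in enumerate(sent) if s in rules)
        probs, expansions = zip(*rules[sent[idx]])
        weights = [math.sqrt(p) for p in probs]
        choice = random.choices(expansions, weights=weights, k=1)[0]
        sent[idx:idx + 1] = list(choice)
    return " ".join(sent)

for p, prog in heap_enumerate(RULES):
    print(f"{p:.3f}  {prog}")   # x, 1, then bracketed sums, best first
print(sqrt_sample(RULES))
```

The square-root reweighting flattens the distribution, so low-probability programs are drawn sooner than under direct sampling while likely programs still dominate; the paper proves the precise optimality statements for both methods.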
Related papers
- Searching Latent Program Spaces (2024-11-13)
We propose an algorithm for program induction that learns a distribution over latent programs in a continuous space, enabling efficient search and test-time adaptation.
We show that it can generalize beyond its training distribution and adapt to unseen tasks by utilizing test-time adaptation mechanisms.
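A companion sketch for the entry above: test-time search over a continuous latent program space by simple hill climbing. The decoder here is a hypothetical stand-in that reads two latent dimensions as integer coefficients; the paper uses a learned decoder and its own adaptation mechanism.

```python
import numpy as np

def latent_search(decode, fitness, dim=8, iters=300, sigma=0.3, seed=0):
    """Hill-climb in a continuous latent space: perturb the latent code
    and keep the move whenever the decoded program scores better on the
    task examples."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=dim)
    best = fitness(decode(z))
    for _ in range(iters):
        cand = z + sigma * rng.normal(size=dim)
        f = fitness(decode(cand))
        if f > best:
            z, best = cand, f
    return z, best

# Hypothetical decoder: latent dims 0 and 1 become integer coefficients.
decode = lambda z: (lambda x: round(z[0]) * x + round(z[1]))
examples = [(1, 3), (3, 7)]                     # target f(x) = 2x + 1
fitness = lambda prog: -sum(abs(prog(x) - y) for x, y in examples)
z, best = latent_search(decode, fitness)
print(best)   # 0 once a latent code decoding to a perfect program is found
```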
- From Probabilistic Programming to Complexity-based Programming (2023-07-28)
The paper presents the main characteristics and a preliminary implementation of a novel computational framework named CompLog.
Inspired by probabilistic programming systems like ProbLog, CompLog builds upon the inferential mechanisms proposed by Simplicity Theory.
The proposed system enables users to compute ex-post and ex-ante measures of unexpectedness of a certain situation.
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL (2023-04-12)
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
- On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network (2022-06-30)
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms restrict their focus to the single-machine setting, so they cannot handle distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradient estimators.
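For the gradient tracking mechanism mentioned above, a single-level sketch with assumed toy objectives and gossip matrix; the bilevel part of the paper (hypergradient estimation for the inner problem) is omitted entirely.

```python
import numpy as np

def gradient_tracking(grads, W, x0, steps=200, lr=0.05):
    """Each node mixes neighbours' iterates through the gossip matrix W
    and maintains a tracker y of the network-average gradient:
        x <- W x - lr * y,   y <- W y + (new local grad) - (old local grad)."""
    n = len(grads)
    x = np.full(n, x0, dtype=float)
    y = np.array([g(x0) for g in grads], dtype=float)   # initialize trackers
    for _ in range(steps):
        old = np.array([g(xi) for g, xi in zip(grads, x)])
        x = W @ x - lr * y
        new = np.array([g(xi) for g, xi in zip(grads, x)])
        y = W @ y + new - old
    return x.mean()

# Two nodes jointly minimizing (x-1)^2 + (x-3)^2 without sharing raw data.
grads = [lambda x: 2 * (x - 1), lambda x: 2 * (x - 3)]
W = np.array([[0.5, 0.5], [0.5, 0.5]])        # doubly stochastic gossip matrix
print(gradient_tracking(grads, W, x0=0.0))    # approaches 2.0, the consensus optimum
```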
- CrossBeam: Learning to Search in Bottom-Up Program Synthesis (2022-03-20)
We propose training a neural model to learn a hands-on search policy for bottom-up synthesis.
Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs.
We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.
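CrossBeam's learned policy is out of scope for a digest, but the bottom-up, value-based search it guides is easy to sketch. Below, an assumed toy arithmetic DSL is enumerated exhaustively with observational-equivalence pruning; the inner combination step is exactly where CrossBeam would instead let the neural model propose which programs to combine.

```python
import itertools

def bottom_up_search(inputs, outputs, max_rounds=4):
    """Grow a pool of (expression, values) pairs by combining previously
    built programs, pruning any program whose values on the examples
    duplicate one already seen."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    # Seed the pool with the input variable and a couple of constants.
    pool = {("x", tuple(inputs))} | {(str(c), (c,) * len(inputs)) for c in (1, 2)}
    seen = {vals for _, vals in pool}
    target = tuple(outputs)
    for _ in range(max_rounds):
        fresh = set()
        for (e1, v1), (e2, v2) in itertools.product(pool, repeat=2):
            for name, op in ops.items():
                vals = tuple(op(a, b) for a, b in zip(v1, v2))
                expr = f"({e1} {name} {e2})"
                if vals == target:
                    return expr
                if vals not in seen:   # observational-equivalence pruning
                    seen.add(vals)
                    fresh.add((expr, vals))
        pool |= fresh
    return None

# Synthesize f(x) = 2*x + 1 from two input-output examples.
print(bottom_up_search(inputs=[1, 3], outputs=[3, 7]))   # ((x * 2) + 1) or an equivalent
```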
- A Two-stage Framework and Reinforcement Learning-based Optimization Algorithms for Complex Scheduling Problems (2021-03-10)
We develop a two-stage framework in which reinforcement learning (RL) and traditional operations research (OR) algorithms are combined.
The scheduling problem is solved in two stages: a finite Markov decision process (MDP) followed by a mixed-integer programming process.
Results show that the proposed algorithms could stably and efficiently obtain satisfactory scheduling schemes for agile Earth observation satellite scheduling problems.
- Meta-Learning an Inference Algorithm for Probabilistic Programs (2021-03-01)
We present a meta-algorithm for learning a posterior-inference algorithm for restricted probabilistic programs.
A key feature of our approach is the use of a white-box inference algorithm that extracts information directly from model descriptions.
- Towards Large Scale Automated Algorithm Design by Integrating Modular Benchmarking Frameworks (2021-02-12)
We present a first proof-of-concept use-case that demonstrates the efficiency of the algorithm framework ParadisEO with the automated algorithm configuration tool irace and the experimental platform IOHprofiler.
Key advantages of our pipeline are fast evaluation times, the possibility to generate rich data sets, and a standardized interface that can be used to benchmark very broad classes of sampling-based optimization heuristics.
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration (2020-07-28)
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
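A hedged sketch of that prioritization: intermediate (expression, values) pairs are popped in order of a score before being expanded. The distance-to-target heuristic below is a hand-written stand-in for BUSTLE's trained model, which conditions on the input-output examples.

```python
import heapq

def prioritized_pool(candidates, score):
    """Yield (expression, values) pairs best-score-first, so the search
    expands promising intermediate values before unpromising ones."""
    heap = [(-score(vals), expr, vals) for expr, vals in candidates]
    heapq.heapify(heap)
    while heap:
        _, expr, vals = heapq.heappop(heap)
        yield expr, vals

# Stand-in heuristic: prefer values elementwise close to the target outputs.
target = (3, 7)
score = lambda vals: -sum(abs(a - b) for a, b in zip(vals, target))
pool = [("x * 2", (2, 6)), ("x + 1", (2, 4)), ("(x * 2) + 1", (3, 7))]
for expr, vals in prioritized_pool(pool, score):
    print(expr, vals)   # the exact match is expanded first
```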
- Learning Differentiable Programs with Admissible Neural Heuristics (2020-07-23)
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
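That framing (paths in a weighted graph, pruned by an admissible heuristic) is classical A*; below is a generic sketch over an assumed toy graph, with a hand-written admissible heuristic standing in for the paper's neural relaxations.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Expand nodes in order of g + h. If h never overestimates the true
    remaining cost (admissibility), the first goal popped is optimal --
    the guarantee the paper derives for its neural heuristics."""
    heap = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while heap:
        _, g, node, path = heapq.heappop(heap)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

# Toy weighted graph standing in for a space of partial program derivations.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}.get   # admissible by construction
print(a_star("S", "G", lambda n: graph[n], h))   # (5.0, ['S', 'B', 'G'])
```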
- Parallelization Techniques for Verifying Neural Networks (2020-04-17)
We introduce an algorithm that iteratively splits the verification problem and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
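One generic form of such partitioning is to bisect the input region into independent subproblems that separate workers can verify; a minimal sketch follows (the paper's concrete strategies and activation-phase preprocessing differ).

```python
def split_box(box, axis):
    """Bisect one input interval of a verification query, producing two
    independent subproblems suitable for parallel checking."""
    lo, hi = box[axis]
    mid = (lo + hi) / 2
    left, right = dict(box), dict(box)
    left[axis] = (lo, mid)
    right[axis] = (mid, hi)
    return left, right

# A 2-D input region for a robustness query, split along input "x0".
box = {"x0": (0.0, 1.0), "x1": (-1.0, 1.0)}
print(split_box(box, "x0"))
```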