Learning Differentiable Programs with Admissible Neural Heuristics
- URL: http://arxiv.org/abs/2007.12101v5
- Date: Sun, 28 Mar 2021 01:15:33 GMT
- Title: Learning Differentiable Programs with Admissible Neural Heuristics
- Authors: Ameesh Shah, Eric Zhan, Jennifer J. Sun, Abhinav Verma, Yisong Yue,
Swarat Chaudhuri
- Abstract summary: We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
- Score: 43.54820901841979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of learning differentiable functions expressed as
programs in a domain-specific language. Such programmatic models can offer
benefits such as composability and interpretability; however, learning them
requires optimizing over a combinatorial space of program "architectures". We
frame this optimization problem as a search in a weighted graph whose paths
encode top-down derivations of program syntax. Our key innovation is to view
various classes of neural networks as continuous relaxations over the space of
programs, which can then be used to complete any partial program. This relaxed
program is differentiable and can be trained end-to-end, and the resulting
training loss is an approximately admissible heuristic that can guide the
combinatorial search. We instantiate our approach on top of the A*
algorithm and an iteratively deepened branch-and-bound search, and use these
algorithms to learn programmatic classifiers in three sequence classification
tasks. Our experiments show that the algorithms outperform state-of-the-art
methods for program learning, and that they discover programmatic classifiers
that yield natural interpretations and achieve competitive accuracy.
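
The abstract describes the method only at a high level; the following is a minimal, self-contained sketch of the idea, not the authors' implementation. It assumes a toy DSL of arithmetic expressions, relaxes each unfinished hole to a small learnable affine function (a stand-in for the paper's neural relaxations), fits all free parameters by crude finite-difference gradient descent, and uses the resulting loss as the heuristic h in an A*-style search, with a small size penalty playing the role of the path cost g. Every identifier and constant below is an assumption of this sketch.

```python
import heapq
import itertools

import numpy as np

# Toy regression data the synthesized program should fit: y = 2*x + 3.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=30)
Y = 2.0 * X + 3.0

# A (partial) program is a nested tuple over a toy DSL:
#   ("x",) | ("c",) learnable constant | ("add", l, r) | ("mul", l, r) | ("hole",)
HOLE = ("hole",)

def has_hole(p):
    return p == HOLE or (p[0] in ("add", "mul") and (has_hole(p[1]) or has_hole(p[2])))

def size(p):
    return 1 if p[0] not in ("add", "mul") else 1 + size(p[1]) + size(p[2])

def expand(p):
    """Top-down derivation step: rewrite the leftmost hole with each production."""
    if p == HOLE:
        yield from (("x",), ("c",), ("add", HOLE, HOLE), ("mul", HOLE, HOLE))
    elif p[0] in ("add", "mul"):
        op, l, r = p
        if has_hole(l):
            for l2 in expand(l):
                yield (op, l2, r)
        else:
            for r2 in expand(r):
                yield (op, l, r2)

def n_params(p):
    if p == HOLE:
        return 2  # a hole is relaxed to a learnable affine map a*x + b
    if p == ("c",):
        return 1
    if p == ("x",):
        return 0
    return n_params(p[1]) + n_params(p[2])

def evaluate(p, theta, x):
    """Evaluate a (partial) program, consuming free parameters from the list."""
    if p == ("x",):
        return x
    if p == ("c",):
        return np.full_like(x, theta.pop(0))
    if p == HOLE:
        a, b = theta.pop(0), theta.pop(0)
        return a * x + b  # crude stand-in for a neural completion of the hole
    l, r = evaluate(p[1], theta, x), evaluate(p[2], theta, x)
    return l + r if p[0] == "add" else l * r

def relaxation_loss(p, steps=150, lr=0.1, eps=1e-4):
    """Fit all free parameters (holes and constants) by finite-difference gradient
    descent and return the final MSE.  In the paper, this role is played by the
    end-to-end training loss of a neural relaxation, which serves as the
    (approximately admissible) heuristic."""
    k = n_params(p)
    theta = np.zeros(k)
    mse = lambda t: float(np.mean((evaluate(p, list(t), X) - Y) ** 2))
    for _ in range(steps):
        base, grad = mse(theta), np.zeros(k)
        for i in range(k):
            bumped = theta.copy()
            bumped[i] += eps
            grad[i] = (mse(bumped) - base) / eps
        theta -= lr * grad
    return mse(theta)

def astar(max_nodes=5, size_penalty=0.01):
    """A*-style search over partial programs with f = structural cost + heuristic."""
    tie = itertools.count()  # tie-breaker so the heap never compares program tuples
    frontier = [(relaxation_loss(HOLE), next(tie), HOLE)]
    while frontier:
        f, _, prog = heapq.heappop(frontier)
        if not has_hole(prog):
            return prog, f  # first complete program popped
        for child in expand(prog):
            if size(child) > max_nodes:
                continue
            g = size_penalty * size(child)  # cost of the structure built so far
            h = relaxation_loss(child)      # optimistic estimate of the rest
            heapq.heappush(frontier, (g + h, next(tie), child))

if __name__ == "__main__":
    program, cost = astar()
    print("best program:", program, " f =", round(cost, 4))
```

On this toy task, the first complete program popped should fit y = 2x + 3 essentially exactly, for example ("add", ("x",), ("add", ("x",), ("c",))) with the constant fitted to 3; the heuristic steers the search toward partial programs whose relaxations already achieve low loss.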
Related papers
- Learning with Differentiable Algorithms [6.47243430672461]
This thesis explores combining classic algorithms with machine learning systems such as neural networks.
The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm.
In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable sorting gates, and differentiable logic gate networks.
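As one concrete flavor of such a relaxation (a sketch under assumptions, not the thesis's implementation), a differentiable sorting network can be built by replacing the hard compare-and-swap of an odd-even transposition network with a logistic soft swap; the steepness parameter and all names below are illustrative.

```python
import numpy as np

def soft_swap(a, b, steepness=10.0):
    """Differentiable surrogate for (min(a, b), max(a, b)) using a logistic gate."""
    s = 1.0 / (1.0 + np.exp(-steepness * (b - a)))  # ~1 if b > a, ~0 if a > b
    return s * a + (1 - s) * b, s * b + (1 - s) * a

def soft_sort(values, steepness=10.0):
    """Odd-even transposition network with soft comparators, differentiable in its inputs."""
    x = list(values)
    n = len(x)
    for round_index in range(n):
        for i in range(round_index % 2, n - 1, 2):
            x[i], x[i + 1] = soft_swap(x[i], x[i + 1], steepness)
    return np.array(x)

print(soft_sort([3.0, 1.0, 2.0]))  # approximately [1, 2, 3]
```

As the steepness grows, the soft swap approaches the exact min/max pair and ordinary sorting is recovered; at finite steepness the output is differentiable in the inputs, which is what allows gradients to flow through the algorithm.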
arXiv Detail & Related papers (2022-09-01T17:30:00Z)
- Searching for More Efficient Dynamic Programs [61.79535031840558]
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
arXiv Detail & Related papers (2021-09-14T20:52:55Z)
- Learning compositional programs with arguments and sampling [12.790055619773565]
We train a machine learning model to discover a program that satisfies specific requirements.
We extend a state-of-the-art model, AlphaNPI, by learning to generate functions that can accept arguments.
arXiv Detail & Related papers (2021-09-01T21:27:41Z)
- Meta-Learning an Inference Algorithm for Probabilistic Programs [13.528656805820459]
We present a meta-algorithm for learning a posterior-inference algorithm for restricted probabilistic programs.
A key feature of our approach is the use of a white-box inference algorithm that extracts information directly from model descriptions.
arXiv Detail & Related papers (2021-03-01T04:05:11Z)
- Representing Partial Programs with Blended Abstract Semantics [62.20775388513027]
We introduce a technique for representing partially written programs in a program synthesis engine.
We learn an approximate execution model implemented as a modular neural network.
We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs.
arXiv Detail & Related papers (2020-12-23T20:40:18Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
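As a sketch of this search style (illustrative assumptions throughout; this is not BUSTLE's DSL, model, or API), the snippet below enumerates arithmetic expressions bottom-up by size, represents each candidate by its vector of values on the examples, and uses a hand-written scoring function in place of the learned model to decide which intermediate values to keep.

```python
import itertools

# Illustrative input-output examples; the target behaviour here is y = 2*x + 1.
EXAMPLES = [(0, 1), (1, 3), (2, 5), (3, 7)]
INPUTS = tuple(x for x, _ in EXAMPLES)
TARGET = tuple(y for _, y in EXAMPLES)

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def score(values):
    """Stand-in for the learned model: rank an intermediate value by how close it
    looks to the target outputs (BUSTLE instead trains a model conditioned on the
    input-output examples)."""
    return -sum(abs(v - t) for v, t in zip(values, TARGET))

def bottom_up(max_size=5, beam=10):
    """Enumerate expressions by size, deduplicate by value vector, and keep only
    the highest-scoring intermediate values at each size."""
    pool = {1: [(INPUTS, "x"), ((1,) * len(INPUTS), "1"), ((2,) * len(INPUTS), "2")]}
    seen = {v for v, _ in pool[1]}
    for expr_size in range(2, max_size + 1):
        candidates = []
        for left_size in range(1, expr_size - 1):
            right_size = expr_size - 1 - left_size
            for (lv, le), (rv, re) in itertools.product(pool[left_size], pool[right_size]):
                for name, fn in OPS.items():
                    values = tuple(fn(a, b) for a, b in zip(lv, rv))
                    if values == TARGET:
                        return f"{name}({le}, {re})"  # satisfies all examples
                    if values not in seen:
                        candidates.append((values, le, re, name))
        # Learning-guided pruning: keep only the most promising intermediate values.
        candidates.sort(key=lambda c: score(c[0]), reverse=True)
        pool[expr_size] = [(v, f"{n}({l}, {r})") for v, l, r, n in candidates[:beam]]
        seen.update(v for v, _ in pool[expr_size])
    return None

print(bottom_up())  # should print an expression equivalent to 2*x + 1
```

With the toy operators and examples above, the search should return an expression equivalent to 2*x + 1, such as add(1, add(x, x)).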
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
- Strong Generalization and Efficiency in Neural Programs [69.18742158883869]
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.
By carefully designing the input / output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes.
arXiv Detail & Related papers (2020-07-07T17:03:02Z)
- Can We Learn Heuristics For Graphical Model Inference Using Reinforcement Learning? [114.24881214319048]
We show that we can learn programs, i.e., policies, for solving inference in higher order Conditional Random Fields (CRFs) using reinforcement learning.
Our method solves inference tasks efficiently without imposing any constraints on the form of the potentials.
arXiv Detail & Related papers (2020-04-27T19:24:04Z)