Guided Sketch-Based Program Induction by Search Gradients
- URL: http://arxiv.org/abs/2402.06990v1
- Date: Sat, 10 Feb 2024 16:47:53 GMT
- Title: Guided Sketch-Based Program Induction by Search Gradients
- Authors: Ahmad Ayaz Amin
- Abstract summary: We propose a framework for learning parameterized programs via search gradients using evolution strategies.
This formulation departs from traditional program induction in that it allows the programmer to impart task-specific code to the program 'sketch'.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many tasks can be solved easily with machine learning techniques. Some
tasks, however, cannot readily be solved with statistical models and instead require
a symbolic approach. Program induction is one way to solve such tasks: an
interpretable and generalizable algorithm is captured through training. However,
contemporary approaches to program induction are not readily applicable to a wide
variety of tasks, as they tend to be formulated as a single, all-encompassing model,
usually parameterized by neural networks. To make program induction a viable
solution in more scenarios, we propose a framework for learning parameterized
programs via search gradients using evolution strategies. This formulation departs
from traditional program induction in that it allows the programmer to impart
task-specific code to the program 'sketch', while still enjoying the benefits of
accelerated learning through end-to-end gradient-based optimization.
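To make the formulation concrete, here is a minimal, self-contained sketch, in Python with NumPy, of how search gradients from vanilla evolution strategies could fit the continuous parameters ("holes") of a hand-written program sketch. The example program, fitness function, and hyperparameters are illustrative assumptions, not the paper's implementation.

import numpy as np

def run_sketch(theta, x):
    # Hypothetical program sketch: fixed, hand-written control flow with two
    # learnable holes (a threshold and a scale) filled in by theta.
    threshold, scale = theta
    return scale * x if x > threshold else -scale * x

def fitness(theta, data):
    # Negative mean squared error of the sketch's outputs against the targets.
    return -np.mean([(run_sketch(theta, x) - y) ** 2 for x, y in data])

def search_gradient_step(theta, data, pop_size=50, sigma=0.1, lr=0.05):
    # Vanilla evolution-strategies estimate of the search gradient:
    # grad ~= (1 / (pop_size * sigma)) * sum_i f(theta + sigma * eps_i) * eps_i
    eps = np.random.randn(pop_size, theta.size)
    scores = np.array([fitness(theta + sigma * e, data) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # variance-reducing baseline
    grad = eps.T @ scores / (pop_size * sigma)
    return theta + lr * grad  # ascend the expected fitness

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = [(x, 2.0 * abs(x)) for x in rng.uniform(-1.0, 1.0, 100)]  # target: 2*|x|
    theta = np.zeros(2)  # initial threshold and scale
    for _ in range(300):
        theta = search_gradient_step(theta, data)
    print("learned threshold, scale:", theta)  # should approach roughly (0, 2)

Standardizing the sampled fitness scores acts as a simple baseline that reduces the variance of the gradient estimate; natural evolution strategies and related formulations refine this estimator further.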
Related papers
- Mathematical Programming For Adaptive Experiments [7.948144726705323]
We present a mathematical programming view of adaptive experimentation that can flexibly incorporate a wide range of objectives, constraints, and statistical procedures.
We evaluate our framework on benchmarks modeled after practical challenges such as non-stationarity, personalization, multi-objectives, and constraints.
arXiv Detail & Related papers (2024-08-08T16:29:09Z)
- Self-adaptive algorithms for quasiconvex programming and applications to machine learning [0.0]
We provide a self-adaptive step-size strategy that avoids convex line-search techniques, along with a generic approach under mild assumptions.
The proposed method is verified by preliminary results from computational examples.
To demonstrate the effectiveness of the proposed technique on large-scale problems, we apply it to machine learning experiments.
arXiv Detail & Related papers (2022-12-13T05:30:29Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy-gradient algorithm for TMDPs as a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate it on a real-world multi-objective navigation problem, with an arbitrary ordering of objectives, both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- From Perception to Programs: Regularize, Overparameterize, and Amortize [21.221244694737134]
We develop techniques for neurosymbolic program synthesis where perceptual input is first parsed by neural nets into a low-dimensional interpretable representation, which is then processed by a synthesized program.
We explore several techniques for relaxing the problem and jointly learning all modules end-to-end with gradient descent.
Collectively, this toolbox improves the stability of gradient-guided program search and suggests ways of learning both how to perceive input as discrete abstractions and how to symbolically process those abstractions as programs.
arXiv Detail & Related papers (2022-06-13T06:27:11Z)
- Learning logic programs by combining programs [24.31242130341093]
We introduce an approach where we learn small non-separable programs and combine them.
We implement our approach in a constraint-driven ILP system.
Our experiments on multiple domains, including game playing and program synthesis, show that our approach can drastically outperform existing approaches.
arXiv Detail & Related papers (2022-06-01T10:07:37Z)
- Searching for More Efficient Dynamic Programs [61.79535031840558]
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
arXiv Detail & Related papers (2021-09-14T20:52:55Z)
- SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desirable to design a universal framework for adaptive algorithms that can solve general problems.
In particular, our novel framework provides convergence support for adaptive methods under the non-convex setting.
arXiv Detail & Related papers (2021-06-15T15:16:28Z)
- Learning Differentiable Programs with Admissible Neural Heuristics [43.54820901841979]
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
arXiv Detail & Related papers (2020-07-23T16:07:39Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
- Physarum Powered Differentiable Linear Programming Layers and Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
- Regularizing Meta-Learning via Gradient Dropout [102.29924160341572]
Meta-learning models are prone to overfitting when there are not enough training tasks for the meta-learners to generalize.
We introduce a simple yet effective method to alleviate the risk of overfitting for gradient-based meta-learning.
arXiv Detail & Related papers (2020-04-13T10:47:02Z)