Incorporating domain knowledge into neural-guided search
- URL: http://arxiv.org/abs/2107.09182v1
- Date: Mon, 19 Jul 2021 22:34:43 GMT
- Title: Incorporating domain knowledge into neural-guided search
- Authors: Brenden K. Petersen, Claudio P. Santiago, Mikel Landajuela Larma
- Abstract summary: AutoML problems involve optimizing discrete objects under a black-box reward.
Neural-guided search provides a flexible means of searching these spaces using an autoregressive recurrent neural network.
We formalize a framework for incorporating such in situ priors and constraints into neural-guided search.
- Score: 3.1542695050861544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many AutoML problems involve optimizing discrete objects under a black-box
reward. Neural-guided search provides a flexible means of searching these
combinatorial spaces using an autoregressive recurrent neural network. A major
benefit of this approach is that it builds up objects sequentially; this provides
an opportunity to incorporate domain knowledge into the search by directly
modifying the logits emitted during sampling. In this work, we formalize a
framework for incorporating such in situ priors and constraints into
neural-guided search, and provide sufficient conditions for enforcing
constraints. We integrate several priors and constraints from existing works
into this framework, propose several new ones, and demonstrate their efficacy
in informing the task of symbolic regression.
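As a concrete illustration of the mechanism the abstract describes, here is a minimal sketch (not the authors' released code) of applying an in situ prior and a hard constraint to the logits at one step of autoregressive sampling. The toy vocabulary, the prior values, and the invalid-token mask are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_in_situ(logits, log_prior, invalid_mask):
    """Add a log-prior to the logits, then enforce a hard constraint by
    sending the logits of invalid tokens to -inf (probability exactly zero)."""
    adjusted = logits + log_prior
    adjusted[invalid_mask] = -np.inf
    return adjusted

def sample_token(logits):
    """Sample from the softmax of the adjusted logits."""
    z = logits - logits.max()   # numerical stability; exp(-inf) is 0
    p = np.exp(z)
    return rng.choice(len(p), p=p / p.sum())

# Toy vocabulary for symbolic regression: operators and terminals.
vocab = ["add", "mul", "sin", "x", "const"]
logits = rng.normal(size=len(vocab))            # stand-in for the RNN's output
log_prior = np.log([1.0, 1.0, 0.5, 1.0, 1.0])   # e.g. softly discourage "sin"
invalid = np.array([False, False, False, False, True])  # e.g. forbid "const" here

token = sample_token(apply_in_situ(logits, log_prior, invalid))
print(vocab[token])
```

The key property is that a logit of -inf yields a token probability of exactly zero after the softmax, so a constraint expressed this way is enforced by construction rather than by rejection sampling.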
Related papers
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Learning with Algorithmic Supervision via Continuous Relaxations [19.437400671428737]
We propose an approach that allows algorithms to be integrated into end-to-end trainable neural network architectures.
To obtain meaningful gradients, each relevant variable is perturbed via logistic distributions.
We evaluate the proposed continuous relaxation model on four challenging tasks and show that it can keep up with relaxations specifically designed for each individual task.
arXiv Detail & Related papers (2021-10-11T23:52:42Z)
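A small self-contained illustration (mine, not the paper's code) of the logistic-perturbation idea in the entry above: the expectation of a hard comparison under logistic noise is exactly the sigmoid, which turns a non-differentiable step into a smooth surrogate with meaningful gradients. The sample count and test value are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxed_step(x, n_samples=100_000):
    """Monte Carlo estimate of E[1(x + L > 0)] with L ~ Logistic(0, 1).
    In expectation this equals sigmoid(x), a differentiable relaxation
    of the hard comparison x > 0."""
    noise = rng.logistic(size=n_samples)
    return np.mean(x + noise > 0)

x = 0.7
print(relaxed_step(x))              # approx. 0.668
print(1.0 / (1.0 + np.exp(-x)))     # sigmoid(0.7) = 0.668...
```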
- Sensitive Samples Revisited: Detecting Neural Network Attacks Using Constraint Solvers [0.0]
Neural Networks are used in numerous security- and safety-relevant domains.
They are a popular target of attacks that subvert their classification capabilities.
In this paper we offer an alternative, using symbolic constraint solvers.
arXiv Detail & Related papers (2021-09-07T01:34:02Z)
- Improving exploration in policy gradient search: Application to symbolic optimization [6.344988093245026]
Many machine learning strategies leverage neural networks to search large spaces of mathematical symbols.
In contrast to traditional evolutionary approaches, using a neural network at the core of the search allows learning higher-level symbolic patterns.
We show that these techniques can improve the performance, increase sample efficiency, and lower the complexity of solutions for the task of symbolic regression.
arXiv Detail & Related papers (2021-07-19T21:11:07Z)
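The entry above builds on policy gradient training of an autoregressive sampler. Below is a generic REINFORCE sketch with a mean-reward baseline over toy token sequences; it shows only the shape of the training loop, not the paper's specific exploration techniques, and the reward function is a stand-in.

```python
import torch

torch.manual_seed(0)
vocab_size, hidden = 5, 32
rnn = torch.nn.GRUCell(vocab_size, hidden)
head = torch.nn.Linear(hidden, vocab_size)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)

def sample_expression(max_len=8):
    """Autoregressively sample a token sequence, accumulating log-probs."""
    h = torch.zeros(1, hidden)
    x = torch.zeros(1, vocab_size)
    log_probs, tokens = [], []
    for _ in range(max_len):
        h = rnn(x, h)
        dist = torch.distributions.Categorical(logits=head(h))
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        tokens.append(int(tok))
        x = torch.nn.functional.one_hot(tok, vocab_size).float()
    return tokens, torch.stack(log_probs).sum()

def reward(tokens):
    """Stand-in black-box reward; a real task would score the sampled expression."""
    return len(set(tokens)) / vocab_size

# One REINFORCE step: push up the likelihood of above-baseline sequences.
batch = [sample_expression() for _ in range(16)]
rewards = torch.tensor([reward(toks) for toks, _ in batch])
baseline = rewards.mean()
loss = -torch.stack([(r - baseline) * lp for (_, lp), r in zip(batch, rewards)]).mean()
opt.zero_grad()
loss.backward()
opt.step()
```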
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
- Sequential Transfer in Reinforcement Learning with a Generative Model [48.40219742217783]
We show how to reduce the sample complexity for learning new tasks by transferring knowledge from previously-solved ones.
We derive PAC bounds on its sample complexity which clearly demonstrate the benefits of using this kind of prior knowledge.
We empirically verify our theoretical findings in simple simulated domains.
arXiv Detail & Related papers (2020-07-01T19:53:35Z)
- Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation [10.324402925019946]
A major obstacle to the widespread adoption of neural retrieval models is that they require large supervised training sets to surpass traditional term-based techniques.
In this paper, we propose an approach to zero-shot learning for passage retrieval that uses synthetic question generation to close this gap.
arXiv Detail & Related papers (2020-04-29T22:21:31Z)
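A hedged sketch of the pipeline shape described in the entry above: generate synthetic questions from unlabeled in-domain passages, then use the (question, passage) pairs as supervision for a neural retriever. The template generator below is a placeholder for a real learned model; everything here is an illustrative assumption, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class TrainingPair:
    question: str
    passage: str

def generate_question(passage: str) -> str:
    """Placeholder for a learned question generator (e.g. a seq2seq model
    fine-tuned to map passages to questions); a trivial template stands in."""
    return f"Which passage discusses: {passage[:40]}?"

# Unlabeled passages from the target domain (illustrative).
passages = [
    "Neural-guided search optimizes discrete objects under a black-box reward.",
    "Synthetic questions can supervise a retriever without human labels.",
]

# Pair each passage with a generated question; these synthetic pairs then
# serve as the supervised training set for any neural passage retriever.
synthetic = [TrainingPair(generate_question(p), p) for p in passages]
for pair in synthetic:
    print(pair.question, "->", pair.passage)
```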
- Local Propagation in Constraint-based Neural Network [77.37829055999238]
We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
arXiv Detail & Related papers (2020-02-18T16:47:38Z)