Understanding Boolean Function Learnability on Deep Neural Networks
- URL: http://arxiv.org/abs/2009.05908v2
- Date: Wed, 16 Jun 2021 19:50:28 GMT
- Title: Understanding Boolean Function Learnability on Deep Neural Networks
- Authors: Anderson R. Tavares, Pedro Avelar, João M. Flach, Marcio Nicolau,
Luis C. Lamb, Moshe Vardi
- Abstract summary: Computational learning theory states that many classes of boolean formulas are learnable in polynomial time.
This paper addresses the understudied subject of how, in practice, such formulas can be learned by deep neural networks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational learning theory states that many classes of boolean formulas
are learnable in polynomial time. This paper addresses the understudied subject
of how, in practice, such formulas can be learned by deep neural networks.
Specifically, we analyse boolean formulas associated with the decision version
of combinatorial optimisation problems, model sampling benchmarks, and random
3-CNFs with varying degrees of constrainedness. Our extensive experiments
indicate that: (i) regardless of the combinatorial optimisation problem,
relatively small and shallow neural networks are very good approximators of the
associated formulas; (ii) smaller formulas seem harder to learn, possibly due
to the fewer positive (satisfying) examples available; and (iii) interestingly,
underconstrained 3-CNF formulas are more challenging to learn than
overconstrained ones. Source code and relevant datasets are publicly available
(https://github.com/machine-reasoning-ufrgs/mlbf).
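As a concrete illustration of the experimental setup described above, here is a minimal sketch that labels random assignments by whether they satisfy a random 3-CNF and fits a small, shallow MLP. This is not the authors' pipeline (see the repository linked above); the clause count, network size, and sampling scheme are illustrative assumptions.
```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_vars, n_clauses = 12, 30  # the clause/variable ratio controls constrainedness

# Each clause: 3 distinct variable indices and a 0/1 negation flag per literal.
clauses = [(rng.choice(n_vars, 3, replace=False), rng.integers(0, 2, 3))
           for _ in range(n_clauses)]

def satisfies(x):
    # A clause holds if any of its literals is true under assignment x.
    return all(any(x[v] != neg for v, neg in zip(vs, negs))
               for vs, negs in clauses)

X = rng.integers(0, 2, size=(20000, n_vars))
y = np.array([satisfies(x) for x in X])

# A relatively small, shallow network, echoing finding (i) of the paper.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300).fit(X, y)
print("training accuracy:", clf.score(X, y))
```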
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
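For intuition, here is a minimal sketch of the classic Sinkhorn iteration that the paper's satisfiability layer extends: alternately normalise rows and columns so a positive matrix approaches prescribed marginals. The function name, marginals, and iteration count are illustrative, not the paper's implementation.
```python
import numpy as np

def sinkhorn(scores, row_marginals, col_marginals, n_iters=50):
    """Push exp(scores) towards a matrix with the given row/column sums."""
    P = np.exp(scores)
    for _ in range(n_iters):
        P *= (row_marginals / P.sum(axis=1))[:, None]  # match row sums
        P *= (col_marginals / P.sum(axis=0))[None, :]  # match column sums
    return P

P = sinkhorn(np.random.randn(4, 4), np.ones(4), np.ones(4))
print(P.sum(axis=0), P.sum(axis=1))  # both close to the all-ones marginals
```
Because every step is differentiable in `scores`, the iteration can sit inside a network as a layer, which is the idea such satisfiability layers build on.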
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than prior accounts suggest: even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z) - Efficient, probabilistic analysis of combinatorial neural codes [0.0]
Neural networks encode inputs in the form of combinations of individual neurons' activities.
These neural codes present a computational challenge due to their high dimensionality and often large volumes of data.
We take methods previously applied only to small examples and scale them to large neural codes generated by experiments.
arXiv Detail & Related papers (2022-10-19T11:58:26Z) - Benchmarking Compositionality with Formal Languages [64.09083307778951]
We investigate whether large neural models in NLP can acquire the ability to combine primitive concepts into larger, novel combinations while learning from data.
By randomly sampling over many transducers, we explore which of their properties contribute to learnability of a compositional relation by a neural network.
We find that the models either learn the relations completely or not at all. The key factor is transition coverage, which sets a soft learnability limit at 400 examples per transition.
arXiv Detail & Related papers (2022-08-17T10:03:18Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can separate two classes of data with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
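A hedged toy illustration of that setting: freeze random Gaussian weights and uniform biases, and fit only a linear separator on top of the ReLU features. The widths, the data, and the logistic-regression readout are my assumptions, not the paper's construction.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)  # not linearly separable

width = 512
W = rng.normal(size=(2, width))          # standard Gaussian weights
b = rng.uniform(-1.0, 1.0, size=width)   # uniformly distributed biases
H = np.maximum(X @ W + b, 0.0)           # fixed random ReLU features

clf = LogisticRegression(max_iter=2000).fit(H, y)
print("accuracy with random features:", clf.score(H, y))
```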
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Proof of the Theory-to-Practice Gap in Deep Learning via Sampling
Complexity bounds for Neural Network Approximation Spaces [5.863264019032882]
We study the computational complexity of (deterministic or randomized) algorithms based on point samples for approximating or integrating functions.
One of the most important problems in this field concerns the question of whether it is possible to realize theoretically provable neural network approximation rates.
arXiv Detail & Related papers (2021-04-06T18:55:20Z) - PAC-learning gains of Turing machines over circuits and neural networks [1.4502611532302039]
We study the potential gains in sample efficiency that the principle of minimum description length can bring.
We use Turing machines to represent universal models and circuits.
We highlight close relationships between classical open problems in Circuit Complexity and the tightness of these gains.
arXiv Detail & Related papers (2021-03-23T17:03:10Z)
- ODEN: A Framework to Solve Ordinary Differential Equations using Artificial Neural Networks [0.0]
We propose a specific loss function, which does not require knowledge of the exact solution, to evaluate neural networks' performance.
Neural networks are shown to be proficient at approximating continuous solutions within their training domains.
A user-friendly and adaptable open-source code (ODE$\mathcal{N}$) is provided on GitHub.
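To make that kind of loss concrete, here is a minimal sketch in the spirit of such residual losses: penalise the ODE residual at sampled points, so no exact solution is needed. The toy equation y' = -y with y(0) = 1, the trial-solution form, and all hyperparameters are illustrative assumptions, not ODEN's exact code.
```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(2000):
    x = (2.0 * torch.rand(64, 1)).requires_grad_(True)  # training domain [0, 2]
    y = 1.0 + x * net(x)        # trial solution: y(0) = 1 holds by construction
    dy, = torch.autograd.grad(y.sum(), x, create_graph=True)
    loss = ((dy + y) ** 2).mean()   # residual of y' = -y; no exact solution used
    opt.zero_grad(); loss.backward(); opt.step()

x1 = torch.tensor([[1.0]])
print((1.0 + x1 * net(x1)).item())  # should approach exp(-1) ~ 0.3679
```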
arXiv Detail & Related papers (2020-05-28T15:34:10Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product Belief Propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
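As a rough illustration of the underlying primitive, here is a toy min-sum (negative-log max-product) sweep on a chain of labels. The cost matrices, the fixed small sweep count ("truncated"), and the simplification to forward-only messages are my assumptions, not the BP-Layer itself.
```python
import numpy as np

def max_product_chain(unary, pairwise, n_sweeps=3):
    """unary: (T, K) label costs; pairwise: (K, K) transition costs (min-sum)."""
    T, K = unary.shape
    msg = np.zeros((T, K))  # message arriving at each node from its left
    for _ in range(n_sweeps):  # truncated: a fixed, small number of sweeps
        for t in range(1, T):
            # msg[t][i] = min_j (unary[t-1, j] + msg[t-1, j] + pairwise[j, i])
            msg[t] = np.min(unary[t - 1] + msg[t - 1] + pairwise.T, axis=1)
    beliefs = unary + msg
    return beliefs.argmin(axis=1)  # per-node labels from (forward) beliefs

unary = np.random.rand(6, 3)
pairwise = 0.5 * (1 - np.eye(3))  # Potts-style smoothness cost
print(max_product_chain(unary, pairwise))
```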
arXiv Detail & Related papers (2020-03-13T13:11:35Z)