Learning algorithms from circuit lower bounds
- URL: http://arxiv.org/abs/2012.14095v1
- Date: Mon, 28 Dec 2020 04:47:36 GMT
- Title: Learning algorithms from circuit lower bounds
- Authors: Ján Pich
- Abstract summary: We revisit known constructions of efficient learning algorithms from various notions of constructive circuit lower bounds.
We prove that if it is possible to find efficiently, in a particular interactive way, errors of many p-size circuits attempting to solve hard problems, then p-size circuits can be PAC learned over the uniform distribution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit known constructions of efficient learning algorithms from various
notions of constructive circuit lower bounds such as distinguishers breaking
pseudorandom generators or efficient witnessing algorithms which find errors of
small circuits attempting to compute hard functions. As our main result we
prove that if it is possible to find efficiently, in a particular interactive
way, errors of many p-size circuits attempting to solve hard problems, then
p-size circuits can be PAC learned over the uniform distribution with
membership queries by circuits of subexponential size. The opposite implication
holds as well. This provides a new characterisation of learning algorithms and
extends the natural proofs barrier of Razborov and Rudich. The proof is based
on a method of exploiting Nisan-Wigderson generators introduced by
Krajíček (2010) and used to analyze complexity of circuit lower bounds
in bounded arithmetic.
An interesting consequence of known constructions of learning algorithms from
circuit lower bounds is a learning speedup of Oliveira and Santhanam (2016). We
present an alternative proof of this phenomenon and discuss its potential to
advance the program of hardness magnification.
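For context, the learning model in the main theorem is PAC learning over the uniform distribution with membership queries. The block below gives a generic, textbook-style formulation of that model (an illustrative sketch, not the paper's exact statement; the symbols $\mathcal{C}$, $L_n$, $s(n)$ and $U_n$ are introduced here only for illustration).
```latex
% Illustrative, textbook-style definition (not quoted from the paper).
% A class \mathcal{C} of Boolean functions is PAC learnable over the uniform
% distribution with membership queries by circuits of size s(n) if there are
% (randomized) oracle circuits L_n of size s(n) such that for every
% f : \{0,1\}^n \to \{0,1\} in \mathcal{C} and every \varepsilon, \delta > 0:
\[
  \Pr\Big[\, L_n^{f}(1^n,\varepsilon,\delta) \text{ outputs a circuit } h
  \text{ with } \Pr_{x \sim U_n}\big[\, h(x) \neq f(x) \,\big] \le \varepsilon
  \,\Big] \;\ge\; 1-\delta,
\]
% where oracle access to f models membership queries and U_n is the uniform
% distribution on \{0,1\}^n.  In the paper's main result, \mathcal{C} consists
% of p-size circuits and the learner's size s(n) is subexponential.
```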
Related papers
- Functional Faithfulness in the Wild: Circuit Discovery with Differentiable Computation Graph Pruning [14.639036250438517]
We introduce a comprehensive reformulation of the task known as Circuit Discovery, along with DiscoGP.
DiscoGP is a novel and effective algorithm based on differentiable masking for discovering circuits (a generic sketch of the differentiable-masking idea appears after this list).
arXiv Detail & Related papers (2024-07-04T09:42:25Z) - Finding Transformer Circuits with Edge Pruning [71.12127707678961]
We propose Edge Pruning as an effective and scalable solution to automated circuit discovery.
Our method finds circuits in GPT-2 that use less than half the number of edges compared to circuits found by previous methods.
Thanks to its efficiency, we scale Edge Pruning to CodeLlama-13B, a model over 100x the scale that prior methods operate on.
arXiv Detail & Related papers (2024-06-24T16:40:54Z) - Tractable Bounding of Counterfactual Queries by Knowledge Compilation [51.47174989680976]
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters.
We show how a single symbolic knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values.
arXiv Detail & Related papers (2023-10-05T07:10:40Z) - A Circuit Complexity Formulation of Algorithmic Information Theory [1.5483078145498086]
Inspired by Solomonoff's theory of inductive inference, we propose a prior based on circuit complexity.
We argue that an inductive bias towards simple explanations as measured by circuit complexity is appropriate for this problem.
arXiv Detail & Related papers (2023-06-25T01:30:37Z) - Gaussian Elimination versus Greedy Methods for the Synthesis of Linear
Reversible Circuits [0.0]
Linear reversible circuits represent a subclass of reversible circuits with many applications in quantum computing.
We propose a new synthesis algorithm for any linear reversible operator, using an optimized version of the Gaussian elimination algorithm and a tuned LU factorization (a minimal sketch of the basic elimination approach appears after this list).
Overall, our algorithms improve the state-of-the-art methods for specific ranges of problem sizes.
arXiv Detail & Related papers (2022-01-17T16:31:42Z) - Sublinear Least-Squares Value Iteration via Locality Sensitive Hashing [49.73889315176884]
We present the first provable Least-Squares Value Iteration (LSVI) algorithms that have runtime complexity sublinear in the number of actions.
We build the connections between the theory of approximate maximum inner product search and the regret analysis of reinforcement learning.
arXiv Detail & Related papers (2021-05-18T05:23:53Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Efficient Permutation Discovery in Causal DAGs [9.22466799504763]
We introduce an efficient algorithm for finding sparse permutations of a directed acyclic graph.
We show that our method with depth $w$ runs in $O(p^{w+3})$ time.
We also compare our algorithm to provably consistent causal structure learning algorithms, such as the PC algorithm, GES, and GSP, and show that our method achieves comparable performance with a shorter runtime.
arXiv Detail & Related papers (2020-11-06T21:56:41Z) - Investigating the Scalability and Biological Plausibility of the
Activation Relaxation Algorithm [62.997667081978825]
The Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm.
We show that the algorithm can be further simplified and made more biologically plausible by introducing a learnable set of backwards weights.
We also investigate whether another biologically implausible assumption of the original AR algorithm -- the frozen feedforward pass -- can be relaxed without damaging performance.
arXiv Detail & Related papers (2020-10-13T08:02:38Z) - ACSS-q: Algorithmic complexity for short strings via quantum accelerated
approach [1.4873907857806357]
We present a quantum circuit for estimating algorithmic complexity using the coding theorem method.
As a use-case, an application framework for protein-protein interaction based on algorithmic complexity is proposed.
arXiv Detail & Related papers (2020-09-18T14:41:41Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
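The DiscoGP and Edge Pruning entries above both rest on learning a differentiable mask over model components. The toy sketch below illustrates that general idea only; it is not the code or exact objective of either paper, and the frozen model, the sparsity weight `lam` and the 0.5 gate threshold are hypothetical choices.
```python
# Generic toy sketch of differentiable masking for circuit discovery
# (illustrative only; not DiscoGP or Edge Pruning as published).
# Each weight of a small frozen model gets a learnable gate in (0, 1);
# the gates are trained to preserve the model's outputs while an L1-style
# penalty pushes most of them towards zero.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A small frozen "model" whose behaviour we want to explain (hypothetical).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
for p in model.parameters():
    p.requires_grad_(False)

# One logit per weight entry; sigmoid(logit) is the soft gate.
logits = [nn.Parameter(torch.full_like(p, 2.0)) for p in model.parameters()]
opt = torch.optim.Adam(logits, lr=0.05)

def masked_forward(x):
    """Run the model with every parameter multiplied by its soft gate."""
    h = x
    gated = iter(zip(model.parameters(), logits))
    for layer in model:
        if isinstance(layer, nn.Linear):
            (w, lw), (b, lb) = next(gated), next(gated)
            h = F.linear(h, w * torch.sigmoid(lw), b * torch.sigmoid(lb))
        else:
            h = layer(h)
    return h

x = torch.randn(256, 8)
with torch.no_grad():
    target = model(x)                 # behaviour the mask must preserve

lam = 1e-3                            # sparsity strength (hypothetical value)
for step in range(500):
    opt.zero_grad()
    faithfulness = ((masked_forward(x) - target) ** 2).mean()
    sparsity = sum(torch.sigmoid(l).mean() for l in logits)
    (faithfulness + lam * sparsity).backward()
    opt.step()

kept = sum((torch.sigmoid(l) > 0.5).sum().item() for l in logits)
total = sum(l.numel() for l in logits)
print(f"kept {kept}/{total} weights in the discovered 'circuit'")
```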
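The linear reversible circuits entry refers to synthesis by Gaussian elimination; the minimal sketch below shows only the textbook baseline over GF(2) (reduce the matrix to the identity with row XORs and read the row operations off as CNOT gates). The paper's optimized elimination and tuned LU factorization are not reproduced here.
```python
# Minimal sketch: synthesize a CNOT-only circuit for an invertible n x n
# matrix A over GF(2) by plain Gauss-Jordan elimination (baseline method,
# not the optimized algorithm of the paper).

def synthesize_cnot_circuit(A):
    """Return a list of (control, target) CNOTs implementing x -> A x over GF(2).

    A is a list of n rows of 0/1 entries and must be invertible over GF(2).
    The row operation "row t ^= row c" is the elementary matrix I + e_t e_c^T,
    i.e. the gate CNOT(control=c, target=t).
    """
    n = len(A)
    M = [row[:] for row in A]           # work on a copy
    ops = []                            # row ops that reduce M to the identity
    for col in range(n):
        # Bring a 1 onto the diagonal (a pivot exists because A is invertible).
        pivot = next(r for r in range(col, n) if M[r][col] == 1)
        if pivot != col:
            for k in range(n):
                M[col][k] ^= M[pivot][k]
            ops.append((pivot, col))    # CNOT(control=pivot, target=col)
        # Eliminate every other 1 in this column (above and below).
        for r in range(n):
            if r != col and M[r][col] == 1:
                for k in range(n):
                    M[r][k] ^= M[col][k]
                ops.append((col, r))    # CNOT(control=col, target=r)
    # ops reduce A to I by left-multiplication, so A itself is realised by the
    # same self-inverse gates applied in reverse order.
    return list(reversed(ops))

# Example: a 3-wire operator mapping (x0, x1, x2) to (x0^x1, x1^x2, x2).
A = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
print(synthesize_cnot_circuit(A))       # e.g. [(2, 1), (2, 0), (1, 0)]
```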
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.