Efficient reconstruction of depth three circuits with top fan-in two
- URL: http://arxiv.org/abs/2103.07445v1
- Date: Fri, 12 Mar 2021 18:19:34 GMT
- Title: Efficient reconstruction of depth three circuits with top fan-in two
- Authors: Gaurav Sinha
- Abstract summary: We develop efficient randomized algorithms to solve the black-box reconstruction problem for depth three circuits with top fan-in two over finite fields.
Ours is the first black-box reconstruction algorithm for this circuit class that runs in time polynomial in $\log |\mathbb{F}|$.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop efficient randomized algorithms to solve the black-box
reconstruction problem for polynomials over finite fields, computable by depth
three arithmetic circuits with alternating addition/multiplication gates, such
that the output gate is an addition gate with in-degree two. These circuits
compute polynomials of the form $G\times(T_1 + T_2)$, where $G,T_1,T_2$ are
products of affine forms, and the polynomials $T_1,T_2$ have no common factors.
The rank of such a circuit is defined as the dimension of the vector space
spanned by all affine factors of $T_1$ and $T_2$. For any polynomial $f$
computable by such a circuit, $rank(f)$ is defined to be the minimum rank of
any such circuit computing it.
Our work develops randomized reconstruction algorithms which take as input
black-box access to a polynomial $f$ (over finite field $\mathbb{F}$),
computable by such a circuit. Here are the results.
1 [Low rank]: When $5\leq rank(f) = O(\log^3 d)$, it runs in time
$(nd^{\log^3d}\log |\mathbb{F}|)^{O(1)}$, and, with high probability, outputs a
depth three circuit computing $f$, with top addition gate having in-degree
$\leq d^{rank(f)}$.
2 [High rank]: When $rank(f) = \Omega(\log^3 d)$, it runs in time $(nd\log
|\mathbb{F}|)^{O(1)}$, and, with high probability, outputs a depth three
circuit computing $f$, with top addition gate having in-degree two.
Ours is the first black-box reconstruction algorithm for this circuit class
that runs in time polynomial in $\log |\mathbb{F}|$. This problem was posed as
an open problem in [GKL12] (STOC 2012).
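The rank quantity above is just the linear-algebraic rank of the coefficient vectors of the affine factors. A minimal sketch (an illustrative representation in which the circuit's affine factors are given explicitly as coefficient tuples over a small prime field, with the constant term kept as an extra coordinate; this is not the paper's algorithm, which only has black-box access to $f$):

```python
# Sketch: computing the rank of a depth-3 circuit G*(T_1 + T_2) over F_p,
# given its affine factors explicitly. Each affine form
# a_0 + a_1*x_1 + ... + a_n*x_n is stored as (a_0, a_1, ..., a_n);
# rank(f) is then the dimension of the span of these vectors over F_p.

def rank_mod_p(vectors, p):
    """Row rank over F_p via Gaussian elimination (p prime)."""
    rows = [list(v) for v in vectors]
    n_cols = len(rows[0]) if rows else 0
    rank = 0
    for col in range(n_cols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], -1, p)  # modular inverse (Python >= 3.8)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                c = rows[r][col]
                rows[r] = [(x - c * y) % p for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

# Affine factors of T_1 = (x_1+x_2+1)(x_2+x_3) and
# T_2 = (x_1+x_3+2)(x_1+2*x_2+3*x_3), as vectors (a_0, a_1, a_2, a_3):
factors = [(1, 1, 1, 0), (0, 0, 1, 1), (2, 1, 0, 1), (0, 1, 2, 3)]
print(rank_mod_p(factors, 5))  # prints 4
```

Here the four vectors are linearly independent over $\mathbb{F}_5$, so this circuit has rank 4.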
Related papers
- Learning Hierarchical Polynomials with Three-Layer Neural Networks [56.71223169861528]
We study the problem of learning hierarchical functions over the standard Gaussian distribution with three-layer neural networks.
For a large subclass of degree-$k$ polynomials $p$, a three-layer neural network trained via layerwise gradient descent on the square loss learns the target $h$ up to vanishing test error.
This work demonstrates the ability of three-layer neural networks to learn complex features and as a result, learn a broad class of hierarchical functions.
arXiv Detail & Related papers (2023-11-23T02:19:32Z) - On the Pauli Spectrum of QAC0 [2.3436632098950456]
We conjecture that the Pauli spectrum of $\mathsf{QAC}^0$ satisfies low-degree concentration.
We obtain new circuit lower bounds and learning results as applications.
arXiv Detail & Related papers (2023-11-16T07:25:06Z) - Classical simulation of peaked shallow quantum circuits [2.6089354079273512]
We describe an algorithm with quasipolynomial runtime $n^{O(\log n)}$ that samples from the output distribution of a peaked constant-depth circuit.
Our algorithms can be used to estimate output probabilities of shallow circuits to within a given inverse-polynomial additive error.
arXiv Detail & Related papers (2023-09-15T14:01:13Z) - Improved Synthesis of Toffoli-Hadamard Circuits [1.7205106391379026]
We show that a technique introduced by Kliuchnikov in 2013 for Clifford+$T$ circuits can be straightforwardly adapted to Toffoli-Hadamard circuits.
We also present an alternative synthesis method of similarly improved cost, but whose application is restricted to circuits on no more than three qubits.
arXiv Detail & Related papers (2023-05-18T21:02:20Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials in the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
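The planted model in this summary can be written down directly. A rough numpy sketch (the sizes, the 1.5x scaling of the planted component, and the tensor power iteration used to recover it are illustrative assumptions; the paper analyzes low-degree polynomial estimators, not power iteration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 5          # dimension and number of components (illustrative sizes)

# Random order-3 tensor T = sum_i c_i * a_i (x) a_i (x) a_i, where the first
# component is slightly larger in norm than the rest.
a = rng.standard_normal((r, n))
a /= np.linalg.norm(a, axis=1, keepdims=True)    # unit-norm components
c = np.ones(r)
c[0] = 1.5                                       # planted, slightly larger component
T = np.einsum("i,ij,ik,il->jkl", c, a, a, a)

# A naive estimator of the planted direction: a few rounds of tensor power
# iteration, mapping v to T(I, v, v) and renormalizing.
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(50):
    v = np.einsum("jkl,k,l->j", T, v, v)
    v /= np.linalg.norm(v)

correlation = abs(v @ a[0])
print(correlation)  # near 1 when the planted component is recovered
```

Power iteration is used here only as a simple baseline for experimenting with the $r$ versus $n^{3/2}$ regimes.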
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - The Approximate Degree of DNF and CNF Formulas [95.94432031144716]
For every $\delta>0$, we construct CNF and DNF formulas of polynomial size with approximate degree $\Omega(n^{1-\delta})$, essentially matching the trivial upper bound of $n$.
We show that for every $\delta>0$, these models require $\Omega(n^{1-\delta})$, $\Omega((n/4^k k^2)^{1-\delta})$, and $\Omega((n/4^k k^2)^{1-\delta})$, respectively.
arXiv Detail & Related papers (2022-09-04T10:01:39Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\,\mathrm{OPT} + \epsilon$ with high probability.
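The learning setup in this summary, gradient descent on the square loss of a single neuron, can be sketched in a few lines (the ReLU activation, Gaussian inputs, noiseless labels, and moment-based warm start are illustrative assumptions; the paper's adversarial label noise and approximation guarantees are not modeled here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, dim = 2000, 5

# Ground-truth single neuron y = sigma(w* . x) with sigma = ReLU
# (a monotone activation) and Gaussian inputs; labels are noiseless here.
w_star = rng.standard_normal(dim)
w_star /= np.linalg.norm(w_star)
X = rng.standard_normal((n_samples, dim))
y = np.maximum(X @ w_star, 0.0)

# Gradient descent on the empirical loss F(w) = mean (sigma(w.x) - y)^2,
# from a moment-based warm start (E[y x] = w*/2 for this Gaussian model).
w = X.T @ y / n_samples
for _ in range(500):
    active = (X @ w > 0)
    pred = np.maximum(X @ w, 0.0)
    grad = (2.0 / n_samples) * (X.T @ ((pred - y) * active))
    w -= 0.1 * grad

final_loss = float(np.mean((np.maximum(X @ w, 0.0) - y) ** 2))
print(final_loss)  # small final empirical loss
```

With clean labels, gradient descent from this warm start drives the empirical loss toward zero; the paper's contribution is showing what survives when an adversary corrupts the labels.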
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Low-degree learning and the metric entropy of polynomials [49.1574468325115]
We prove that any (deterministic or randomized) algorithm which learns $\mathscr{F}_{n,d}$ with $L_2$-accuracy $\varepsilon$ requires at least $\Omega((1-\sqrt{\varepsilon})\,2^d\log n)$ queries, and that the metric entropy of $\mathscr{F}_{n,d}$ satisfies the two-sided estimate $c(1-\varepsilon)2^d\log n \leq \log \mathsf{M}(\mathscr{F}_{n,d},\|\cdot\|_{L_2},\varepsilon) \leq 2^{Cd}\log n$.
arXiv Detail & Related papers (2022-03-17T23:52:08Z) - Reconstruction Algorithms for Low-Rank Tensors and Depth-3 Multilinear
Circuits [4.129484350382891]
We give new and efficient black-box reconstruction algorithms for some classes of depth-$3$ arithmetic circuits.
Our algorithm works over all fields of characteristic 0 or large enough characteristic.
arXiv Detail & Related papers (2021-05-04T20:45:07Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - Learning sums of powers of low-degree polynomials in the non-degenerate
case [2.6109033135086777]
We give a learning algorithm for an arithmetic circuit model, provided certain non-degeneracy conditions hold.
Our algorithm is based on a scheme for obtaining a learning algorithm for an arithmetic circuit model from a lower bound for the same model.
arXiv Detail & Related papers (2020-04-15T06:18:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.