Reconstruction Algorithms for Low-Rank Tensors and Depth-3 Multilinear
Circuits
- URL: http://arxiv.org/abs/2105.01751v1
- Date: Tue, 4 May 2021 20:45:07 GMT
- Title: Reconstruction Algorithms for Low-Rank Tensors and Depth-3 Multilinear
Circuits
- Authors: Vishwas Bhargava, Shubhangi Saraf, Ilya Volkovich
- Abstract summary: We give new and efficient black-box reconstruction algorithms for some classes of depth-$3$ arithmetic circuits.
Our algorithm works over all fields of characteristic 0 or large enough characteristic.
- Score: 4.129484350382891
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We give new and efficient black-box reconstruction algorithms for some
classes of depth-$3$ arithmetic circuits. As a consequence, we obtain the first
efficient algorithm for computing the tensor rank and for finding the optimal
tensor decomposition as a sum of rank-one tensors when the input is a
constant-rank tensor. More specifically, we provide efficient learning
algorithms that run in randomized polynomial time over general fields and in
deterministic polynomial time over the reals and the complex numbers for the
following classes:
(1) Set-multilinear depth-$3$ circuits of constant top fan-in
($\Sigma\Pi\Sigma\{\sqcup_j X_j\}(k)$ circuits). As a consequence of our
algorithm, we obtain the first polynomial time algorithm for tensor rank
computation and optimal tensor decomposition of constant-rank tensors. This
result holds for $d$-dimensional tensors for any $d$, but is interesting even
for $d=3$.
(2) Sums of powers of constantly many linear forms ($\Sigma\wedge\Sigma$
circuits). As a consequence, we obtain the first polynomial-time algorithm for
tensor rank computation and optimal tensor decomposition of constant-rank
symmetric tensors.
(3) Multilinear depth-3 circuits of constant top fan-in (multilinear
$\Sigma\Pi\Sigma(k)$ circuits). Our algorithm works over all fields of
characteristic 0 or large enough characteristic. Prior to our work, the only
efficient algorithms known were over polynomially-sized finite fields (see
Karnin-Shpilka '09).
Prior to our work, the only polynomial-time or even subexponential-time
algorithms known (deterministic or randomized) for subclasses of
$\Sigma\Pi\Sigma(k)$ circuits that also work over large/infinite fields were
for the setting when the top fan-in $k$ is at most $2$ (see Sinha '16 and
Sinha '20).
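For concreteness, below is a minimal NumPy sketch of the objects in items (1) and (2): an order-3 tensor written as a sum of rank-one tensors, and a symmetric tensor arising from a sum of cubes of linear forms. All variable names are illustrative; the paper's algorithms solve the much harder inverse problem of recovering such minimal decompositions from black-box access, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 5  # constant rank k, dimension n

# Item (1): a tensor of CP rank <= k is a sum of k rank-one tensors,
# i.e. T = sum_r a_r (x) b_r (x) c_r.
A, B, C = (rng.standard_normal((k, n)) for _ in range(3))
T = sum(np.einsum("i,j,k->ijk", A[r], B[r], C[r]) for r in range(k))

# Item (2): a sum of cubes of linear forms sum_r <w_r, x>^3 corresponds
# to the symmetric tensor S = sum_r w_r (x) w_r (x) w_r.
W = rng.standard_normal((k, n))
S = sum(np.einsum("i,j,k->ijk", W[r], W[r], W[r]) for r in range(k))

# Sanity check: evaluating the cubic form via S matches the sum of cubes.
x = rng.standard_normal(n)
assert np.isclose(np.einsum("ijk,i,j,k->", S, x, x, x),
                  sum((W[r] @ x) ** 3 for r in range(k)))
```

The reconstruction results above take the opposite direction: given only evaluation access to such a $T$ or $S$ of constant rank, they recover an optimal decomposition in polynomial time.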
Related papers
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials of the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z)
- Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
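For context on the format being approximated, here is a minimal sketch of the tensor-train representation itself, with illustrative core shapes and helper names (`tt_entry` and `tt_full` are not from the paper): $q$ small cores encode an order-$q$ tensor, and each entry is a short product of matrices.

```python
import numpy as np

# Tensor-train (TT) format: an order-q tensor of shape (n, ..., n) is encoded
# by cores G_i of shape (r_{i-1}, n, r_i) with r_0 = r_q = 1. The entry
# T[j1, ..., jq] is the matrix product G_1[:, j1, :] @ ... @ G_q[:, jq, :].
def tt_entry(cores, index):
    v = np.ones((1, 1))
    for core, j in zip(cores, index):
        v = v @ core[:, j, :]
    return v[0, 0]

def tt_full(cores):
    # Contract all cores into the dense tensor (exponential size; demo only).
    t = cores[0]
    for core in cores[1:]:
        t = np.tensordot(t, core, axes=([-1], [0]))
    return t.squeeze(axis=(0, -1))

rng = np.random.default_rng(1)
q, n, r = 3, 4, 2
ranks = [1, r, r, 1]
cores = [rng.standard_normal((ranks[i], n, ranks[i + 1])) for i in range(q)]
T = tt_full(cores)
assert np.isclose(T[1, 2, 3], tt_entry(cores, (1, 2, 3)))
```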
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
- Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\,\epsilon$ with high probability.
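A minimal sketch of this setup in the noiseless special case, assuming the activation $\sigma$ is a ReLU and $F$ is the empirical square loss; it runs plain (sub)gradient descent as an illustration of the objective, not the paper's noise-robust analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, lr, steps = 10, 2000, 0.1, 500

# Labels from a target neuron x -> relu(w* . x); F(w) is the empirical
# square loss of the hypothesis neuron. (Noiseless case for illustration;
# the paper's setting additionally allows adversarial label noise.)
w_star = rng.standard_normal(d)
X = rng.standard_normal((m, d))
y = np.maximum(X @ w_star, 0.0)

w = 0.1 * rng.standard_normal(d)
for _ in range(steps):
    pred = np.maximum(X @ w, 0.0)
    # Subgradient of F(w) = (1/m) * sum_i (relu(w . x_i) - y_i)^2.
    grad = (2.0 / m) * (X.T @ ((pred - y) * (X @ w > 0)))
    w -= lr * grad

print("final square loss:", np.mean((np.maximum(X @ w, 0.0) - y) ** 2))
```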
arXiv Detail & Related papers (2022-06-17T17:55:43Z)
- Learning Polynomial Transformations [41.30404402331856]
We consider the problem of learning high-dimensional quadratic transformations of Gaussians.
Our results extend to any invariant input distribution, not just the Gaussian.
We also give the first polynomial-time algorithms with provable guarantees for tensor ring decomposition.
arXiv Detail & Related papers (2022-04-08T17:59:31Z)
- A Robust Spectral Algorithm for Overcomplete Tensor Decomposition [10.742675209112623]
We give a spectral algorithm for decomposing overcomplete order-4 tensors.
Our algorithm is robust to adversarial perturbations of bounded spectral norm.
Our algorithm avoids semidefinite programming and may be implemented as a series of basic linear-algebraic operations.
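As a flavor of what a series of basic linear-algebraic operations can achieve on order-4 tensors, here is a textbook flattening-based recovery in the easy orthogonal, undercomplete case with distinct weights; this generic warm-up is only an illustrative stand-in, not the paper's robust overcomplete algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 6, 2

# T = sum_i w_i * a_i (x) a_i (x) a_i (x) a_i with orthonormal a_i and
# distinct weights w_i (the easy, non-overcomplete case).
A = np.linalg.qr(rng.standard_normal((d, r)))[0].T  # r orthonormal rows
weights = np.array([1.0, 2.0])
T = sum(w * np.einsum("i,j,k,l->ijkl", a, a, a, a) for w, a in zip(weights, A))

# Flatten to a d^2 x d^2 matrix; its nonzero eigenvectors are vec(a_i a_i^T),
# so reshaping one to d x d and taking its top singular vector recovers a_i.
M = T.reshape(d * d, d * d)
eigvals, eigvecs = np.linalg.eigh(M)
for v in eigvecs[:, -r:].T:  # eigenvectors of the r largest eigenvalues
    a_hat = np.linalg.svd(v.reshape(d, d))[0][:, 0]
    assert np.max(np.abs(A @ a_hat)) > 0.99  # matches some a_i up to sign

print("recovered all planted components up to sign")
```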
arXiv Detail & Related papers (2022-03-05T17:25:37Z)
- Fast algorithm for overcomplete order-3 tensor decomposition [7.638467133689933]
We develop the first fast spectral algorithm to decompose a random third-order tensor over $\mathbb{R}^d$ of rank up to $O(d^{3/2}/\mathrm{polylog}(d))$.
Our algorithm can recover all components in time $O(d^{6.05})$ under the current matrix multiplication time.
arXiv Detail & Related papers (2022-02-14T00:31:34Z)
- Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z)
- Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z)
- Streaming Coresets for Symmetric Tensor Factorization [9.181791777532608]
We show how to factorize tensors efficiently in the streaming setting.
We introduce two novel algorithmic techniques: online filtering and kernelization.
We show applications of our algorithms in learning single-topic models.
arXiv Detail & Related papers (2020-06-01T19:55:34Z)