Fast algorithm for overcomplete order-3 tensor decomposition
- URL: http://arxiv.org/abs/2202.06442v1
- Date: Mon, 14 Feb 2022 00:31:34 GMT
- Title: Fast algorithm for overcomplete order-3 tensor decomposition
- Authors: Jingqiu Ding, Tommaso d'Orsi, Chih-Hung Liu, Stefan Tiegel, David
Steurer
- Abstract summary: We develop the first fast spectral algorithm to decompose a random third-order tensor over R^d of rank up to O(d^{3/2}/polylog(d)).
Our algorithm can recover all components in time O(d^{6.05}) under the current matrix multiplication time.
- Score: 7.638467133689933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop the first fast spectral algorithm to decompose a random
third-order tensor over R^d of rank up to O(d^{3/2}/polylog(d)). Our algorithm
only involves simple linear algebra operations and can recover all components
in time O(d^{6.05}) under the current matrix multiplication time.
Prior to this work, comparable guarantees could only be achieved via
sum-of-squares [Ma, Shi, Steurer 2016]. In contrast, fast algorithms [Hopkins,
Schramm, Shi, Steurer 2016] could only decompose tensors of rank at most
O(d^{4/3}/polylog(d)).
Our algorithmic result rests on two key ingredients. A clean lifting of the
third-order tensor to a sixth-order tensor, which can be expressed in the
language of tensor networks. A careful decomposition of the tensor network into
a sequence of rectangular matrix multiplications, which allows us to have a
fast implementation of the algorithm.
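To make these two ingredients concrete, here is a minimal NumPy sketch. It is illustrative only: the dimensions, the rank, and the particular contraction are toy choices of ours, not the paper's exact tensor network. It shows the shape of the order-6 lift and how a tensor-network contraction reduces to a single rectangular matrix multiplication.
```python
import numpy as np

d, m = 8, 12                                   # dimension and rank; m > d is the overcomplete regime
rng = np.random.default_rng(0)
A = rng.standard_normal((d, m)) / np.sqrt(d)   # random components a_1, ..., a_m as columns

# Random third-order tensor T = sum_i a_i (x) a_i (x) a_i over R^d.
T = np.einsum('ia,ja,ka->ijk', A, A, A)

# Target of the lifting: the order-6 tensor sum_i (a_i (x) a_i)^{(x)3},
# viewed as an order-3 tensor over R^{d^2} with components vec(a_i a_i^T).
# (Formed from A here only for illustration; the algorithm must obtain it
# from T alone via a tensor-network contraction.)
B = np.einsum('ia,ja->ija', A, A).reshape(d * d, m)
L = np.einsum('pa,qa,ra->pqr', B, B, B)        # shape (d^2, d^2, d^2)

# Key implementation idea: a pairwise tensor-network contraction such as
# sum_k T[i,j,k] * T[l,m,k] is exactly one rectangular matrix multiplication.
T_flat = T.reshape(d * d, d)                   # group modes (i, j) against mode k
M = T_flat @ T_flat.T                          # (d^2 x d) times (d x d^2)
assert np.allclose(M.reshape(d, d, d, d), np.einsum('ijk,lmk->ijlm', T, T))
```
Scheduling the whole network as a sequence of such rectangular products, instead of materializing high-order intermediate tensors entrywise, is what yields the O(d^{6.05}) running time under fast matrix multiplication.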
Related papers
- Do you know what q-means? [50.045011844765185]
Clustering is one of the most important tools for analysis of large datasets.
We present an improved version of the "$q$-means" algorithm for clustering.
We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O\big(\frac{k^2}{\varepsilon^2}(\sqrt{k}d + \log(Nd))\big)$ time.
arXiv Detail & Related papers (2023-08-18T17:52:12Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials in the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z)
- Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
- A Robust Spectral Algorithm for Overcomplete Tensor Decomposition [10.742675209112623]
We give a spectral algorithm for decomposing overcomplete order-4 tensors.
Our algorithm is robust to adversarial perturbations of bounded spectral norm.
Our algorithm avoids semidefinite programming and may be implemented as a series of basic linear-algebraic operations.
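As a toy illustration of the kind of basic linear-algebraic step such spectral methods build on (a sketch with synthetic components of our own choosing, not the paper's robust algorithm): the square flattening of an order-4 tensor $\sum_i a_i^{\otimes 4}$ is a PSD matrix whose column space is spanned by the vectors $\mathrm{vec}(a_i a_i^T)$.
```python
import numpy as np

d, m = 6, 10                                   # overcomplete: m > d
rng = np.random.default_rng(1)
A = rng.standard_normal((d, m)) / np.sqrt(d)   # synthetic components

# Square (d^2 x d^2) flattening of S = sum_i a_i^{(x)4}.
B = np.einsum('ia,ja->ija', A, A).reshape(d * d, m)   # columns vec(a_i a_i^T)
S = B @ B.T

# Its nonzero eigenspace recovers span{vec(a_i a_i^T)}, the starting point
# for recovering the individual components.
eigvals, eigvecs = np.linalg.eigh(S)
print((eigvals > 1e-8).sum())                  # m, with probability 1 here
```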
arXiv Detail & Related papers (2022-03-05T17:25:37Z)
- Reconstruction Algorithms for Low-Rank Tensors and Depth-3 Multilinear Circuits [4.129484350382891]
We give new and efficient black-box reconstruction algorithms for some classes of depth-$3$ arithmetic circuits.
Our algorithms work over all fields of characteristic $0$ or of large enough characteristic.
arXiv Detail & Related papers (2021-05-04T20:45:07Z)
- Fast Tensor Disentangling Algorithm [0.0]
We introduce an approximate, fast, and simple algorithm to optimize disentangling unitary tensors.
Our algorithm is faster than previous iterative algorithms and often results in a residual entanglement entropy that is within 10 to 40% of the minimum.
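A minimal sketch of the objective being optimized (our own toy setup; the tensor shapes and the random unitary are illustrative, and the paper's fast update rule is not reproduced): a disentangling unitary acts on two legs of a tensor so as to reduce the entanglement entropy across a chosen cut.
```python
import numpy as np

chi, d = 4, 2
rng = np.random.default_rng(2)
psi = rng.standard_normal((chi, d, d, chi))    # 4-leg tensor psi[a, i, j, b]
psi /= np.linalg.norm(psi)

def cut_entropy(t):
    # Entanglement entropy across the (a, i) | (j, b) bipartition, via SVD.
    s = np.linalg.svd(t.reshape(chi * d, d * chi), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

# Apply a unitary U on the (i, j) legs; a disentangling algorithm searches
# for the U minimizing cut_entropy. Here U is random, only to show how the
# objective is evaluated.
U, _ = np.linalg.qr(rng.standard_normal((d * d, d * d)))
psi2 = np.einsum('ijkl,aklb->aijb', U.reshape(d, d, d, d), psi)
print(cut_entropy(psi), cut_entropy(psi2))
```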
arXiv Detail & Related papers (2021-04-16T18:00:01Z)
- Exact nuclear norm, completion and decomposition for random overcomplete tensors via degree-4 SOS [0.7233897166339269]
We show that simple semidefinite programs inspired by degree-$4$ SOS can exactly solve the tensor nuclear norm, tensor decomposition, and tensor completion problems on tensors with random asymmetric components.
We show that w.h.p. these semidefinite programs can exactly find the nuclear norm and components of an $(n\times n\times n)$-tensor $\mathcal{T}$ with $m\leq n^{3/2}/\mathrm{polylog}(n)$ random asymmetric components.
arXiv Detail & Related papers (2020-11-18T17:27:36Z)
- Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on an over-parameterized objective can go beyond the lazy training regime and exploit certain low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z)
- What if Neural Networks had SVDs? [66.91160214071088]
Various neural networks employ time-consuming matrix operations, such as matrix inversion.
We present an algorithm that is fast enough to speed up several such matrix operations.
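A toy sketch of why a factored parameterization helps (our own illustration, not the paper's algorithm): if a weight matrix is kept in SVD form W = U diag(s) V^T with U, V orthogonal, then inversion costs only a diagonal flip plus two matrix products.
```python
import numpy as np

d = 4
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((d, d)))   # orthogonal factor
V, _ = np.linalg.qr(rng.standard_normal((d, d)))   # orthogonal factor
s = rng.uniform(0.5, 2.0, size=d)                  # singular values

W = U @ np.diag(s) @ V.T
W_inv = V @ np.diag(1.0 / s) @ U.T                 # cheap given the factors
assert np.allclose(W_inv, np.linalg.inv(W))
```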
arXiv Detail & Related papers (2020-09-29T12:58:52Z)