Asymptotic tensor rank is characterized by polynomials
- URL: http://arxiv.org/abs/2411.15789v1
- Date: Sun, 24 Nov 2024 11:35:38 GMT
- Title: Asymptotic tensor rank is characterized by polynomials
- Authors: Matthias Christandl, Koen Hoeberechts, Harold Nieuwboer, Péter Vrana, Jeroen Zuiddam
- Abstract summary: Strassen's asymptotic rank conjecture makes the bold claim that asymptotic tensor rank equals the largest dimension of the tensor.
We prove that asymptotic tensor rank is "computable from above", that is, for any real number $r$ there is an (efficient) algorithm that determines, given a tensor $T$, whether the asymptotic tensor rank of $T$ is at most $r$.
- Score: 9.730863921742644
- Abstract: Asymptotic tensor rank is notoriously difficult to determine. Indeed, determining its value for the $2\times 2$ matrix multiplication tensor would determine the matrix multiplication exponent, a long-standing open problem. On the other hand, Strassen's asymptotic rank conjecture makes the bold claim that asymptotic tensor rank equals the largest dimension of the tensor and is thus as easy to compute as matrix rank. Despite tremendous interest, much is still unknown about the structural and computational properties of asymptotic rank; for instance, whether it is computable. We prove that asymptotic tensor rank is "computable from above", that is, for any real number $r$ there is an (efficient) algorithm that determines, given a tensor $T$, whether the asymptotic tensor rank of $T$ is at most $r$. The algorithm has a simple structure; it consists of evaluating a finite list of polynomials on the tensor. Indeed, we prove that the sublevel sets of asymptotic rank are Zariski-closed (just like those of matrix rank). While we do not exhibit these polynomials explicitly, their mere existence has strong implications for the structure of asymptotic rank. As one such implication, we find that the set of values that asymptotic tensor rank takes, over all tensors, is well-ordered. In other words, any non-increasing sequence of asymptotic ranks stabilizes ("discreteness from above"). In particular, for the matrix multiplication exponent (which is an asymptotic rank) there is no sequence of exponents of bilinear maps that approximates it arbitrarily closely from above without being eventually constant. In other words, any upper bound on the matrix multiplication exponent that is close enough will "snap" to it. Previously, such discreteness results were known only for finite fields or for other tensor parameters (e.g., asymptotic slice rank). We obtain them for infinite fields like the complex numbers.
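The main structural result can be summarized in symbols; the notation below ($\widetilde{R}$ for asymptotic rank, $S_r$ for the sublevel set) is ours, not the paper's:

```latex
% Asymptotic tensor rank (standard definition) and the paper's main structural
% result, restated schematically in our own notation.
\[
  \widetilde{R}(T) \;=\; \lim_{n \to \infty} R\!\left(T^{\otimes n}\right)^{1/n}.
\]
% For every real r, the sublevel set
\[
  S_r \;=\; \bigl\{\, T \in \mathbb{F}^{n_1 \times n_2 \times n_3} \;:\; \widetilde{R}(T) \le r \,\bigr\}
\]
% is Zariski-closed: there exist finitely many polynomials p_1, ..., p_k with
\[
  T \in S_r \quad\Longleftrightarrow\quad p_1(T) = \dots = p_k(T) = 0,
\]
% so the question "is the asymptotic rank of T at most r?" is decided by
% evaluating p_1, ..., p_k on the entries of T.
```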
Related papers
- Overcomplete Tensor Decomposition via Koszul-Young Flattenings [63.01248796170617]
We give a new algorithm for decomposing an $n_1 \times n_2 \times n_3$ tensor as the sum of a minimal number of rank-1 terms.
We show that an even more general class of degree-$d$ flattenings cannot surpass rank $Cn$ for a constant $C = C(d)$.
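For context, the decomposition being minimized has the standard form below; this is the textbook definition of tensor rank, not a claim from the paper:

```latex
% Rank-1 decomposition of a three-way tensor; the tensor rank R(T) is the
% least r for which such a decomposition exists.
\[
  T \;=\; \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i,
  \qquad u_i \in \mathbb{F}^{n_1},\; v_i \in \mathbb{F}^{n_2},\; w_i \in \mathbb{F}^{n_3}.
\]
```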
arXiv Detail & Related papers (2024-11-21T17:41:09Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
This basis also lets us analyze a new problem of distinguishing between different ensembles.
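The setting is presumably the standard spiked tensor (tensor PCA) model; the formulation below is the usual one from the literature, not quoted from the paper:

```latex
% Spiked tensor model: a planted rank-1 signal plus Gaussian noise.
\[
  T \;=\; \lambda\, v^{\otimes d} \;+\; G,
  \qquad v \in \mathbb{S}^{n-1},\; G \ \text{a Gaussian noise tensor}.
\]
% Detecting or recovering v is believed to be computationally hard when the
% signal strength lambda falls below a critical threshold, even in regimes
% where recovery remains statistically possible.
```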
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- The Tensor as an Informational Resource [1.3044677039636754]
A tensor is a multidimensional array of numbers that can be used to store data, encode a computational relation and represent quantum entanglement.
We propose a family of information-theoretically constructed preorders on tensors, which can be used to compare tensors with each other and to assess the existence of transformations between them.
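A classical example of such a preorder is restriction, under which one tensor can be transformed into another by local linear maps; the paper's information-theoretic preorders are constructed in the same spirit (the formula below is the standard restriction preorder, not the paper's construction):

```latex
% Restriction preorder on three-way tensors.
\[
  S \;\ge\; T
  \quad\Longleftrightarrow\quad
  T = (A \otimes B \otimes C)\, S
  \ \text{ for some linear maps } A, B, C
  \ \text{acting on the three tensor factors.}
\]
```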
arXiv Detail & Related papers (2023-11-03T18:47:39Z)
- On the Accuracy of Hotelling-Type Asymmetric Tensor Deflation: A Random Tensor Analysis [14.809070996761013]
Hotelling-type tensor deflation is studied in the presence of noise.
We analytically characterize the estimated singular values and the alignment of estimated and true singular vectors at each step of the deflation procedure.
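A minimal sketch of the deflation scheme being analyzed, assuming the standard recipe (estimate a dominant rank-1 term by tensor power iteration, subtract it, repeat); the code handles the symmetric case for simplicity and all names are ours, whereas the paper treats the asymmetric setting:

```python
import numpy as np

def rank1_power_iteration(T, iters=200, seed=0):
    """Estimate the dominant rank-1 component of a symmetric n x n x n tensor."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, u, u)  # tensor-vector-vector contraction
        u /= np.linalg.norm(u)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # estimated singular value
    return lam, u

def hotelling_deflation(T, r):
    """Peel off r rank-1 components one at a time (Hotelling-type deflation)."""
    components, residual = [], T.copy()
    for _ in range(r):
        lam, u = rank1_power_iteration(residual)
        components.append((lam, u))
        residual = residual - lam * np.einsum('i,j,k->ijk', u, u, u)
    return components
```

The paper's contribution is a random-tensor characterization of how the estimation errors made at each subtraction step propagate to later steps.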
arXiv Detail & Related papers (2023-10-28T14:40:10Z)
- Discreteness of asymptotic tensor ranks [7.916635054977068]
We show that over finite fields, the asymptotic subrank and the asymptotic slice rank have no accumulation points, and that over the complex numbers, the asymptotic slice rank has no accumulation points.
Central to our approach are two new general lower bounds on the subrank of tensors, which measures how much a tensor can be diagonalized.
Our proofs rely on new lower bounds on the maximum rank in matrix subspaces that are obtained by slicing a three-tensor in the three different directions.
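The quantity in question can be pinned down with the standard definition (our notation):

```latex
% Subrank: the size of the largest diagonal tensor obtainable from T by
% linear maps on the three factors, where <= is the restriction preorder.
\[
  Q(T) \;=\; \max\bigl\{\, s \;:\; \langle s \rangle \le T \,\bigr\},
  \qquad
  \langle s \rangle = \sum_{i=1}^{s} e_i \otimes e_i \otimes e_i.
\]
% Slicing T in one direction yields a matrix subspace spanned by the slices
% T_{i,*,*} (similarly for the other two directions); lower bounds on the
% maximum rank attained in these subspaces yield lower bounds on Q.
```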
arXiv Detail & Related papers (2023-06-02T17:42:39Z)
- A Gap in the Subrank of Tensors [2.7992435001846827]
The subrank of tensors is a measure of how much a tensor can be "diagonalized".
We prove that there is a gap in the subrank when taking large powers under the tensor product.
We also prove that there is a second gap in the possible rates of growth.
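In symbols, with $Q$ the subrank, the gap concerns the asymptotic subrank (standard definition; the exact gap constant is stated in the paper and omitted here):

```latex
% Asymptotic subrank via large tensor powers.
\[
  \widetilde{Q}(T) \;=\; \lim_{n \to \infty} Q\!\left(T^{\otimes n}\right)^{1/n}.
\]
% The gap phenomenon: \widetilde{Q}(T) never takes values strictly between 1
% and some universal constant c > 1, and a second gap further constrains the
% possible growth rates beyond that.
```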
arXiv Detail & Related papers (2022-12-03T18:38:28Z)
- Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials in the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
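Written out, the planted model and the threshold read roughly as follows (our normalization; the paper's exact scaling may differ):

```latex
% Random overcomplete order-3 model with one slightly larger component.
\[
  T \;=\; \sum_{i=1}^{r} \lambda_i\, a_i^{\otimes 3},
  \qquad a_i \ \text{i.i.d. random unit vectors in } \mathbb{R}^n,
  \qquad \lambda_1 > \lambda_2 = \dots = \lambda_r.
\]
% Low-degree polynomials in the entries of T estimate a_1 accurately when
% r << n^{3/2} and fail when r >> n^{3/2}.
```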
arXiv Detail & Related papers (2022-11-10T00:40:37Z)
- Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time, for an order-$q$ tensor $A$.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
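For orientation, the tensor train format itself is classically computed by the TT-SVD sweep; the sketch below is that textbook algorithm (not the paper's bicriteria one, which presumably replaces the exact SVDs with faster approximate steps), with names of our choosing:

```python
import numpy as np

def tt_svd(A, ranks):
    """Decompose a q-way array A into tensor-train cores via sequential SVDs.

    ranks: target TT-ranks (r_1, ..., r_{q-1}).
    Returns cores G_k of shape (r_{k-1}, n_k, r_k), with r_0 = r_q = 1.
    """
    dims = A.shape
    q = len(dims)
    cores, r_prev = [], 1
    M = A.reshape(dims[0], -1)
    for k in range(q - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(ranks[k], len(s))                      # truncate to the target rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))       # last core absorbs the remainder
    return cores
```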
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
- Asymptotic Tensor Powers of Banach Spaces [77.34726150561087]
We show that Euclidean spaces are characterized by the property that their tensor radius equals their dimension.
We also show that the tensor radius of an operator whose domain or range is Euclidean is equal to its nuclear norm.
arXiv Detail & Related papers (2021-10-25T11:51:12Z)
- Reconstruction Algorithms for Low-Rank Tensors and Depth-3 Multilinear Circuits [4.129484350382891]
We give new and efficient black-box reconstruction algorithms for some classes of depth-$3$ arithmetic circuits.
Our algorithm works over all fields of characteristic $0$ or of large enough characteristic.
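For reference, a depth-3 ($\Sigma\Pi\Sigma$) multilinear circuit computes a sum of products of linear forms over disjoint blocks of variables; in the set-multilinear case this is exactly a tensor-rank decomposition, which is the link between the two reconstruction problems (standard background, not a statement from the paper):

```latex
% Set-multilinear depth-3 circuit with top fan-in s, over a partition of the
% variables into blocks x_1, ..., x_d.
\[
  C(x) \;=\; \sum_{i=1}^{s} \prod_{j=1}^{d} \ell_{i,j}\!\left(x_{j}\right),
\]
% Reading each product of linear forms as a rank-1 term shows that C computes
% a tensor of rank at most s.
```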
arXiv Detail & Related papers (2021-05-04T20:45:07Z)
- On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration [115.1954841020189]
We study the asymptotic and non-asymptotic properties of stochastic approximation procedures with Polyak-Ruppert averaging.
We prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
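A minimal sketch of the recursion being analyzed, assuming the standard linear stochastic approximation update $\theta_{t+1} = \theta_t - \eta\,(A\theta_t - b + \xi_t)$ with i.i.d. noise $\xi_t$ (our notation throughout):

```python
import numpy as np

def linear_sa_polyak_ruppert(A, b, eta, T, noise_std=1.0, seed=0):
    """Run constant-step linear stochastic approximation and return the
    Polyak-Ruppert average of the iterates."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(b.shape[0])
    avg = np.zeros_like(theta)
    for t in range(1, T + 1):
        noise = noise_std * rng.standard_normal(b.shape[0])
        theta = theta - eta * (A @ theta - b + noise)  # noisy step toward A theta = b
        avg += (theta - avg) / t                       # running average of iterates
    return avg
```

The CLT in the paper concerns exactly this averaged iterate: with the step size held fixed and the number of iterations going to infinity, the rescaled error of the average is asymptotically Gaussian.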
arXiv Detail & Related papers (2020-04-09T17:54:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.