Discreteness of asymptotic tensor ranks
- URL: http://arxiv.org/abs/2306.01718v3
- Date: Tue, 24 Sep 2024 08:21:53 GMT
- Title: Discreteness of asymptotic tensor ranks
- Authors: Jop Briët, Matthias Christandl, Itai Leigh, Amir Shpilka, Jeroen Zuiddam
- Abstract summary: We show that over any finite field the asymptotic subrank and the asymptotic slice rank have no accumulation points, and that over the complex numbers, the asymptotic slice rank has no accumulation points.
Central to our approach are two new general lower bounds on the asymptotic subrank of tensors, which measures how much a tensor can be diagonalized.
Our proofs rely on new lower bounds on the maximum rank in matrix subspaces that are obtained by slicing a three-tensor in the three different directions.
- Score: 7.916635054977068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor parameters that are amortized or regularized over large tensor powers, often called "asymptotic" tensor parameters, play a central role in several areas including algebraic complexity theory (constructing fast matrix multiplication algorithms), quantum information (entanglement cost and distillable entanglement), and additive combinatorics (bounds on cap sets, sunflower-free sets, etc.). Examples are the asymptotic tensor rank, asymptotic slice rank and asymptotic subrank. Recent works (Costa-Dalai, Blatter-Draisma-Rupniewski, Christandl-Gesmundo-Zuiddam) have investigated notions of discreteness (no accumulation points) or "gaps" in the values of such tensor parameters. We prove a general discreteness theorem for asymptotic tensor parameters of order-three tensors and use this to prove that (1) over any finite field (and in fact any finite set of coefficients in any field), the asymptotic subrank and the asymptotic slice rank have no accumulation points, and (2) over the complex numbers, the asymptotic slice rank has no accumulation points. Central to our approach are two new general lower bounds on the asymptotic subrank of tensors, which measures how much a tensor can be diagonalized. The first lower bound says that the asymptotic subrank of any concise three-tensor is at least the cube-root of the smallest dimension. The second lower bound says that any concise three-tensor that is "narrow enough" (has one dimension much smaller than the other two) has maximal asymptotic subrank. Our proofs rely on new lower bounds on the maximum rank in matrix subspaces that are obtained by slicing a three-tensor in the three different directions. We prove that for any concise tensor, the product of any two such maximum ranks must be large, and as a consequence there are always two distinct directions with large max-rank.
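The abstract's key technique bounds the maximum matrix rank attained in the three subspaces of matrices obtained by slicing a three-tensor in each direction. The following sketch is not from the paper: the function name and the generic-combination heuristic are our own. It uses the standard fact that, over an infinite field, a random linear combination of the slices attains the subspace's maximum rank with probability 1, and checks it on the diagonal unit tensor (T[i,i,i] = 1), whose slice subspaces consist of diagonal matrices.

```python
import numpy as np

def max_slice_rank(T, axis, rng):
    """Estimate the maximum rank in the matrix subspace spanned by the
    slices of the 3-tensor T along the given axis, via a generic
    (random) linear combination of those slices."""
    slices = np.moveaxis(T, axis, 0)               # shape (k, m, n)
    coeffs = rng.standard_normal(slices.shape[0])  # generic coefficients
    generic = np.tensordot(coeffs, slices, axes=1) # random element of the subspace
    return int(np.linalg.matrix_rank(generic))

rng = np.random.default_rng(0)

# Diagonal unit tensor <3>: T[i, i, i] = 1. Every slice in any direction
# is a diagonal matrix, so a generic combination has full rank 3.
T = np.zeros((3, 3, 3))
for i in range(3):
    T[i, i, i] = 1.0

ranks = [max_slice_rank(T, axis, rng) for axis in range(3)]
print(ranks)  # [3, 3, 3]
```

For a concise tensor the paper proves that the product of any two such maximum ranks must be large, so at least two of the three directions always have large max-rank; the diagonal tensor above is the extreme case where all three are maximal.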
Related papers
- Asymptotic tensor rank is characterized by polynomials [9.730863921742644]
Strassen's asymptotic rank conjecture makes the bold claim that the asymptotic tensor rank of a tensor equals its largest dimension.
We prove that asymptotic tensor rank is "computable from above", that is, for any real $r$ there is an (efficient) algorithm that determines, given a tensor $T$, whether the asymptotic tensor rank of $T$ is at most $r$.
arXiv Detail & Related papers (2024-11-24T11:35:38Z) - Overcomplete Tensor Decomposition via Koszul-Young Flattenings [63.01248796170617]
We give a new algorithm for decomposing an $n_1 \times n_2 \times n_3$ tensor as the sum of a minimal number of rank-1 terms.
We show that an even more general class of degree-$d$ flattenings cannot surpass rank $Cn$ for a constant $C = C(d)$.
arXiv Detail & Related papers (2024-11-21T17:41:09Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - The next gap in the subrank of 3-tensors [3.8073142980733]
We determine exactly the next gap, showing that the subrank and slice rank of any nonzero 3-tensor are discrete (in several regimes).
arXiv Detail & Related papers (2023-07-12T12:14:54Z) - A Gap in the Subrank of Tensors [2.7992435001846827]
The subrank of tensors is a measure of how much a tensor can be "diagonalized".
We prove that there is a gap in the subrank when taking large powers under the tensor product.
We also prove that there is a second gap in the possible rates of growth.
arXiv Detail & Related papers (2022-12-03T18:38:28Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials in the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Persistent Tensors and Multiqudit Entanglement Transformation [0.07252027234425334]
We present three specific families of persistent tensors for which the lower bound is tight.
We show that there is a chain of degenerations between these three families of minimal-rank persistent tensors that can be used to study the entanglement between them.
arXiv Detail & Related papers (2022-11-01T18:00:00Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Dimension of Tensor Network varieties [68.8204255655161]
We determine an upper bound on the dimension of the tensor network variety.
A refined upper bound is given in cases relevant for applications such as varieties of matrix product states and projected entangled pairs states.
arXiv Detail & Related papers (2021-01-08T18:24:50Z) - On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration [115.1954841020189]
We study the asymptotic and non-asymptotic properties of linear stochastic approximation procedures with Polyak-Ruppert averaging.
We prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
arXiv Detail & Related papers (2020-04-09T17:54:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.