A Gap in the Subrank of Tensors
- URL: http://arxiv.org/abs/2212.01668v2
- Date: Fri, 24 Nov 2023 11:21:07 GMT
- Title: A Gap in the Subrank of Tensors
- Authors: Matthias Christandl and Fulvio Gesmundo and Jeroen Zuiddam
- Abstract summary: The subrank of tensors is a measure of how much a tensor can be "diagonalized".
We prove that there is a gap in the subrank when taking large powers under the tensor product.
We also prove that there is a second gap in the possible rates of growth.
- Score: 2.7992435001846827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The subrank of tensors is a measure of how much a tensor can be
"diagonalized". This parameter was introduced by Strassen to study fast
matrix multiplication algorithms in algebraic complexity theory and is closely
related to many central tensor parameters (e.g. slice rank, partition rank,
analytic rank, geometric rank, G-stable rank) and problems in combinatorics,
computer science and quantum information theory. Strassen (J. Reine Angew.
Math., 1988) proved that there is a gap in the subrank when taking large powers
under the tensor product: either the subrank of all powers is at most one, or
it grows as a power of a constant strictly larger than one. In this paper, we
precisely determine this constant for tensors of any order. Additionally, for
tensors of order three, we prove that there is a second gap in the possible
rates of growth. Our results strengthen the recent work of Costa and Dalai (J.
Comb. Theory, Ser. A, 2021), who proved a similar gap for the slice rank. Our
theorem on the subrank has wider applications by implying such gaps not only
for the slice rank, but for any "normalized monotone". In order to prove the
main result, we characterize when a tensor has a very structured tensor (the
W-tensor) in its orbit closure. Our methods include degenerations in
Grassmannians, which may be of independent interest.
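For readers unfamiliar with the terminology, the display below recalls the standard definitions behind the abstract (restriction, the diagonal unit tensor, the subrank, its asymptotic growth rate, and the order-three W-tensor). The notation is standard rather than quoted from the paper, and the exact gap constants are stated only in the paper itself.

```latex
% Restriction preorder: S is obtained from T by acting with linear maps on each factor.
\[
  S \le T \quad \iff \quad S = (A_1 \otimes A_2 \otimes A_3)\, T
  \quad \text{for some linear maps } A_1, A_2, A_3 .
\]
% The r x r x r diagonal (unit) tensor and the subrank Q(T):
\[
  \langle r \rangle = \sum_{i=1}^{r} e_i \otimes e_i \otimes e_i,
  \qquad
  Q(T) = \max \{\, r : \langle r \rangle \le T \,\}.
\]
% Growth under tensor (Kronecker) powers; the gap theorem concerns this rate:
% either Q(T^{\otimes n}) \le 1 for all n, or the limit below exceeds an explicit constant c > 1.
\[
  \widetilde{Q}(T) = \lim_{n \to \infty} Q\!\left(T^{\otimes n}\right)^{1/n}.
\]
% The order-three W-tensor appearing in the characterization via orbit closures:
\[
  W = e_1 \otimes e_1 \otimes e_2 + e_1 \otimes e_2 \otimes e_1 + e_2 \otimes e_1 \otimes e_1 .
\]
```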
Related papers
- Asymptotic tensor rank is characterized by polynomials [9.730863921742644]
Strassen's asymptotic rank conjecture makes the bold claim that the asymptotic tensor rank equals the largest dimension of the tensor.
We prove that asymptotic tensor rank is "computable from above", that is, for any real $r$ there is an (efficient) algorithm that determines, given a tensor $T$, whether the asymptotic rank of $T$ is at most $r$.
arXiv Detail & Related papers (2024-11-24T11:35:38Z) - Overcomplete Tensor Decomposition via Koszul-Young Flattenings [63.01248796170617]
We give a new algorithm for decomposing an $n_1 \times n_2 \times n_3$ tensor as the sum of a minimal number of rank-1 terms.
We show that an even more general class of degree-$d$ flattenings cannot surpass rank $Cn$ for a constant $C = C(d)$.
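For context, the following numpy sketch shows what "sum of rank-1 terms" means for a three-way tensor; it is a generic illustration of CP rank, not the Koszul-Young flattening algorithm of the paper.

```python
# Illustration of a 3-way tensor written as a sum of rank-1 terms (outer products).
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, r = 4, 5, 6, 3

# A rank-1 term is an outer product a ⊗ b ⊗ c; the tensor rank of T is the
# minimal number of such terms needed to express T.
A = rng.standard_normal((n1, r))
B = rng.standard_normal((n2, r))
C = rng.standard_normal((n3, r))
T = np.einsum('ik,jk,lk->ijl', A, B, C)   # T = sum_k A[:,k] ⊗ B[:,k] ⊗ C[:,k]

print(T.shape)  # (4, 5, 6); by construction T has rank at most 3
```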
arXiv Detail & Related papers (2024-11-21T17:41:09Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that tensor PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, tensor cumulants, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
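As a rough illustration of the setting, here is a sketch of the standard spiked tensor PCA model (a planted rank-1 signal plus Gaussian noise); the exact ensembles and normalization studied in the paper may differ.

```python
# Spiked tensor PCA sketch: observe Y = snr * v⊗v⊗v + noise and try to recover v.
import numpy as np

rng = np.random.default_rng(1)
n, snr = 30, 3.0                      # dimension and signal magnitude

v = rng.standard_normal(n)
v /= np.linalg.norm(v)                # planted unit vector

noise = rng.standard_normal((n, n, n))
Y = snr * np.einsum('i,j,k->ijk', v, v, v) + noise / np.sqrt(n)

# Recovery is believed to become computationally hard below a critical value
# of snr, even while it remains information-theoretically possible.
corr = np.einsum('ijk,i,j,k->', Y, v, v, v)
print(corr)  # correlation of the observation with the planted signal
```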
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports tensors up to 8,000x larger and achieves a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z) - The Tensor as an Informational Resource [1.3044677039636754]
A tensor is a multidimensional array of numbers that can be used to store data, encode a computational relation and represent quantum entanglement.
We propose a family of information-theoretically constructed preorders on tensors, which can be used to compare tensors with each other and to assess the existence of transformations between them.
arXiv Detail & Related papers (2023-11-03T18:47:39Z) - Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting [64.0722630873758]
We consider a rather general and broad class of Markov chains, Ito chains, that look like the Euler-Maruyama discretization of some stochastic differential equation.
We prove a bound on the $W_2$-distance between the laws of our Ito chain and the corresponding differential equation.
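Since the summary refers to Euler-Maruyama discretizations, here is a minimal generic sketch of that scheme; the drift, diffusion, and step size are illustrative choices, not the paper's setting.

```python
# Euler-Maruyama discretization of dX = b(X) dt + sigma(X) dW.
import numpy as np

def euler_maruyama(b, sigma, x0, h, n_steps, rng):
    """Simulate X_{k+1} = X_k + h * b(X_k) + sqrt(h) * sigma(X_k) * xi_k."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)
        x = x + h * b(x) + np.sqrt(h) * sigma(x) * xi
        path.append(x.copy())
    return np.array(path)

# Ornstein-Uhlenbeck example: drift b(x) = -x, constant diffusion sigma(x) = 1.
rng = np.random.default_rng(2)
path = euler_maruyama(lambda x: -x, lambda x: 1.0, x0=[1.0], h=0.01, n_steps=1000, rng=rng)
print(path[-1])
```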
arXiv Detail & Related papers (2023-10-09T18:38:56Z) - Discreteness of asymptotic tensor ranks [7.916635054977068]
We show that over finite fields the asymptotic subrank and the asymptotic slice rank have no accumulation points, and that over the complex numbers, the asymptotic slice rank has no accumulation points.
Central to our approach are two new general lower bounds on the subrank of tensors, which measures how much a tensor can be diagonalized.
Our proofs rely on new lower bounds on the maximum rank in matrix subspaces that are obtained by slicing a three-tensor in the three different directions.
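To make "slicing a three-tensor in the three different directions" concrete, here is a small numpy sketch of the three families of matrix slices and the ranks of linear combinations within each spanned subspace; it illustrates the objects involved, not the paper's lower-bound argument.

```python
# Slicing a 3-tensor into matrices along each of its three directions.
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 4, 5))

slices_1 = [T[i, :, :] for i in range(T.shape[0])]   # 4x5 matrices
slices_2 = [T[:, j, :] for j in range(T.shape[1])]   # 3x5 matrices
slices_3 = [T[:, :, k] for k in range(T.shape[2])]   # 3x4 matrices

# Ranks of random linear combinations probe the maximum rank in each matrix subspace.
for name, slices in [("dir1", slices_1), ("dir2", slices_2), ("dir3", slices_3)]:
    coeffs = rng.standard_normal(len(slices))
    M = sum(c * S for c, S in zip(coeffs, slices))
    print(name, np.linalg.matrix_rank(M))
```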
arXiv Detail & Related papers (2023-06-02T17:42:39Z) - Faster Robust Tensor Power Method for Arbitrary Order [15.090593955414137]
The tensor power method (TPM) is one of the widely used techniques in the decomposition of tensors.
We apply a sketching method and achieve a running time of $\widetilde{O}(n^{p-1})$ for an order-$p$, dimension-$n$ tensor.
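For reference, here is a minimal numpy version of the basic tensor power iteration that TPM variants build on; the sketching-based acceleration from the paper is not reproduced here.

```python
# Basic tensor power iteration for a symmetric order-3 tensor:
# x <- T(I, x, x) / ||T(I, x, x)||.
import numpy as np

rng = np.random.default_rng(4)
n = 8
v = rng.standard_normal(n); v /= np.linalg.norm(v)
T = np.einsum('i,j,k->ijk', v, v, v)          # rank-1 symmetric test tensor

x = rng.standard_normal(n); x /= np.linalg.norm(x)
for _ in range(20):
    y = np.einsum('ijk,j,k->i', T, x, x)       # contract T with x in two modes
    x = y / np.linalg.norm(y)

print(abs(x @ v))   # close to 1: the iteration recovers the planted component
```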
arXiv Detail & Related papers (2023-06-01T07:12:00Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials of the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
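A small sketch of the random model described above, with $r$ random rank-1 components and one component slightly larger in norm; the normalization is an illustrative assumption, not the paper's exact setup.

```python
# Random order-3 model: sum of r rank-1 components, one slightly boosted in norm.
import numpy as np

rng = np.random.default_rng(6)
n, r, boost = 20, 50, 1.2          # recovery is conjectured easy when r << n**1.5

components = rng.standard_normal((r, n))
components /= np.linalg.norm(components, axis=1, keepdims=True)

weights = np.ones(r)
weights[0] = boost                  # the planted, slightly larger component

T = np.einsum('k,ki,kj,kl->ijl', weights, components, components, components)
print(T.shape)  # the task: estimate components[0] from the entries of T
```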
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Persistent Tensors and Multiqudit Entanglement Transformation [0.07252027234425334]
We present three specific families of persistent tensors, for which the lower bound is tight.
We show that there is a chain of degenerations between these three families of minimal-rank persistent tensors that can be used to study the entanglement between them.
arXiv Detail & Related papers (2022-11-01T18:00:00Z) - Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor
Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \varepsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
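As background, here is a compact numpy implementation of the classical TT-SVD, which computes a tensor-train decomposition by sequential truncated SVDs; it is not the bicriteria algorithm from the paper.

```python
# Classical TT-SVD: sequential truncated SVDs of reshaped unfoldings.
import numpy as np

def tt_svd(T, max_rank):
    """Return TT cores G_k of shape (r_{k-1}, n_k, r_k) approximating T."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

rng = np.random.default_rng(5)
T = rng.standard_normal((4, 5, 6, 7))
cores = tt_svd(T, max_rank=10)
print([c.shape for c in cores])   # TT cores chained along the shared ranks
```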
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.