On the Accuracy of Hotelling-Type Tensor Deflation: A Random Tensor
Analysis
- URL: http://arxiv.org/abs/2211.09004v1
- Date: Wed, 16 Nov 2022 16:01:56 GMT
- Title: On the Accuracy of Hotelling-Type Tensor Deflation: A Random Tensor
Analysis
- Authors: Mohamed El Amine Seddik, Maxime Guillaud, Alexis Decurninge
- Abstract summary: A rank-$r$ asymmetric spiked model of the form $\sum_{i=1}^r \beta_i A_i + W$ is considered.
We provide a study of Hotelling-type deflation in the large dimensional regime.
- Score: 16.28927188636617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Leveraging recent advances in random tensor theory, we consider in this
paper a rank-$r$ asymmetric spiked tensor model of the form $\sum_{i=1}^r
\beta_i A_i + W$ where $\beta_i\geq 0$ and the $A_i$'s are rank-one tensors
such that $\langle A_i, A_j \rangle\in [0, 1]$ for $i\neq j$, based on which we
provide an asymptotic study of Hotelling-type tensor deflation in the large
dimensional regime. Specifically, our analysis characterizes the singular
values and alignments at each step of the deflation procedure, for
asymptotically large tensor dimensions. This can be used to construct
consistent estimators of different quantities involved in the underlying
problem, such as the signal-to-noise ratios $\beta_i$ or the alignments between
the different signal components $\langle A_i, A_j \rangle$.
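To make the setting concrete, below is a minimal numerical sketch of the rank-2 case of this model and of Hotelling-type deflation for an order-3 tensor: estimate the dominant rank-one term by alternating power iteration, subtract it, and repeat. The $1/\sqrt{n}$ noise normalization, the parameter values, and the power-iteration routine are illustrative assumptions, not specifics taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

def rank1_power(T, n_iter=100, n_restarts=10):
    """Dominant rank-one approximation of an order-3 tensor via
    alternating power iteration with random restarts."""
    best = (-np.inf, None, None, None)
    for _ in range(n_restarts):
        u, v, w = (unit(rng.standard_normal(s)) for s in T.shape)
        for _ in range(n_iter):
            u = unit(np.einsum('ijk,j,k->i', T, v, w))
            v = unit(np.einsum('ijk,i,k->j', T, u, w))
            w = unit(np.einsum('ijk,i,j->k', T, u, v))
        sigma = np.einsum('ijk,i,j,k->', T, u, v, w)
        if sigma > best[0]:
            best = (sigma, u, v, w)
    return best

def hotelling_deflation(T, r):
    """Hotelling-type deflation: repeatedly subtract the current
    best rank-one approximation, r times."""
    comps = []
    for _ in range(r):
        sigma, u, v, w = rank1_power(T)
        comps.append((sigma, u, v, w))
        T = T - sigma * np.einsum('i,j,k->ijk', u, v, w)
    return comps

# Rank-2 spiked model T = beta1*A1 + beta2*A2 + W with correlated spikes:
# per mode <a_m, b_m> = rho, hence <A1, A2> = rho**3, which lies in [0, 1].
n, beta, rho = 50, (8.0, 5.0), 0.5
modes = []
for _ in range(3):
    a = unit(rng.standard_normal(n))
    c = rng.standard_normal(n)
    c = unit(c - (c @ a) * a)                # unit vector orthogonal to a
    modes.append((a, rho * a + np.sqrt(1 - rho**2) * c))
A1 = np.einsum('i,j,k->ijk', *(m[0] for m in modes))
A2 = np.einsum('i,j,k->ijk', *(m[1] for m in modes))
W = rng.standard_normal((n, n, n)) / np.sqrt(n)   # noise scaling: an assumption
T = beta[0] * A1 + beta[1] * A2 + W

for step, (sig, u, v, w) in enumerate(hotelling_deflation(T, r=2), start=1):
    A = A1 if step == 1 else A2
    align = abs(np.einsum('ijk,i,j,k->', A, u, v, w))
    print(f"step {step}: estimated singular value {sig:.2f}, alignment {align:.3f}")
```

In the large-dimensional regime studied in the paper, such per-step singular values and alignments concentrate around deterministic limits, which is what permits consistent estimation of the $\beta_i$ and $\langle A_i, A_j \rangle$.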
Related papers
- Overcomplete Tensor Decomposition via Koszul-Young Flattenings [63.01248796170617]
We give a new algorithm for decomposing an $n_1 \times n_2 \times n_3$ tensor as the sum of a minimal number of rank-1 terms.
We show that an even more general class of degree-$d$ flattenings cannot surpass rank $Cn$ for a constant $C = C(d)$.
arXiv Detail & Related papers (2024-11-21T17:41:09Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that tensor PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Correlated volumes for extended wavefunctions on a random-regular graph [0.0]
We analyze the ergodic properties of a metallic wavefunction for the Anderson model on a disordered random-regular graph with branching number $k=2$.
We extract their corresponding fractal dimensions $D_q$ in the thermodynamic limit together with correlated volumes $N_q$ that control finite-size effects.
arXiv Detail & Related papers (2023-11-13T19:15:18Z) - On the Accuracy of Hotelling-Type Asymmetric Tensor Deflation: A Random
Tensor Analysis [14.809070996761013]
Hotelling-type tensor deflation is studied in the presence of noise.
We analytically characterize the estimated singular values and the alignment of estimated and true singular vectors at each step of the deflation procedure.
arXiv Detail & Related papers (2023-10-28T14:40:10Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials of the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor
Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
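For context, a tensor-train decomposition represents an order-$d$ tensor by a chain of order-3 cores. The sketch below is the classical TT-SVD baseline with a fixed rank cap, not the paper's bicriteria algorithm (whose details are not reproduced here); the tensor shape and rank cap are arbitrary illustrative choices.

```python
import numpy as np

def tt_svd(T, max_rank):
    """Classical TT-SVD: factor an order-d tensor into cores G_k of shape
    (r_{k-1}, n_k, r_k) via successive truncated SVDs of unfoldings."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

# Reconstruct by chaining the cores and report the approximation error.
T = np.random.default_rng(2).standard_normal((6, 7, 8))
cores = tt_svd(T, max_rank=4)
R = cores[0]
for G in cores[1:]:
    R = np.tensordot(R, G, axes=(R.ndim - 1, 0))  # contract adjacent TT ranks
R = R.reshape(T.shape)
print(f"relative error at TT-rank 4: {np.linalg.norm(R - T) / np.linalg.norm(T):.3f}")
```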
arXiv Detail & Related papers (2022-07-15T11:55:09Z) - When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
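As a toy illustration of how a spiked tensor problem can reduce to a spiked matrix problem (via a simple mode contraction, not the paper's block-wise construction), contracting one mode of a rank-one spiked order-3 tensor against a unit vector yields a rank-one spiked random matrix whose top singular pair recovers the signal:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 100, 3.0
x, y, z = (v / np.linalg.norm(v) for v in rng.standard_normal((3, n)))

# Spiked order-3 tensor T = beta * x (outer) y (outer) z + W / sqrt(n);
# the noise normalization here is an illustrative assumption.
T = beta * np.einsum('i,j,k->ijk', x, y, z) \
    + rng.standard_normal((n, n, n)) / np.sqrt(n)

# Contracting the third mode with z gives M = beta * x y^T + Gaussian noise,
# a rank-one spiked matrix; its top singular pair estimates (x, y).
M = np.einsum('ijk,k->ij', T, z)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
print(f"top singular value {s[0]:.2f} (beta = {beta})")
print(f"|<u1, x>| = {abs(U[:, 0] @ x):.3f}, |<v1, y>| = {abs(Vt[0] @ y):.3f}")
```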
arXiv Detail & Related papers (2021-12-23T04:05:01Z) - Exact nuclear norm, completion and decomposition for random overcomplete
tensors via degree-4 SOS [0.7233897166339269]
We show that simple semidefinite programs inspired by degree-$4$ SOS can exactly solve the tensor nuclear norm, tensor decomposition, and tensor completion problems on tensors with random asymmetric components.
We show that w.h.p. these semidefinite programs can exactly find the nuclear norm and components of an $(n \times n \times n)$-tensor $\mathcal{T}$ with $m \leq n^{3/2}/\mathrm{polylog}(n)$ random asymmetric components.
arXiv Detail & Related papers (2020-11-18T17:27:36Z) - Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and utilize certain low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z) - Tensor Estimation with Nearly Linear Samples Given Weak Side Information [5.69361786082969]
We show that weak side information is sufficient to reduce the sample complexity to $O(n)$.
We provide an algorithm that utilizes this side information to produce a consistent estimator with $O(n^{1+\kappa})$ samples for any small constant $\kappa > 0$.
arXiv Detail & Related papers (2020-07-01T20:28:22Z)