On the Accuracy of Hotelling-Type Asymmetric Tensor Deflation: A Random
Tensor Analysis
- URL: http://arxiv.org/abs/2310.18717v1
- Date: Sat, 28 Oct 2023 14:40:10 GMT
- Title: On the Accuracy of Hotelling-Type Asymmetric Tensor Deflation: A Random
Tensor Analysis
- Authors: Mohamed El Amine Seddik, Maxime Guillaud, Alexis Decurninge, Jos\'e
Henrique de Morais Goulart
- Abstract summary: Hotelling-type tensor deflation is studied in the presence of noise.
We analytically characterize the estimated singular values and the alignment of estimated and true singular vectors at each step of the deflation procedure.
- Score: 14.809070996761013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces an asymptotic study of Hotelling-type tensor deflation
in the presence of noise, in the regime of large tensor dimensions.
Specifically, we consider a low-rank asymmetric tensor model of the form
$\sum_{i=1}^r \beta_i{\mathcal{A}}_i + {\mathcal{W}}$ where $\beta_i\geq 0$ and
the ${\mathcal{A}}_i$'s are unit-norm rank-one tensors such that $\left|
\langle {\mathcal{A}}_i, {\mathcal{A}}_j \rangle \right| \in [0, 1]$ for $i\neq
j$ and ${\mathcal{W}}$ is an additive noise term. Assuming that the dominant
components are successively estimated from the noisy observation and
subsequently subtracted, we leverage recent advances in random tensor theory in
the regime of asymptotically large tensor dimensions to analytically
characterize the estimated singular values and the alignment of estimated and
true singular vectors at each step of the deflation procedure. Furthermore,
this result can be used to construct estimators of the signal-to-noise ratios
$\beta_i$ and the alignments between the estimated and true rank-1 signal
components.
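The deflation procedure studied in the abstract can be sketched in code: at each step, estimate the dominant rank-one component of the current residual and subtract it. The sketch below uses a plain alternating power iteration with random restarts, a standard heuristic rather than the paper's analytical machinery; all function names and defaults are illustrative.

```python
import numpy as np

def rank1_power_iteration(T, n_iter=100, n_restarts=5, seed=0):
    """Estimate the dominant rank-1 term beta * (u x v x w) of an order-3
    tensor T by alternating power iteration with random restarts.
    (Illustrative helper, not the paper's estimator.)"""
    rng = np.random.default_rng(seed)
    best = (-np.inf, None, None, None)
    for _ in range(n_restarts):
        u, v, w = (x / np.linalg.norm(x)
                   for x in (rng.standard_normal(n) for n in T.shape))
        for _ in range(n_iter):
            u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
            v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
            w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
        beta = np.einsum('ijk,i,j,k->', T, u, v, w)  # estimated singular value
        if beta > best[0]:
            best = (beta, u, v, w)
    return best

def hotelling_deflation(T, r):
    """Successively estimate the dominant rank-1 component and subtract it."""
    estimates, residual = [], T.copy()
    for _ in range(r):
        beta, u, v, w = rank1_power_iteration(residual)
        estimates.append((beta, u, v, w))
        residual = residual - beta * np.einsum('i,j,k->ijk', u, v, w)
    return estimates, residual
```

The paper's asymptotic results quantify how the estimated singular values $\beta$ and the alignments between estimated and true singular vectors behave at each such deflation step in the large-dimensional regime.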
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - On the Accuracy of Hotelling-Type Tensor Deflation: A Random Tensor
Analysis [16.28927188636617]
A rank-$r$ asymmetric spiked model of the form $\sum_{i=1}^r \beta_i A_i + W$ is considered.
We provide a study of Hotelling-type deflation in the large dimensional regime.
arXiv Detail & Related papers (2022-11-16T16:01:56Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials of the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor
Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1+\varepsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
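For context, the classical TT-SVD algorithm computes a tensor-train decomposition by sequential truncated SVDs of successive unfoldings; the paper's contribution is a much faster bicriteria variant, which is not reproduced here. A minimal TT-SVD sketch (function names are mine):

```python
import numpy as np

def tt_svd(T, ranks):
    """Tensor-train decomposition via sequential truncated SVDs
    (plain TT-SVD baseline, not the paper's bicriteria algorithm).
    ranks has length ndim-1; returns one core per mode."""
    dims = T.shape
    cores, r_prev = [], 1
    unfolding = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(ranks[k], len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # carry the remainder S * Vt into the next unfolding
        unfolding = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])  # drop the boundary rank-1 dims
```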
arXiv Detail & Related papers (2022-07-15T11:55:09Z) - When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z) - Landscape analysis of an improved power method for tensor decomposition [2.363388546004777]
We consider the optimization formulation for symmetric tensor decomposition recently introduced in the Subspace Power Method.
We obtain a near-global guarantee up to rank $\tilde{o}(D^{\lfloor m/2 \rfloor})$, and a global guarantee for tensors with random rank-one components.
arXiv Detail & Related papers (2021-10-29T14:32:54Z) - Asymptotic Tensor Powers of Banach Spaces [77.34726150561087]
We show that Euclidean spaces are characterized by the property that their tensor radius equals their dimension.
We also show that the tensor radius of an operator whose domain or range is Euclidean is equal to its nuclear norm.
arXiv Detail & Related papers (2021-10-25T11:51:12Z) - Sparse sketches with small inversion bias [79.77110958547695]
Inversion bias arises when averaging estimates of quantities that depend on the inverse covariance.
We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices.
We show that when the sketching matrix $S$ is dense and has i.i.d. sub-gaussian entries, the estimator is $(\epsilon,\delta)$-unbiased for $(A^\top A)^{-1}$ with a sketch of size $m = O(d + \sqrt{d}/\dots$
arXiv Detail & Related papers (2020-11-21T01:33:15Z) - Exact nuclear norm, completion and decomposition for random overcomplete
tensors via degree-4 SOS [0.7233897166339269]
We show that simple semidefinite programs inspired by degree $4$ SOS can exactly solve the tensor nuclear norm, tensor decomposition, and tensor completion problems on tensors with random asymmetric components.
We show that w.h.p. these semidefinite programs can exactly find the nuclear norm and components of an $(n\times n\times n)$-tensor $\mathcal{T}$ with $m \leq n^{3/2}/\mathrm{polylog}(n)$ random asymmetric components.
arXiv Detail & Related papers (2020-11-18T17:27:36Z) - A Sharp Blockwise Tensor Perturbation Bound for Orthogonal Iteration [23.308822706415867]
We establish blockwise tensor perturbation bounds for the high-order orthogonal iteration (HOOI)
We show the upper bounds of mode-$k$ singular subspace estimation are unilateral and converge linearly to a quantity characterized by blockwise errors of the perturbation and signal strength.
We prove that one-step HOOI with only a single iteration is also optimal in terms of tensor reconstruction and can be used to lower the computational cost.
arXiv Detail & Related papers (2020-08-06T03:01:28Z)
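The HOOI procedure analyzed in the last entry alternates truncated SVDs of mode-wise projections starting from an HOSVD initialization; with a single sweep this is the one-step variant the summary mentions. A minimal order-3 sketch (illustrative code, not the paper's):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_init(T, ranks):
    """HOSVD: leading left singular subspace of each unfolding."""
    return [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
            for k, r in enumerate(ranks)]

def hooi3(T, ranks, n_iter=1):
    """Higher-order orthogonal iteration for an order-3 tensor.
    n_iter=1 corresponds to the one-step variant."""
    U1, U2, U3 = hosvd_init(T, ranks)
    for _ in range(n_iter):
        # update each factor from the tensor projected along the other modes
        M = np.einsum('ijk,jb,kc->ibc', T, U2, U3).reshape(T.shape[0], -1)
        U1 = np.linalg.svd(M, full_matrices=False)[0][:, :ranks[0]]
        M = np.einsum('ijk,ia,kc->jac', T, U1, U3).reshape(T.shape[1], -1)
        U2 = np.linalg.svd(M, full_matrices=False)[0][:, :ranks[1]]
        M = np.einsum('ijk,ia,jb->kab', T, U1, U2).reshape(T.shape[2], -1)
        U3 = np.linalg.svd(M, full_matrices=False)[0][:, :ranks[2]]
    core = np.einsum('ijk,ia,jb,kc->abc', T, U1, U2, U3)
    return core, (U1, U2, U3)
```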
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.