Landscape analysis of an improved power method for tensor decomposition
- URL: http://arxiv.org/abs/2110.15821v1
- Date: Fri, 29 Oct 2021 14:32:54 GMT
- Title: Landscape analysis of an improved power method for tensor decomposition
- Authors: Joe Kileel, Timo Klock, João M. Pereira
- Abstract summary: We consider the optimization formulation for symmetric tensor decomposition recently introduced in the Subspace Power Method.
We obtain a near-global guarantee up to rank $\widetilde{o}(D^{\lfloor m/2 \rfloor})$ under a random tensor model, and a global guarantee up to rank $\mathcal{O}(D)$ under deterministic frame conditions.
- Score: 2.363388546004777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we consider the optimization formulation for symmetric tensor
decomposition recently introduced in the Subspace Power Method (SPM) of Kileel
and Pereira. Unlike popular alternative functionals for tensor decomposition,
the SPM objective function has the desirable properties that its maximal value
is known in advance, and its global optima are exactly the rank-1 components of
the tensor when the input is sufficiently low-rank. We analyze the non-convex
optimization landscape associated with the SPM objective. Our analysis accounts
for working with noisy tensors. We derive quantitative bounds such that any
second-order critical point with SPM objective value exceeding the bound must
equal a tensor component in the noiseless case, and must approximate a tensor
component in the noisy case. For decomposing tensors of size $D^{\times m}$, we
obtain a near-global guarantee up to rank $\widetilde{o}(D^{\lfloor m/2
\rfloor})$ under a random tensor model, and a global guarantee up to rank
$\mathcal{O}(D)$ assuming deterministic frame conditions. This implies that SPM
with suitable initialization is a provable, efficient, robust algorithm for
low-rank symmetric tensor decomposition. We conclude with numerics that show a
practical preferability for using the SPM functional over a more established
counterpart.
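The abstract does not reproduce the SPM functional itself, but the algorithm it analyzes builds on the classical symmetric tensor power iteration, which repeatedly contracts the tensor against the current iterate and renormalizes. A minimal sketch of that baseline iteration on a synthetic order-3 tensor is below; the function names and the small test tensor are illustrative, not part of the paper.

```python
import numpy as np

def sym_rank1(v):
    """Order-3 symmetric rank-1 tensor v (x) v (x) v."""
    return np.einsum('i,j,k->ijk', v, v, v)

def tensor_power_iteration(T, x0=None, n_iter=50, seed=0):
    """Classical symmetric tensor power iteration:
    repeat x <- T(I, x, x) / ||T(I, x, x)||."""
    rng = np.random.default_rng(seed)
    D = T.shape[0]
    x = rng.standard_normal(D) if x0 is None else np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = np.einsum('ijk,j,k->i', T, x, x)    # contract T against x twice
        nrm = np.linalg.norm(y)
        if nrm < 1e-14:                          # degenerate direction; stop
            break
        x = y / nrm
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)   # recovered component weight
    return lam, x

# Synthetic low-rank symmetric tensor with two orthogonal components.
v1, v2 = np.eye(5)[0], np.eye(5)[1]
T = 3.0 * sym_rank1(v1) + 1.0 * sym_rank1(v2)
lam, x = tensor_power_iteration(T, x0=v1 + 0.3 * v2)
# With this initialization the iteration converges to the dominant
# component: lam close to 3 and x close to +/- v1.
```

For well-separated orthogonal components the ratio of the off-component mass contracts quadratically per iteration, which is why a few dozen iterations suffice here; the landscape results in the paper concern a different (SPM) objective whose second-order critical points are characterized globally.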
Related papers
- A Novel Tensor Factorization-Based Method with Robustness to Inaccurate
Rank Estimation [9.058215418134209]
We propose a new tensor norm with a dual low-rank constraint, which utilizes the low-rank prior and rank information at the same time.
It is proven theoretically that the resulting tensor completion model can effectively avoid performance degradation caused by inaccurate rank estimation.
Based on this, the total cost at each iteration of the optimization algorithm is reduced to $\mathcal{O}(n^3 \log n + kn^3)$ from the $\mathcal{O}(n^4)$ achieved with standard methods.
arXiv Detail & Related papers (2023-05-19T06:26:18Z) - Tensor Recovery Based on Tensor Equivalent Minimax-Concave Penalty [3.0711362702464675]
Tensor recovery is an important problem in computer vision and machine learning.
We propose two adaptive models for two tensor recovery problems.
Experiments show the proposed methods are superior to the state of the art.
arXiv Detail & Related papers (2022-01-30T03:28:01Z) - When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z) - Robust M-estimation-based Tensor Ring Completion: a Half-quadratic
Minimization Approach [14.048989759890475]
We develop a robust approach to tensor ring completion that uses an M-estimator as its error statistic.
We present two HQ-based algorithms based on truncated singular value decomposition and matrix factorization.
arXiv Detail & Related papers (2021-06-19T04:37:50Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on over-parametrized objective could go beyond the lazy training regime and utilize certain low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z) - Robust Tensor Principal Component Analysis: Exact Recovery via
Deterministic Model [5.414544833902815]
This paper proposes a new method to analyze robust tensor principal component analysis (RTPCA).
It is based on the recently developed tensor-tensor product and tensor singular value decomposition (t-SVD).
arXiv Detail & Related papers (2020-08-05T16:26:10Z) - Uncertainty quantification for nonconvex tensor completion: Confidence
intervals, heteroscedasticity and optimality [92.35257908210316]
We study the problem of estimating a low-rank tensor given incomplete and corrupted observations.
We find that it attains unimprovable rates of $\ell_2$ accuracy.
arXiv Detail & Related papers (2020-06-15T17:47:13Z) - Tensor completion via nonconvex tensor ring rank minimization with
guaranteed convergence [16.11872681638052]
In recent studies, the tensor ring (TR) rank has shown high effectiveness in tensor completion.
A recently proposed TR rank captures the structure within a weighted sum that penalizes all singular values equally.
In this paper, we propose to use a logdet-based function as a nonconvex relaxation instead.
arXiv Detail & Related papers (2020-05-14T03:13:17Z) - Spectral density estimation with the Gaussian Integral Transform [91.3755431537592]
The spectral density operator $\hat\rho(\omega) = \delta(\omega - \hat H)$ plays a central role in linear response theory.
We describe a near optimal quantum algorithm providing an approximation to the spectral density.
arXiv Detail & Related papers (2020-04-10T03:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.