Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements
- URL: http://arxiv.org/abs/2104.14526v1
- Date: Thu, 29 Apr 2021 17:44:49 GMT
- Title: Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements
- Authors: Tian Tong, Cong Ma, Ashley Prater-Bennette, Erin Tripp, Yuejie Chi
- Abstract summary: A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
- Score: 30.395874385570007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensors, which provide a powerful and flexible model for representing
multi-attribute data and multi-way interactions, play an indispensable role in
modern data science across various fields in science and engineering. A
fundamental task is to faithfully recover the tensor from highly incomplete
measurements in a statistically and computationally efficient manner.
Harnessing the low-rank structure of tensors in the Tucker decomposition, this
paper develops a scaled gradient descent (ScaledGD) algorithm to directly
recover the tensor factors with tailored spectral initializations, and shows
that it provably converges at a linear rate independent of the condition number
of the ground truth tensor for two canonical problems -- tensor completion and
tensor regression -- as soon as the sample size is above the order of $n^{3/2}$
ignoring other dependencies, where $n$ is the dimension of the tensor. This
leads to an extremely scalable approach to low-rank tensor estimation compared
with prior art, which suffers from at least one of the following drawbacks:
extreme sensitivity to ill-conditioning, high per-iteration costs in terms of
memory and computation, or poor sample complexity guarantees. To the best of
our knowledge, ScaledGD is the first algorithm that achieves near-optimal
statistical and computational complexities simultaneously for low-rank tensor
completion with the Tucker decomposition. Our algorithm highlights the power of
appropriate preconditioning in accelerating nonconvex statistical estimation,
where the iteration-varying preconditioners promote desirable invariance
properties of the trajectory with respect to the underlying symmetry in
low-rank tensor factorization.
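To make the role of the preconditioner concrete, below is a minimal NumPy sketch of one ScaledGD iteration for a third-order Tucker factorization. It is written against a toy fully observed least-squares loss; the paper itself treats tensor completion and regression, where only the loss and its gradients differ, and the function names and step size here are illustrative assumptions.
```python
import numpy as np

def tucker_to_full(S, U1, U2, U3):
    # Contract a core S of shape (r1, r2, r3) with factors Uk of shape (nk, rk).
    return np.einsum('abc,ia,jb,kc->ijk', S, U1, U2, U3)

def scaledgd_step(S, U1, U2, U3, Y, eta=0.3):
    """One ScaledGD iteration on the toy loss f = 0.5*||T(S,U1,U2,U3) - Y||_F^2."""
    E = tucker_to_full(S, U1, U2, U3) - Y          # residual tensor

    # Euclidean gradients with respect to each factor and the core.
    G1 = np.einsum('ijk,abc,jb,kc->ia', E, S, U2, U3)
    G2 = np.einsum('ijk,abc,ia,kc->jb', E, S, U1, U3)
    G3 = np.einsum('ijk,abc,ia,jb->kc', E, S, U1, U2)
    GS = np.einsum('ijk,ia,jb,kc->abc', E, U1, U2, U3)

    # Gram matrices of the factors.
    Q1, Q2, Q3 = U1.T @ U1, U2.T @ U2, U3.T @ U3

    # Preconditioners: with unfold_k(T) = Uk @ Vk.T, the scaling is
    # (Vk.T @ Vk)^{-1}; e.g. V1.T @ V1 = S_(1) (Q3 kron Q2) S_(1).T.
    P1 = np.einsum('abc,bd,ce,fde->af', S, Q2, Q3, S)
    P2 = np.einsum('abc,ad,ce,dfe->bf', S, Q1, Q3, S)
    P3 = np.einsum('abc,ad,be,def->cf', S, Q1, Q2, S)

    # Scaled updates: right-multiplying by the inverse preconditioner makes
    # the step size insensitive to the conditioning of the ground truth.
    U1n = U1 - eta * G1 @ np.linalg.inv(P1)
    U2n = U2 - eta * G2 @ np.linalg.inv(P2)
    U3n = U3 - eta * G3 @ np.linalg.inv(P3)
    Sn = S - eta * np.einsum('abc,ad,be,cf->def', GS,
                             np.linalg.inv(Q1), np.linalg.inv(Q2),
                             np.linalg.inv(Q3))
    return Sn, U1n, U2n, U3n
```
All updates use the factors from the start of the iteration on both sides, as in simultaneous scaled gradient schemes; in the completion and regression settings of the paper, the residual E would be replaced by its sampled or sensed counterpart.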
Related papers
- Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition [27.29463801531576]
We study the theoretical and algorithmic aspects of the TT-based ToT regression model.
We propose two algorithms, iterative hard thresholding (IHT) and Riemannian gradient descent (RGD), to efficiently find solutions with guaranteed error bounds; a generic IHT sketch appears after this list.
We establish the linear convergence rate of both IHT and RGD.
arXiv Detail & Related papers (2024-06-10T03:51:38Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, tensor cumulants, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Low-Tubal-Rank Tensor Recovery via Factorized Gradient Descent [22.801592340422157]
We propose an efficient and effective low-tubal-rank tensor recovery method based on a factorization procedure akin to the Burer-Monteiro method.
We provide rigorous theoretical analysis to ensure the convergence of factorized gradient descent (FGD) under both noise-free and noisy situations.
Our approach exhibits superior performance in multiple scenarios, in terms of faster computational speed and smaller convergence error.
arXiv Detail & Related papers (2024-01-22T13:30:11Z)
- Provably Accelerating Ill-Conditioned Low-rank Estimation via Scaled Gradient Descent, Even with Overparameterization [48.65416821017865]
This chapter introduces a new algorithmic approach, dubbed scaled gradient descent (ScaledGD).
It converges linearly at a constant rate independent of the condition number of the low-rank object.
It maintains the low per-iteration cost of gradient descent for a variety of tasks; a minimal matrix-case sketch appears after this list.
arXiv Detail & Related papers (2023-10-09T21:16:57Z)
- A Novel Tensor Factorization-Based Method with Robustness to Inaccurate Rank Estimation [9.058215418134209]
We propose a new tensor norm with a dual low-rank constraint, which utilizes the low-rank prior and rank information at the same time.
It is proven theoretically that the resulting tensor completion model can effectively avoid performance degradation caused by inaccurate rank estimation.
Based on this, the total cost at each iteration of the optimization algorithm is reduced to $\mathcal{O}(n^3 \log n + kn^3)$ from the $\mathcal{O}(n^4)$ achieved with standard methods.
arXiv Detail & Related papers (2023-05-19T06:26:18Z)
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases defined according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- MTC: Multiresolution Tensor Completion from Partial and Coarse Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Tensor Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z)
- Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion [3.498620439731324]
We introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion.
Our model possesses a sparse regularization term to promote a sparse core tensor, which is beneficial for tensor data compression.
Notably, our model is able to deal with different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties that appear in tensors.
arXiv Detail & Related papers (2020-10-01T12:45:39Z)
- Uncertainty quantification for nonconvex tensor completion: Confidence intervals, heteroscedasticity and optimality [92.35257908210316]
We study the problem of estimating a low-rank tensor given incomplete and corrupted observations.
We find that it attains unimprovable $\ell_2$ estimation accuracy.
arXiv Detail & Related papers (2020-06-15T17:47:13Z)
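For the tensor-train regression entry above: iterative hard thresholding alternates a gradient step with a hard projection back onto the low-rank set (TT-SVD in the TT setting). As a hedged stand-in, here is a matrix-case sketch where the projection is a truncated SVD; the sensing model, rank, and step size are illustrative assumptions, not that paper's setup.
```python
import numpy as np

def iht_step(X, grad_f, r, eta):
    # Gradient step followed by hard projection onto the rank-r set via
    # truncated SVD (the TT-based variant would project with TT-SVD instead).
    Z = X - eta * grad_f(X)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]          # best rank-r approximation

# Toy usage: rank-r matrix regression with Gaussian sensing (illustrative).
rng = np.random.default_rng(0)
n, r, m = 30, 2, 600
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((m, n, n)) / np.sqrt(m)  # near-isometric sensing
y = np.einsum('mij,ij->m', A, X_true)
grad = lambda X: np.einsum('m,mij->ij', np.einsum('mij,ij->m', A, X) - y, A)
X = np.zeros((n, n))
for _ in range(100):
    X = iht_step(X, grad, r, eta=1.0)
```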
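And for the matrix-case ScaledGD entry: the update is compact enough to state directly. This is a minimal sketch on a fully observed toy loss; names and step size are again illustrative.
```python
import numpy as np

def scaledgd_matrix_step(L, R, Y, eta=0.5):
    # One ScaledGD step for f(L, R) = 0.5*||L @ R.T - Y||_F^2: each factor's
    # gradient is right-preconditioned by the inverse Gram matrix of the
    # other factor, computed from the iterates at the start of the step.
    E = L @ R.T - Y
    L_new = L - eta * (E @ R) @ np.linalg.inv(R.T @ R)
    R_new = R - eta * (E.T @ L) @ np.linalg.inv(L.T @ L)
    return L_new, R_new
```
Under the reparameterization L -> L @ Q, R -> R @ inv(Q).T, the product L @ R.T is unchanged and the scaled iterates transform consistently; this is the invariance with respect to the factorization symmetry that the main abstract highlights.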