Low-Tubal-Rank Tensor Recovery via Factorized Gradient Descent
- URL: http://arxiv.org/abs/2401.11940v2
- Date: Sat, 3 Feb 2024 02:47:42 GMT
- Title: Low-Tubal-Rank Tensor Recovery via Factorized Gradient Descent
- Authors: Zhiyu Liu, Zhi Han, Yandong Tang, Xi-Le Zhao, Yao Wang
- Abstract summary: We propose an efficient and effective low-tubal-rank tensor recovery method based on a factorization procedure akin to the Burer-Monteiro method.
We provide a rigorous theoretical analysis guaranteeing the convergence of FGD in both noise-free and noisy settings.
Our approach exhibits superior performance in multiple scenarios, in terms of faster computational speed and smaller convergence error.
- Score: 22.801592340422157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers the problem of recovering a tensor with an underlying
low-tubal-rank structure from a small number of corrupted linear measurements.
Traditional approaches tackling such a problem require the computation of the
tensor Singular Value Decomposition (t-SVD), which is a computationally
intensive process, rendering them impractical for dealing with large-scale
tensors. Aiming to address this challenge, we propose an efficient and effective
low-tubal-rank tensor recovery method based on a factorization procedure akin
to the Burer-Monteiro (BM) method. Specifically, our approach involves
decomposing a large tensor into two smaller factor tensors, followed by solving
the problem through factorized gradient descent (FGD). This strategy eliminates
the need for t-SVD computation, thereby reducing computational costs and
storage requirements. We provide a rigorous theoretical analysis guaranteeing
the convergence of FGD in both noise-free and noisy settings. Additionally, it
is worth noting that our method does not require the precise estimation of the
tensor tubal-rank. Even in cases where the tubal-rank is slightly
overestimated, our approach continues to demonstrate robust performance. A
series of experiments has been carried out to demonstrate that, compared with
other popular methods, our approach exhibits superior performance in multiple
scenarios, in terms of faster computational speed and smaller convergence
error.
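To make the factorization strategy concrete, below is a minimal sketch of FGD for low-tubal-rank recovery. It assumes tensor completion (entrywise sampling) as the linear measurement operator, a plain random initialization, and a heuristic step size; the paper's actual measurement model, initialization, and step-size rules may differ, and all names below are illustrative.

```python
import numpy as np

def t_product(A, B):
    # t-product: matrix products between corresponding frontal slices,
    # carried out in the Fourier domain along the third (tubal) dimension
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def t_transpose(A):
    # tensor transpose under the t-product: transpose each frontal slice
    # and reverse the order of slices 2..n3
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, -1:0:-1]], axis=2)

rng = np.random.default_rng(0)
n1, n2, n3, r = 30, 30, 10, 3                  # sizes and tubal-rank

# ground-truth low-tubal-rank tensor and a random entrywise sampling mask
X_true = t_product(rng.standard_normal((n1, r, n3)),
                   rng.standard_normal((r, n2, n3)))
mask = rng.random((n1, n2, n3)) < 0.5          # observe ~50% of entries
Y = mask * X_true                              # noise-free measurements

# FGD on f(L, R) = 0.5 * ||P_Omega(L * R) - Y||_F^2, with no t-SVD needed
L = 0.1 * rng.standard_normal((n1, r, n3))
R = 0.1 * rng.standard_normal((r, n2, n3))
step = 1e-2                                    # heuristic; may need tuning
for _ in range(2000):
    residual = mask * (t_product(L, R) - Y)
    grad_L = t_product(residual, t_transpose(R))
    grad_R = t_product(t_transpose(L), residual)
    L, R = L - step * grad_L, R - step * grad_R

err = np.linalg.norm(t_product(L, R) - X_true) / np.linalg.norm(X_true)
print(f"relative recovery error: {err:.3e}")
```

Note that each iteration only needs FFTs and slice-wise products with the small factors, which is the source of the computational and storage savings over t-SVD-based methods.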
Related papers
- Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition [27.29463801531576]
We study the theoretical and algorithmic aspects of the TT-based ToT regression model.
We propose two algorithms, iterative hard thresholding (IHT) and Riemannian gradient descent (RGD), to efficiently find solutions with guaranteed error bounds.
We establish the linear convergence rate of both IHT and RGD.
arXiv Detail & Related papers (2024-06-10T03:51:38Z) - Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - A Novel Tensor Factorization-Based Method with Robustness to Inaccurate
Rank Estimation [9.058215418134209]
We propose a new tensor norm with a dual low-rank constraint, which utilizes the low-rank prior and rank information at the same time.
It is proven theoretically that the resulting tensor completion model can effectively avoid performance degradation caused by inaccurate rank estimation.
Based on this, the total cost at each iteration of the optimization algorithm is reduced to $\mathcal{O}(n^3 \log n + kn^3)$ from the $\mathcal{O}(n^4)$ achieved with standard methods.
arXiv Detail & Related papers (2023-05-19T06:26:18Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - Fast and Provable Tensor Robust Principal Component Analysis via Scaled
Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA).
It aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
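For intuition, here is a minimal matrix-case sketch of the scaled-gradient idea (the paper works with tensors and additionally handles sparse corruptions): each factor's gradient is preconditioned by the inverse Gram matrix of the other factor. The setup, initialization, and step size below are illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target

L, R = rng.standard_normal((n, r)), rng.standard_normal((n, r))
eta = 0.7                                    # illustrative step size
for _ in range(300):
    E = L @ R.T - X                          # residual
    # scaled (preconditioned) gradient steps: the inverse Gram factors
    # make the local convergence rate insensitive to X's conditioning
    L_next = L - eta * E @ R @ np.linalg.inv(R.T @ R)
    R = R - eta * E.T @ L @ np.linalg.inv(L.T @ L)
    L = L_next

print(np.linalg.norm(L @ R.T - X) / np.linalg.norm(X))  # relative error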
arXiv Detail & Related papers (2022-06-18T04:01:32Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases defined according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Tensor Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z) - Robust Low-tubal-rank Tensor Completion based on Tensor Factorization
and Maximum Correntropy Criterion [12.02023514105999]
We propose a new objective function for low-tubal-rank tensor completion, which uses correntropy as the error measure to mitigate the effect of the outliers.
Numerical results using both synthetic and real data demonstrate the robust and superior performance of the proposed algorithms.
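As a point of reference, one common form of the correntropy-induced (Welsch) loss is sketched below; the paper's exact kernel width and optimization scheme may differ. Unlike the squared loss, it saturates for large residuals, so outliers receive a bounded penalty.

```python
import numpy as np

def correntropy_loss(residual, sigma=1.0):
    # Welsch / correntropy-induced loss: ~0.5*e^2 for small residuals,
    # but saturates near sigma^2 for large ones, capping outlier influence
    return sigma**2 * (1.0 - np.exp(-residual**2 / (2.0 * sigma**2)))

e = np.array([0.1, 1.0, 10.0, 100.0])
print(correntropy_loss(e))   # large residuals all cost about sigma^2
print(0.5 * e**2)            # squared loss grows without bound
```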
arXiv Detail & Related papers (2020-10-22T13:56:55Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.