Provable Tensor-Train Format Tensor Completion by Riemannian
Optimization
- URL: http://arxiv.org/abs/2108.12163v1
- Date: Fri, 27 Aug 2021 08:13:58 GMT
- Title: Provable Tensor-Train Format Tensor Completion by Riemannian
Optimization
- Authors: Jian-Feng Cai, Jingyang Li, Dong Xia
- Abstract summary: We provide the first theoretical guarantees of the convergence of RGrad algorithm for TT-format tensor completion.
We also propose a novel approach, referred to as the sequential second-order moment method.
- Score: 22.166436026482984
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The tensor train (TT) format enjoys appealing advantages in handling
structural high-order tensors. The recent decade has witnessed the wide
applications of TT-format tensors from diverse disciplines, among which tensor
completion has drawn considerable attention. Numerous fast algorithms,
including the Riemannian gradient descent (RGrad) algorithm, have been proposed
for the TT-format tensor completion. However, the theoretical guarantees of
these algorithms are largely missing or sub-optimal, partly due to the
complicated and recursive algebraic operations in TT-format decomposition.
Moreover, existing results established for the tensors of other formats, for
example, Tucker and CP, are inapplicable because the algorithms treating
TT-format tensors are substantially different and more involved. In this paper,
we provide, to our best knowledge, the first theoretical guarantees of the
convergence of RGrad algorithm for TT-format tensor completion, under a nearly
optimal sample size condition. The RGrad algorithm converges linearly with a
constant contraction rate that is free of tensor condition number without the
necessity of re-conditioning. We also propose a novel approach, referred to as
the sequential second-order moment method, to attain a warm initialization
under a similar sample size requirement. As a byproduct, our result even
significantly refines the prior investigation of RGrad algorithm for matrix
completion. Numerical experiments confirm our theoretical discovery and
showcase the computational speedup gained by the TT-format decomposition.
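The matrix-completion case that the abstract mentions as a byproduct gives the simplest picture of how RGrad works. The sketch below is a minimal illustrative analogue for rank-r matrix completion, not the paper's TT-format algorithm: it takes a Riemannian gradient step (Euclidean gradient projected onto the tangent space of the fixed-rank manifold) followed by a truncated-SVD retraction. The spectral initialization and the unit step size are simplifying assumptions for illustration.

```python
import numpy as np

def rgrad_matrix_completion(M_obs, mask, r, steps=500, eta=1.0):
    """Illustrative RGrad loop for rank-r matrix completion.

    M_obs: observed matrix with unobserved entries zeroed out.
    mask:  0/1 matrix marking the observed entries (the set Omega).
    """
    # Simple spectral initialization: truncated SVD of the zero-filled data.
    U, s, Vt = np.linalg.svd(M_obs * mask, full_matrices=False)
    X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
    for _ in range(steps):
        # Euclidean gradient of 0.5 * ||P_Omega(X - M)||_F^2.
        G = mask * (X - M_obs)
        # Project the gradient onto the tangent space of the rank-r
        # manifold at X: P_T(G) = P_U G + G P_V - P_U G P_V.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        U, Vt = U[:, :r], Vt[:r]
        PU, PV = U @ U.T, Vt.T @ Vt
        Gt = PU @ G + G @ PV - PU @ G @ PV
        # Gradient step, then retract back onto the manifold by truncated SVD.
        X = X - eta * Gt
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
    return X
```

On a well-conditioned low-rank matrix with a sufficient sampling rate, this loop exhibits the linear convergence the abstract describes; the TT-format version replaces the SVD retraction and tangent-space projection with their (substantially more involved) tensor-train counterparts.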
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed approach avoids the need for computationally expensive spectral initialization, using a simple gradient step while handling outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition [27.29463801531576]
We study the theoretical and algorithmic aspects of the TT-based ToT regression model.
We propose two algorithms that efficiently find solutions satisfying constrained error bounds.
We establish the linear convergence rate of both IHT and RGD.
arXiv Detail & Related papers (2024-06-10T03:51:38Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously tolerate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM) with a data-imputation step.
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Robust M-estimation-based Tensor Ring Completion: a Half-quadratic Minimization Approach [14.048989759890475]
We develop a robust approach to tensor ring completion that uses an M-estimator as its error statistic.
We present two HQ-based algorithms based on truncated singular value decomposition and matrix factorization.
arXiv Detail & Related papers (2021-06-19T04:37:50Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- New Riemannian preconditioned algorithms for tensor completion via polyadic decomposition [10.620193291237262]
These algorithms exploit a non-Euclidean metric on the product space of the factor matrices of the low-rank tensor in the polyadic decomposition form.
We prove that the proposed gradient descent algorithm globally converges to a stationary point of the tensor completion problem.
Numerical results on synthetic and real-world data suggest that the proposed algorithms are more efficient in memory and time compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2021-01-26T22:11:06Z)
- Optimal High-order Tensor SVD via Tensor-Train Orthogonal Iteration [10.034394572576922]
We propose a new algorithm to estimate the low tensor-train rank structure from the noisy high-order tensor observation.
The merits of the proposed TTOI are illustrated through applications to estimation and dimension reduction of high-order Markov processes.
arXiv Detail & Related papers (2020-10-06T05:18:24Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Geometric All-Way Boolean Tensor Decomposition [14.065968221500246]
We present GETF, which sequentially identifies the rank-1 basis for a tensor from a geometric perspective.
Experiments on both synthetic and real-world data demonstrate that GETF significantly improves reconstruction accuracy and extraction of latent structures, and is an order of magnitude faster than other state-of-the-art methods.
arXiv Detail & Related papers (2020-07-31T03:29:44Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
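The randomized-embedding idea behind the last entry can be illustrated with a plain sketch-and-solve routine: compress a tall least-squares system with a Gaussian projection, then solve the small compressed problem. This is a minimal sketch of the embedding technique only, not the paper's optimal iterative methods; the function name and sketch size are illustrative choices.

```python
import numpy as np

def sketched_lstsq(A, b, sketch_size, seed=None):
    """Sketch-and-solve least squares: draw a Gaussian embedding S of shape
    (sketch_size, m), then solve the compressed problem min_x ||S(Ax - b)||."""
    rng = np.random.default_rng(seed)
    m, _ = A.shape
    # Gaussian embedding, scaled so that E[S^T S] = I.
    S = rng.standard_normal((sketch_size, m)) / np.sqrt(sketch_size)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```

For a consistent (noiseless) system this recovers the exact solution; for inconsistent systems sketch-and-solve only approximates the least-squares minimizer, which is why randomized embeddings are more often used to precondition iterative solvers, as in the methods the paper analyzes.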
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.