Tensor Completion Made Practical
- URL: http://arxiv.org/abs/2006.03134v2
- Date: Sat, 25 Dec 2021 21:05:14 GMT
- Title: Tensor Completion Made Practical
- Authors: Allen Liu and Ankur Moitra
- Abstract summary: We introduce a new variant of alternating minimization, inspired by understanding how the progress measures that guide convergence in the matrix setting need to be adapted to the tensor setting.
We show that our algorithm converges linearly to the true tensors even when the factors are highly correlated and can be implemented in nearly linear time.
In contrast, and somewhat surprisingly, we show that the standard version of alternating minimization, without our new twist, can converge at a drastically slower rate in practice.
- Score: 19.229475414802213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor completion is a natural higher-order generalization of matrix
completion where the goal is to recover a low-rank tensor from sparse
observations of its entries. Existing algorithms are either heuristic without
provable guarantees, based on solving large semidefinite programs which are
impractical to run, or make strong assumptions such as requiring the factors to
be nearly orthogonal. In this paper we introduce a new variant of alternating
minimization, which in turn is inspired by understanding how the progress
measures that guide convergence of alternating minimization in the matrix
setting need to be adapted to the tensor setting. We show strong provable
guarantees, including showing that our algorithm converges linearly to the true
tensors even when the factors are highly correlated and can be implemented in
nearly linear time. Moreover our algorithm is also highly practical and we show
that we can complete third-order tensors with a thousand dimensions from
observing a tiny fraction of their entries. In contrast, and somewhat
surprisingly, we show that the standard version of alternating minimization,
without our new twist, can converge at a drastically slower rate in practice.
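For orientation, below is a minimal sketch of the standard alternating minimization (ALS) baseline for third-order CP tensor completion, i.e. the plain variant that the abstract says can converge drastically more slowly; the authors' modified algorithm is not reproduced here, and the rank, iteration count, and random initialization are illustrative assumptions.

```python
# Minimal sketch of *standard* alternating minimization (ALS) for third-order
# CP tensor completion: fit T ~= sum_k a_k x b_k x c_k from observed entries.
# This is the plain baseline the abstract contrasts against, not the paper's
# modified algorithm; rank, iterations, and initialization are assumptions.
import numpy as np

def als_tensor_completion(T, mask, r, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    A = rng.standard_normal((n1, r))
    B = rng.standard_normal((n2, r))
    C = rng.standard_normal((n3, r))
    obs = np.argwhere(mask)  # (m, 3) array of observed index triples
    for _ in range(iters):
        # Cycle through the factors; each row update is a small least-squares
        # problem restricted to that slice's observed entries.
        for F, G, H, ax, gax, hax in ((A, B, C, 0, 1, 2),
                                      (B, A, C, 1, 0, 2),
                                      (C, A, B, 2, 0, 1)):
            for i in range(F.shape[0]):
                idx = obs[obs[:, ax] == i]
                if idx.size == 0:
                    continue
                X = G[idx[:, gax]] * H[idx[:, hax]]      # (m_i, r) design
                y = T[idx[:, 0], idx[:, 1], idx[:, 2]]   # observed values
                F[i], *_ = np.linalg.lstsq(X, y, rcond=None)
    return A, B, C

# Usage: recover a random rank-3, 30x30x30 tensor from 20% of its entries.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((30, 3)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
mask = rng.random(T.shape) < 0.2
A, B, C = als_tensor_completion(T, mask, r=3)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T)
print(err / np.linalg.norm(T))  # relative reconstruction error
```

Each factor-row update is an ordinary least-squares solve over only that row's observed entries, which is what keeps the per-iteration cost proportional to the number of observations.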
Related papers
- Low-Rank Tensor Completion via Novel Sparsity-Inducing Regularizers [30.920908325825668]
To alleviate the bias of the l1-norm in the low-rank tensor completion problem, nonconvex surrogates/regularizers have been suggested.
These regularizers are applied to nuclear-norm minimization, and efficient algorithms based on the alternating direction method of multipliers are proposed.
arXiv Detail & Related papers (2023-10-10T01:00:13Z) - The Inductive Bias of Flatness Regularization for Deep Matrix
Factorization [58.851514333119255]
This work takes the first step toward understanding the inductive bias of minimum-trace-of-the-Hessian solutions in deep linear networks.
We show that for all depths greater than one, under the standard Restricted Isometry Property (RIP) on the measurements, minimizing the trace of the Hessian is approximately equivalent to minimizing the Schatten 1-norm of the corresponding end-to-end matrix parameters.
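In symbols (the notation below is an assumed paraphrase, not text from the paper): for a depth-L linear network with end-to-end matrix W_L · · · W_1 and measurements satisfying RIP,

```latex
\min_{W_1,\dots,W_L} \operatorname{tr}\big(\nabla^2 \mathcal{L}(W_1,\dots,W_L)\big)
\;\approx\;
\min_{W_1,\dots,W_L} \big\| W_L W_{L-1} \cdots W_1 \big\|_{S_1}
```

where \|\cdot\|_{S_1} denotes the Schatten 1-norm (the nuclear norm, i.e. the sum of singular values).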
arXiv Detail & Related papers (2023-06-22T23:14:57Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z) - Softmax-free Linear Transformers [90.83157268265654]
Vision transformers (ViTs) have pushed the state-of-the-art for visual perception tasks.
Existing attempts to approximate the softmax attention in transformers with linear complexity are either theoretically flawed or empirically ineffective for visual recognition.
We propose a family of Softmax-Free Transformers (SOFT).
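To illustrate why removing the softmax matters, here is a generic softmax-free (linear) attention sketch; the feature map phi below (ELU + 1) is an illustrative assumption, not SOFT's actual Gaussian-kernel construction.

```python
# Generic softmax-free attention: with a decomposable feature map phi,
# attention can be computed as phi(Q) (phi(K)^T V), which is linear rather
# than quadratic in sequence length n. phi here is an assumed example map.
import numpy as np

def phi(x):
    # Positive feature map (ELU + 1); SOFT itself uses a different construction.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Q, K: (n, d); V: (n, d_v). Runs in O(n d d_v), not O(n^2 d)."""
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                 # (d, d_v): keys and values summarized once
    Z = Qf @ Kf.sum(axis=0)       # (n,): per-query normalizer
    return (Qf @ KV) / Z[:, None]

# Usage on random data:
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((128, 16)) for _ in range(3))
out = linear_attention(Q, K, V)   # shape (128, 16)
```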
arXiv Detail & Related papers (2022-07-05T03:08:27Z) - Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm
Regularization [44.54772242784423]
We develop an efficient nonconvex regularization algorithm for low-rank tensor learning.
The proposed algorithm avoids expensive tensor folding/unfolding operations.
Experiments show that the proposed algorithm is more efficient and uses less space than the existing state-of-the-art.
arXiv Detail & Related papers (2022-05-06T07:47:10Z) - Robust M-estimation-based Tensor Ring Completion: a Half-quadratic
Minimization Approach [14.048989759890475]
We develop a robust approach to tensor ring completion that uses an M-estimator as its error statistic.
We present two half-quadratic (HQ) minimization-based algorithms, one based on truncated singular value decomposition and the other on matrix factorization.
arXiv Detail & Related papers (2021-06-19T04:37:50Z) - Robust Low-tubal-rank Tensor Completion based on Tensor Factorization
and Maximum Correntropy Criterion [12.02023514105999]
We propose a new objective function for low-tubal-rank tensor completion, which uses correntropy as the error measure to mitigate the effect of the outliers.
Numerical results using both synthetic and real data demonstrate the robust and superior performance of the proposed algorithms.
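As a quick illustration of why correntropy helps with outliers, the sketch below compares a correntropy-induced (Welsch-type) loss against the squared error; the Gaussian kernel width sigma is an assumed tuning parameter, and this is a generic form rather than the paper's exact objective.

```python
# Correntropy-induced (Welsch) loss: unlike the squared error, it saturates
# for large residuals, so gross outliers contribute only boundedly.
import numpy as np

def correntropy_loss(residuals, sigma=1.0):
    # Assumed Gaussian-kernel form; sigma controls where saturation sets in.
    return sigma**2 * (1.0 - np.exp(-residuals**2 / (2.0 * sigma**2)))

e = np.array([0.1, 1.0, 10.0, 100.0])
print(correntropy_loss(e))  # saturates near sigma^2 as |e| grows
print(0.5 * e**2)           # squared error grows without bound
```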
arXiv Detail & Related papers (2020-10-22T13:56:55Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix
Completion [60.52730146391456]
We propose a new nonconvex low-rank regularizer, the "nuclear norm minus Frobenius norm" regularizer, which is scalable, adaptive and sound.
It bypasses the computation of singular values, allowing fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
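To make the regularizer concrete, here is a naive evaluation of ||X||_* - ||X||_F via an explicit SVD; this is purely for intuition, since the summary's point is that the proposed method bypasses singular value computation, and the matrix sizes below are illustrative assumptions.

```python
# Naive check of the "nuclear norm minus Frobenius norm" regularizer:
# it is exactly 0 for rank-1 matrices and grows as the matrix departs
# from low rank, making it a nonconvex surrogate for rank.
import numpy as np

def nuc_minus_fro(X):
    s = np.linalg.svd(X, compute_uv=False)
    return s.sum() - np.sqrt((s ** 2).sum())

rng = np.random.default_rng(0)
rank1 = np.outer(rng.standard_normal(50), rng.standard_normal(50))
full = rng.standard_normal((50, 50))
print(nuc_minus_fro(rank1))  # ~0 for rank-1 (up to rounding)
print(nuc_minus_fro(full))   # large for a matrix far from low rank
```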
arXiv Detail & Related papers (2020-08-14T18:47:58Z) - Enhanced nonconvex low-rank approximation of tensor multi-modes for
tensor completion [1.3406858660972554]
We propose a novel low-rank approximation of tensor multi-modes (LRATM) model.
A block successive upper-bound minimization method-based algorithm is designed to efficiently solve the proposed model.
Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors.
arXiv Detail & Related papers (2020-05-28T08:53:54Z)