New Riemannian preconditioned algorithms for tensor completion via
polyadic decomposition
- URL: http://arxiv.org/abs/2101.11108v1
- Date: Tue, 26 Jan 2021 22:11:06 GMT
- Title: New Riemannian preconditioned algorithms for tensor completion via
polyadic decomposition
- Authors: Shuyu Dong, Bin Gao, Yu Guan, François Glineur
- Abstract summary: These algorithms exploit a non-Euclidean metric on the product space of the factor matrices of the low-rank tensor in the polyadic decomposition form.
We prove that the proposed gradient descent algorithm globally converges to a stationary point of the tensor completion problem.
Numerical results on synthetic and real-world data suggest that the proposed algorithms are more efficient in memory and time compared to state-of-the-art algorithms.
- Score: 10.620193291237262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose new Riemannian preconditioned algorithms for low-rank tensor
completion via the polyadic decomposition of a tensor. These algorithms exploit
a non-Euclidean metric on the product space of the factor matrices of the
low-rank tensor in the polyadic decomposition form. This new metric is designed
using an approximation of the diagonal blocks of the Hessian of the tensor
completion cost function, and thus has a preconditioning effect on these
algorithms. We prove that the proposed Riemannian gradient descent algorithm
globally converges to a stationary point of the tensor completion problem, with
convergence rate estimates using the Łojasiewicz property. Numerical
results on synthetic and real-world data suggest that the proposed algorithms
are more efficient in memory and time compared to state-of-the-art algorithms.
Moreover, the proposed algorithms show greater tolerance to overestimated rank
parameters in terms of tensor recovery performance, thus enabling a flexible
choice of the rank parameter.
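
To make the preconditioning concrete, here is a minimal NumPy sketch of this type of algorithm for a third-order tensor: gradient descent on the factor matrices, where the i-th gradient block is rescaled by the inverse of the Hadamard product of the Gram matrices of the other two factors, an approximation of the i-th diagonal block of the Hessian of the completion cost. The function name, the fixed step size `lr`, and the damping term `delta` are illustrative assumptions, not the paper's exact algorithm or parameter choices (a fixed step stands in for whatever step-size rule the paper uses).

```python
import numpy as np

def cp_reconstruct(U1, U2, U3):
    """Assemble the tensor [[U1, U2, U3]] from its rank-R CP factors."""
    return np.einsum('ir,jr,kr->ijk', U1, U2, U3)

def precond_cp_completion(T, mask, R, steps=500, lr=0.3, delta=1e-8, seed=0):
    """Sketch of preconditioned gradient descent for 3-way CP tensor
    completion: each gradient block is rescaled by the (damped) inverse of
    the Hadamard product of the Gram matrices of the other two factors."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    U1, U2, U3 = (rng.standard_normal((n, R)) for n in (I, J, K))
    damp = delta * np.eye(R)  # small ridge so the metric blocks stay invertible
    for _ in range(steps):
        # Residual restricted to the observed entries Omega (encoded by `mask`).
        E = mask * (cp_reconstruct(U1, U2, U3) - T)
        # Euclidean partial gradients of f = 0.5 * ||P_Omega([[U1,U2,U3]] - X)||^2.
        G1 = np.einsum('ijk,jr,kr->ir', E, U2, U3)
        G2 = np.einsum('ijk,ir,kr->jr', E, U1, U3)
        G3 = np.einsum('ijk,ir,jr->kr', E, U1, U2)
        # Metric blocks: Hadamard products of Gram matrices (only R x R).
        H1 = (U2.T @ U2) * (U3.T @ U3) + damp
        H2 = (U1.T @ U1) * (U3.T @ U3) + damp
        H3 = (U1.T @ U1) * (U2.T @ U2) + damp
        # Preconditioned (Riemannian) gradient step: scale each block by H_i^{-1}.
        U1 = U1 - lr * np.linalg.solve(H1, G1.T).T
        U2 = U2 - lr * np.linalg.solve(H2, G2.T).T
        U3 = U3 - lr * np.linalg.solve(H3, G3.T).T
    return U1, U2, U3
```

Each metric block is only R x R, so the preconditioning adds little cost per iteration relative to forming the gradients, which is consistent with the memory and time efficiency reported above.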
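
For context on the convergence-rate claim, the Łojasiewicz property and the rate dichotomy it standardly yields for gradient-type methods are sketched below; the exponent and rates are the generic statements from the Łojasiewicz literature, not quantities computed in this paper.

```latex
% Lojasiewicz gradient inequality near a stationary point x^*:
% there exist C > 0 and an exponent \theta \in [1/2, 1) such that, locally,
\[
  |f(x) - f(x^*)|^{\theta} \;\le\; C \,\|\nabla f(x)\| .
\]
% Standard consequences for gradient-type iterates x_k converging to x^*:
%   \theta = 1/2         =>  linear convergence,
%   \theta \in (1/2, 1)  =>  \|x_k - x^*\| = O\big(k^{-(1-\theta)/(2\theta-1)}\big).
```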
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem in which the task is to recover an unknown signal in the presence of outliers.
Our proposed oracle avoids the need for computationally expensive spectral methods, using a simple gradient step while handling outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Online estimation of the inverse of the Hessian for stochastic optimization with application to universal stochastic Newton algorithms [4.389938747401259]
This paper addresses second-order optimization for estimating the minimizer of a convex function written as an expectation.
A direct recursive estimation technique for the inverse Hessian matrix using a Robbins-Monro procedure is introduced.
Above all, this allows one to develop universal stochastic Newton methods and to investigate the efficiency of the proposed approach.
arXiv Detail & Related papers (2024-01-15T13:58:30Z)
- ADMM-MM Algorithm for General Tensor Decomposition [7.0326155922512275]
The proposed algorithm supports three basic loss functions ($\ell_2$-loss, $\ell_1$-loss, and KL divergence) and various low-rank tensor decomposition models (CP, Tucker, TT, and TR decompositions).
We show that a wide range of applications can be solved by the proposed algorithm, and that it can be easily extended to any established tensor decomposition model in a plug-and-play manner.
arXiv Detail & Related papers (2023-12-19T00:17:34Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Zeroth-order Riemannian Averaging Stochastic Approximation Algorithms [19.99781875916751]
We show that Zo-RASA achieves optimal sample complexities for generating $\epsilon$-approximate first-order stationary solutions.
We improve the algorithm's practicality by using retractions and vector transports instead of exponential mappings and parallel transports.
arXiv Detail & Related papers (2023-09-25T20:13:36Z)
- Robust empirical risk minimization via Newton's method [9.797319790710711]
A new variant of Newton's method for empirical risk minimization is studied.
The gradient and Hessian of the objective function are replaced by robust estimators.
An algorithm for obtaining robust Newton directions based on the conjugate gradient method is also proposed.
arXiv Detail & Related papers (2023-01-30T18:54:54Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
arXiv Detail & Related papers (2022-11-21T18:31:43Z)
- First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces [93.35384756718868]
Min-max algorithms have been analyzed in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
arXiv Detail & Related papers (2022-06-04T18:53:44Z)
- Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization [73.38702974136102]
Various types of parameter restart schemes have been proposed for accelerated algorithms to facilitate their practical convergence.
In this paper, we propose a proximal gradient algorithm with momentum and flexible parameter restart for solving nonconvex, nonsmooth problems.
arXiv Detail & Related papers (2020-02-26T16:06:27Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)