Low-Rank Mirror-Prox for Nonsmooth and Low-Rank Matrix Optimization Problems
- URL: http://arxiv.org/abs/2206.11523v1
- Date: Thu, 23 Jun 2022 08:10:54 GMT
- Title: Low-Rank Mirror-Prox for Nonsmooth and Low-Rank Matrix Optimization Problems
- Authors: Dan Garber, Atara Kaplan
- Abstract summary: Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning.
In this paper we consider standard convex relaxations for such problems.
We prove that under a strict complementarity condition and under the relatively mild assumption that the nonsmooth objective can be written as a maximum of smooth functions, approximated variants of two popular mirror-prox methods converge to an optimal solution with rate $O(1/t)$ while requiring only two low-rank SVDs per iteration.
- Score: 19.930021245647705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank and nonsmooth matrix optimization problems capture many fundamental
tasks in statistics and machine learning. While significant progress has been
made in recent years in developing efficient methods for \textit{smooth}
low-rank optimization problems that avoid maintaining high-rank matrices and
computing expensive high-rank SVDs, advances for nonsmooth problems have been
slow paced. In this paper we consider standard convex relaxations for such
problems. Mainly, we prove that under a \textit{strict complementarity}
condition and under the relatively mild assumption that the nonsmooth objective
can be written as a maximum of smooth functions, approximated variants of two
popular \textit{mirror-prox} methods: the Euclidean \textit{extragradient
method} and mirror-prox with \textit{matrix exponentiated gradient updates},
when initialized with a "warm-start", converge to an optimal solution with rate
$O(1/t)$, while requiring only two \textit{low-rank} SVDs per iteration.
Moreover, for the extragradient method we also consider relaxed versions of
strict complementarity which yield a trade-off between the rank of the SVDs
required and the radius of the ball in which we need to initialize the method.
We support our theoretical results with empirical experiments on several
nonsmooth low-rank matrix recovery tasks, demonstrating both the plausibility
of the strict complementarity assumption, and the efficient convergence of our
proposed low-rank mirror-prox variants.
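To make the "two low-rank SVDs per iteration" claim concrete, below is a minimal Python sketch, under simplifying assumptions rather than the authors' implementation, of one Euclidean extragradient (mirror-prox) step over a nuclear-norm ball, a standard convex relaxation for low-rank problems. It assumes the warm-start / strict-complementarity regime in which the exact Euclidean projection has rank at most r, so each of the two projections per step needs only a rank-r truncated SVD; the function names, the rank parameter r, and the use of a single smooth gradient oracle (instead of the saddle-point formulation the paper uses for max-of-smooth nonsmooth objectives) are illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import svds  # truncated (rank-r) SVD; requires 1 <= r < min(X.shape)


def project_nuclear_ball_lowrank(X, tau, r):
    """Hypothetical rank-r Euclidean projection onto {Z : ||Z||_* <= tau}.

    Exact only when the true projection has rank <= r, which is the regime
    the strict-complementarity / warm-start analysis concerns.
    """
    U, s, Vt = svds(X, k=r)            # top-r singular triplets only
    if s.sum() <= tau:                 # already inside the ball (restricted to rank r)
        return U @ np.diag(s) @ Vt
    # Project the r singular values onto the simplex {s' >= 0, sum(s') = tau}.
    mu = np.sort(s)[::-1]
    cssv = np.cumsum(mu) - tau
    rho = np.nonzero(mu > cssv / np.arange(1, r + 1))[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return U @ np.diag(np.maximum(s - theta, 0.0)) @ Vt


def extragradient_step(X, grad, eta, tau, r):
    """One Euclidean extragradient step: two gradient calls, two rank-r SVDs."""
    Y = project_nuclear_ball_lowrank(X - eta * grad(X), tau, r)       # extrapolation point
    X_next = project_nuclear_ball_lowrank(X - eta * grad(Y), tau, r)  # update using grad at Y
    return X_next
```

In the relaxed strict-complementarity variants described in the abstract, the rank r used in the truncated SVDs trades off against the radius of the ball in which the method must be initialized.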
Related papers
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level optimizations (BLO).
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization problem, where the inner loss function becomes a smooth distribution, and the outer loss becomes an expected loss over the inner distribution.
arXiv Detail & Related papers (2024-10-14T12:10:06Z) - Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity [50.25258834153574]
We focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with Smoothed Gradient Clipping and for Gradient Descent with Polyak Stepsizes.
arXiv Detail & Related papers (2024-09-23T13:11:37Z) - Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function at the points it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - Low-Rank Extragradient Methods for Scalable Semidefinite Optimization [0.0]
We focus on high-dimensional and plausible settings in which the problem admits a low-rank solution.
We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method converges to a solution of the constrained optimization problem.
arXiv Detail & Related papers (2024-02-14T10:48:00Z) - Optimal Low-Rank Matrix Completion: Semidefinite Relaxations and
Eigenvector Disjunctions [6.537257913467247]
Low-rank matrix completion consists of finding a matrix of minimal complexity that recovers a given set of observations as accurately as possible.
The new convex relaxations decrease the optimality gap by orders of magnitude compared to existing methods.
arXiv Detail & Related papers (2023-05-20T22:04:34Z) - Low-Rank Extragradient Method for Nonsmooth and Low-Rank Matrix
Optimization Problems [19.930021245647705]
Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning.
In this paper we consider standard convex relaxations for such problems.
We show that the extragradient method converges to an optimal solution with rate $O(1/t)$ while requiring only two low-rank SVDs per iteration.
arXiv Detail & Related papers (2022-02-08T17:47:40Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with a dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm
for Bilevel Optimization [112.59170319105971]
We propose a new algorithm -- Momentum-assisted Single-Timescale Stochastic Approximation (MSTSA) -- for tackling bilevel optimization problems.
MSTSA allows us to control the error in its iterations caused by inaccurate solutions to the lower-level subproblem.
arXiv Detail & Related papers (2021-02-15T07:10:33Z) - On the Efficient Implementation of the Matrix Exponentiated Gradient
Algorithm for Low-Rank Matrix Optimization [26.858608065417663]
Convex optimization over the spectrahedron has important applications in machine learning, signal processing and statistics.
We propose efficient implementations of the Matrix Exponentiated Gradient (MEG) algorithm, tailored for optimization with low-rank matrices, which use only a single low-rank SVD on each iteration (a basic MEG update is sketched after this list).
We also provide efficiently-computable certificates for the correct convergence of our methods.
arXiv Detail & Related papers (2020-12-18T19:14:51Z) - Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank
Constraints [3.179831861897336]
We provide a framework for solving low-rank optimization problems to certifiable optimality.
Our framework also provides near-optimal solutions through rounding and local search techniques.
arXiv Detail & Related papers (2020-09-22T08:59:06Z) - Support recovery and sup-norm convergence rates for sparse pivotal
estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multi-task square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
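As referenced from the Matrix Exponentiated Gradient entry above, here is a minimal illustrative sketch, not the low-rank implementation from the papers above, of the basic MEG update over the spectrahedron {X symmetric PSD, trace(X) = 1}, which is the building block of the second mirror-prox variant in the abstract. The dense eigendecompositions below are exactly the cost that the low-rank variants avoid by maintaining truncated eigendecompositions; the step size eta and the strict positive-definiteness assumption on the iterate are illustrative.

```python
import numpy as np


def meg_update(X, G, eta):
    """One matrix exponentiated gradient update: X_next ∝ exp(log(X) - eta * G).

    Assumes X is symmetric and strictly positive definite so log(X) is defined.
    """
    # Matrix logarithm of X via a dense eigendecomposition.
    w, V = np.linalg.eigh(X)
    logX = (V * np.log(w)) @ V.T
    # Gradient step in the log (mirror) domain.
    M = logX - eta * G
    M = (M + M.T) / 2.0                      # symmetrize against round-off
    # Matrix exponential via eigendecomposition, shifted by the top
    # eigenvalue for numerical stability, then trace-normalized to 1.
    m, Q = np.linalg.eigh(M)
    E = (Q * np.exp(m - m.max())) @ Q.T
    return E / np.trace(E)
```

A typical usage sketch starts from the maximally mixed matrix X = np.eye(d) / d, which lies in the spectrahedron and is strictly positive definite, and repeatedly calls X = meg_update(X, grad(X), eta).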