First-Order Algorithms for Min-Max Optimization in Geodesic Metric
Spaces
- URL: http://arxiv.org/abs/2206.02041v1
- Date: Sat, 4 Jun 2022 18:53:44 GMT
- Title: First-Order Algorithms for Min-Max Optimization in Geodesic Metric
Spaces
- Authors: Michael I. Jordan, Tianyi Lin, Emmanouil-Vasileios
Vlatakis-Gkaragkounis
- Abstract summary: Many min-max algorithms have been analyzed in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
- Score: 93.35384756718868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: From optimal transport to robust dimensionality reduction, a plethora of
machine learning applications can be cast as min-max optimization
problems over Riemannian manifolds. Though many min-max algorithms have been
analyzed in the Euclidean setting, it has proved elusive to translate these
results to the Riemannian case. Zhang et al. [2022] have recently shown that
geodesically convex-concave Riemannian problems always admit saddle-point
solutions. Inspired by this result, we study whether a performance gap between
Riemannian and optimal Euclidean space convex-concave algorithms is necessary.
We answer this question in the negative: we prove that the Riemannian corrected
extragradient (RCEG) method achieves last-iterate convergence at a linear rate
in the geodesically strongly-convex-concave case, matching the Euclidean
result. Our results also extend to the stochastic or non-smooth case where RCEG
and Riemannian gradient descent ascent (RGDA) achieve near-optimal convergence
rates up to factors depending on the curvature of the manifold.
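To make the corrected extragradient idea concrete, below is a minimal sketch on the unit sphere, where the exponential and logarithm maps have simple closed forms. The bilinear payoff f(x, y) = x^T A y, the step size, and the specific form of the correction term (a log-map pull toward the previous iterate, the Riemannian analogue of rewriting the Euclidean extragradient step from its half-point) are illustrative assumptions for this sketch, not the paper's exact scheme or constants.

```python
import numpy as np

def exp_map(x, v):
    """Sphere exponential map: follow the geodesic from x with initial velocity v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def log_map(x, y):
    """Sphere inverse exponential map: the tangent vector at x pointing toward y."""
    c = np.clip(np.dot(x, y), -1.0, 1.0)
    w = y - c * x                       # component of y tangent at x
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return np.zeros_like(x)
    return np.arccos(c) * w / nw

def rgrad(p, euclid_grad):
    """Riemannian gradient on the sphere: project the Euclidean gradient
    onto the tangent space at p."""
    return euclid_grad - np.dot(p, euclid_grad) * p

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))         # toy bilinear coupling (assumption)
eta = 0.1                               # illustrative step size
x = np.array([1.0, 0.0, 0.0])           # min player, on the unit sphere
y = np.array([0.0, 1.0, 0.0])           # max player, on the unit sphere

for t in range(200):
    # Half-step: ordinary Riemannian gradient descent (x) / ascent (y).
    gx, gy = rgrad(x, A @ y), rgrad(y, A.T @ x)
    xh, yh = exp_map(x, -eta * gx), exp_map(y, eta * gy)
    # Corrected step from the half-point: the log-map term pulls back toward
    # the previous iterate, mirroring z_{t+1} = z_t - eta * F(z_{t+1/2}) in
    # Euclidean extragradient.
    gxh, gyh = rgrad(xh, A @ yh), rgrad(yh, A.T @ xh)
    x = exp_map(xh, log_map(xh, x) - eta * gxh)
    y = exp_map(yh, log_map(yh, y) + eta * gyh)

print("final iterates:", x, y)
```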
Related papers
- Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms [18.425648833592312]
We show that tBMM converges to an $\epsilon$-stationary point within $O(\epsilon^{-2})$ iterations.
Under a mild assumption, the results still hold when the subproblem is solved inexactly in each iteration, provided the total optimality gap is bounded.
arXiv Detail & Related papers (2024-05-05T22:53:14Z) - Streamlining in the Riemannian Realm: Efficient Riemannian Optimization
with Loopless Variance Reduction [4.578425862931332]
This study focuses on the crucial variance reduction mechanism used in both Euclidean and Riemannian settings.
Motivated by Euclidean methods, we introduce loopless Riemannian methods that replace the outer loop with a gradient computation triggered by a coin flip (see the sketch after this list).
Using this loopless construction as a framework, we demonstrate its applicability to various important settings.
arXiv Detail & Related papers (2024-03-11T12:49:37Z) - Convergence and complexity of block majorization-minimization for constrained block-Riemannian optimization [18.425648833592312]
Block majorization-minimization (BMM) is a simple iterative algorithm for constrained nonconvex optimization.
Our analysis makes explicit use of Euclidean constraints.
arXiv Detail & Related papers (2023-12-16T05:40:19Z) - Curvature-Independent Last-Iterate Convergence for Games on Riemannian
Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z) - On a class of geodesically convex optimization problems solved via
Euclidean MM methods [50.428784381385164]
We show how geodesically convex problems can be written as a difference of Euclidean convex functions, a structure shared by several types of problems in statistics and machine learning.
Ultimately, we hope that our work is useful to the broader community.
arXiv Detail & Related papers (2022-06-22T23:57:40Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Stochastic Zeroth-order Riemannian Derivative Estimation and
Optimization [15.78743548731191]
We propose an oracle version of the Gaussian smoothing function to overcome the difficulty posed by the non-linearity of manifolds (see the estimator sketch after this list).
We demonstrate the applicability of our algorithms through numerical results and real-world applications, including black-box stiffness control for robotics and black-box attacks on neural networks.
arXiv Detail & Related papers (2020-03-25T06:58:19Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best known complexity for solving least-squares problems with no condition-number dependence (a simple sketch-and-solve illustration appears after this list).
arXiv Detail & Related papers (2020-02-21T17:45:32Z) - Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max
Optimization [61.66927778694731]
Epoch gradient descent method (a.k.a. Epoch-GD) proposed by Hazan and Kale (2011) was deemed a breakthrough for strongly convex minimization.
Our result is the first to show that Epoch-GDA can achieve the optimal rate of $O(1/T)$ for the duality gap of strongly-convex-strongly-concave (SCSC) min-max problems.
arXiv Detail & Related papers (2020-02-13T02:16:57Z)
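For the loopless variance-reduction entry above (Streamlining in the Riemannian Realm), the coin-flip mechanism is easiest to see in the Euclidean L-SVRG form sketched below; the Riemannian variants additionally use retractions and vector transports, which this sketch omits. The function names, step size, and refresh probability are illustrative assumptions.

```python
import numpy as np

def lsvrg(grad_i, n, x0, eta=0.1, p=None, iters=1000, seed=0):
    """Loopless SVRG sketch: the periodic outer-loop full-gradient pass is
    replaced by a coin flip that refreshes the snapshot with probability p."""
    rng = np.random.default_rng(seed)
    p = p if p is not None else 1.0 / n   # common choice: refresh ~once per epoch
    x, w = x0.copy(), x0.copy()           # current iterate and snapshot point
    full = np.mean([grad_i(w, i) for i in range(n)], axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(x, i) - grad_i(w, i) + full   # variance-reduced gradient
        x = x - eta * g
        if rng.random() < p:              # coin flip: refresh snapshot + full grad
            w = x.copy()
            full = np.mean([grad_i(w, i) for i in range(n)], axis=0)
    return x

# Usage on a toy least-squares objective f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
gi = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = lsvrg(gi, n=50, x0=np.zeros(5), eta=0.05, iters=5000)
print(np.linalg.norm(A.T @ (A @ x_hat - b)) / 50)   # full-gradient norm at x_hat
```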
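For the zeroth-order Riemannian entry above, the Gaussian-smoothing gradient estimator is sketched below in its basic Euclidean form; the paper's Riemannian version instead samples in tangent spaces and uses retractions. The smoothing radius mu and sample count are illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_samples=20, seed=0):
    """Zeroth-order gradient estimate of f at x via Gaussian smoothing:
    average of (f(x + mu*u) - f(x)) / mu * u over u ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape[0])
        g += (f(x + mu * u) - fx) / mu * u   # finite-difference along direction u
    return g / num_samples

# Usage: estimate the gradient of a simple quadratic and compare to the truth.
f = lambda x: 0.5 * np.dot(x, x)             # true gradient is x itself
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(f, x, num_samples=500), "vs true", x)
```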
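For the randomized least-squares entry above: the paper analyzes iterative first-order methods built on random embeddings, which is more involved than what fits here, so the snippet below shows only the simpler sketch-and-solve use of a Gaussian projection, to illustrate what such an embedding does. Matrix sizes and the embedding dimension m are illustrative assumptions.

```python
import numpy as np

def sketch_and_solve(A, b, m, seed=0):
    """Approximate least-squares via a Gaussian sketch: solve the small
    problem min_x ||S(Ax - b)|| with a random m x n embedding S instead of
    the full n-row problem."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian projection
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Usage: compare the sketched solution to the exact least-squares solution.
rng = np.random.default_rng(2)
A, b = rng.standard_normal((2000, 20)), rng.standard_normal(2000)
x_sketch = sketch_and_solve(A, b, m=200)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print("solution gap:", np.linalg.norm(x_sketch - x_exact))
```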