Extragradient-Type Methods with $\mathcal{O} (1/k)$ Last-Iterate
Convergence Rates for Co-Hypomonotone Inclusions
- URL: http://arxiv.org/abs/2302.04099v2
- Date: Sat, 14 Oct 2023 14:57:10 GMT
- Title: Extragradient-Type Methods with $\mathcal{O} (1/k)$ Last-Iterate
Convergence Rates for Co-Hypomonotone Inclusions
- Authors: Quoc Tran-Dinh
- Abstract summary: We develop two "Nesterov's accelerated" variants of the well-known extragradient method to approximate a solution of a co-hypomonotone inclusion.
Our results can be viewed as alternatives to a recent class of Halpern-type methods for root-finding problems.
- Score: 8.0153031008486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop two "Nesterov's accelerated" variants of the well-known
extragradient method to approximate a solution of a co-hypomonotone inclusion
constituted by the sum of two operators, where one is Lipschitz continuous and
the other is possibly multivalued. The first scheme can be viewed as an
accelerated variant of Tseng's forward-backward-forward splitting (FBFS)
method, while the second one is a Nesterov-accelerated variant of the "past"
FBFS scheme, which requires only one evaluation of the Lipschitz operator and
one resolvent of the multivalued mapping. Under appropriate conditions on the
parameters, we theoretically prove that both algorithms achieve
$\mathcal{O}(1/k)$ last-iterate convergence rates on the residual norm, where
$k$ is the iteration counter. Our results can be viewed as alternatives to a
recent class of Halpern-type methods for root-finding problems. For comparison,
we also provide a new convergence analysis of the two recent extra-anchored
gradient-type methods for solving co-hypomonotone inclusions.
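The inclusion in question reads $0 \in Fx + Tx$, with $F$ Lipschitz continuous and $T$ possibly multivalued. As a point of reference for the two accelerated schemes described above, the following is a minimal Python sketch of one plain (non-accelerated) Tseng FBFS step; the toy operator $F$, the $\ell_1$ resolvent, and the step size are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of one plain Tseng forward-backward-forward (FBFS) step for
# 0 in F(x) + T(x); this is NOT the paper's accelerated scheme. The operator F,
# the resolvent of T, and the step size lam are illustrative assumptions.
import numpy as np

def soft_threshold(z, tau):
    """Resolvent of tau * (subdifferential of the l1-norm), i.e. the prox map."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def fbfs_step(x, F, resolvent, lam):
    """One FBFS step: forward step, backward (resolvent) step, forward correction."""
    Fx = F(x)
    y = resolvent(x - lam * Fx, lam)   # forward-backward step
    return y - lam * (F(y) - Fx)       # forward correction using the fresh F(y)

# Toy instance: F is a Lipschitz monotone linear map (skew part plus a small
# symmetric part), T is the subdifferential of the l1-norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
F = lambda x: (A - A.T) @ x + 0.1 * x
L = np.linalg.norm(A - A.T, 2) + 0.1   # Lipschitz constant estimate
lam = 0.5 / L                          # step size strictly below 1/L

x = rng.standard_normal(5)
for _ in range(200):
    x = fbfs_step(x, F, soft_threshold, lam)

# Fixed-point residual proxy: how far one extra FBFS step still moves the iterate.
print(np.linalg.norm(x - fbfs_step(x, F, soft_threshold, lam)))
```

The "past" FBFS variant mentioned in the abstract reuses the previous forward evaluation instead of computing $F$ twice per iteration, which is why it needs only one evaluation of the Lipschitz operator and one resolvent per step.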
Related papers
- Accelerated Variance-Reduced Forward-Reflected Methods for Root-Finding Problems [8.0153031008486]
We propose a novel class of Nesterov's accelerated forward-reflected-based methods with variance reduction to solve root-finding problems.
Our algorithm is single-loop and leverages a new family of unbiased variance-reduced estimators specifically designed for root-finding problems.
arXiv Detail & Related papers (2024-06-04T15:23:29Z) - A Unified Analysis for the Subgradient Methods Minimizing Composite
Nonconvex, Nonsmooth and Non-Lipschitz Functions [8.960341489080609]
We present a novel convergence analysis in the context of non-Lipschitz and nonsmooth optimization problems.
Under any of the subgradient upper bounding conditions introduced in the paper, we show that an $O(1/\sqrt{T})$ rate holds in terms of the square of the gradient of the envelope function, which further improves to $O(1/T)$ if, in addition, the uniform KL condition with exponent $1/2$ holds.
arXiv Detail & Related papers (2023-08-30T23:34:11Z) - Min-Max Optimization Made Simple: Approximating the Proximal Point
Method via Contraction Maps [77.8999425439444]
We present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems.
Our work is based on the fact that the update rule of the Proximal Point method can be approximated to any desired accuracy via contraction maps.
arXiv Detail & Related papers (2023-01-10T12:18:47Z) - A Primal-Dual Approach to Solving Variational Inequalities with General Constraints [54.62996442406718]
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities.
We prove the convergence of this method and show that the gap function of the last iterate decreases at a rate of $O(1/\sqrt{K})$ when the operator is $L$-Lipschitz and monotone.
arXiv Detail & Related papers (2022-10-27T17:59:09Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and that the iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Gradient-Free Methods for Deterministic and Stochastic Nonsmooth
Nonconvex Optimization [94.19177623349947]
Nonsmooth nonconvex optimization problems emerge in machine learning and business decision making.
Two core challenges impede the development of efficient methods with finite-time convergence guarantees.
Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results.
arXiv Detail & Related papers (2022-09-12T06:53:24Z) - Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization [116.89941263390769]
We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}}\, F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$.
We present an accelerated gradient-extragradient (AG-EG) descent-ascent algorithm that combines extragradient steps with acceleration.
arXiv Detail & Related papers (2022-06-17T06:10:20Z) - A Stochastic Proximal Method for Nonsmooth Regularized Finite Sum
Optimization [7.014966911550542]
We consider the problem of training a deep neural network with nonsmooth regularization to retrieve a sparse sub-structure.
We derive a new solver, called SR2, whose convergence and worst-case complexity are established without knowledge or approximation of the gradient's Lipschitz constant.
Experiments on network instances trained on CIFAR-10 and CIFAR-100 show that SR2 consistently achieves higher sparsity and accuracy than related methods such as ProxGEN and ProxSGD.
arXiv Detail & Related papers (2022-06-14T00:28:44Z) - Halpern-Type Accelerated and Splitting Algorithms For Monotone
Inclusions [14.445131747570962]
We develop a new type of accelerated algorithms to solve some classes of maximally monotone equations.
Our methods rely on a so-called Halpern-type fixed-point iteration, introduced in [32] and recently exploited by a number of researchers; a minimal sketch of the basic Halpern iteration is given after this list.
arXiv Detail & Related papers (2021-10-15T15:26:32Z) - Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z) - A New Randomized Primal-Dual Algorithm for Convex Optimization with
Optimal Last Iterate Rates [16.54912614895861]
We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems.
We prove that our algorithm achieves optimal convergence rates in two cases: general convexity and strong convexity.
Our results show that the proposed method has encouraging performance on different experiments.
arXiv Detail & Related papers (2020-03-03T03:59:26Z)
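The main abstract positions its results as alternatives to Halpern-type methods, and the Halpern-type splitting entry above refers to the classical Halpern fixed-point iteration $x_{k+1} = \beta_k x_0 + (1 - \beta_k) S(x_k)$. As background, here is a minimal Python sketch of that basic iteration for a nonexpansive map $S$; the toy map, the anchor $x_0$, and the schedule $\beta_k = 1/(k+2)$ are illustrative assumptions, not the cited papers' algorithms.

```python
# Minimal sketch of the classical (anchored) Halpern fixed-point iteration
#   x_{k+1} = beta_k * x0 + (1 - beta_k) * S(x_k)
# for a nonexpansive map S. The map S, the anchor x0, and the schedule
# beta_k = 1/(k+2) are illustrative assumptions.
import numpy as np

def halpern(S, x0, iters=500):
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)                  # anchoring weight, decays to 0
        x = beta * x0 + (1.0 - beta) * S(x)   # pull each step back toward x0
    return x

# Toy nonexpansive map: an averaged planar rotation (its only fixed point is 0).
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = lambda x: 0.5 * x + 0.5 * (R @ x)

x0 = np.array([1.0, 1.0])
x_star = halpern(S, x0)
print(np.linalg.norm(x_star - S(x_star)))     # fixed-point residual, shrinks as O(1/k)
```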