Primal-dual extrapolation methods for monotone inclusions under local
Lipschitz continuity with applications to variational inequality, conic
constrained saddle point, and convex conic optimization problems
- URL: http://arxiv.org/abs/2206.00973v1
- Date: Thu, 2 Jun 2022 10:31:45 GMT
- Title: Primal-dual extrapolation methods for monotone inclusions under local
Lipschitz continuity with applications to variational inequality, conic
constrained saddle point, and convex conic optimization problems
- Authors: Zhaosong Lu and Sanyou Mei
- Abstract summary: We consider a class of structured monotone inclusion (MI) problems that consist of finding a zero in the sum of two monotone operators.
We first propose a primal-dual extrapolation (PDE) method for solving a structured strongly MI problem by modifying the classical forward-backward splitting method.
We then propose another PDE method for solving a structured non-strongly MI problem by applying the above PDE method to approximately solve a sequence of structured strongly MI problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we consider a class of structured monotone inclusion (MI)
problems that consist of finding a zero in the sum of two monotone operators,
in which one is maximal monotone while the other is locally Lipschitz continuous.
In particular, we first propose a primal-dual extrapolation (PDE) method for
solving a structured strongly MI problem that modifies the classical
forward-backward splitting method with a point and operator extrapolation
technique, in which the parameters are adaptively updated by a backtracking
line search scheme. The proposed PDE method is almost parameter-free, equipped
with a verifiable termination criterion, and enjoys an operation complexity of
${\cal O}(\log \epsilon^{-1})$, measured by the amount of fundamental
operations consisting only of evaluations of one operator and resolvent of
another operator, for finding an $\epsilon$-residual solution of the structured
strongly MI problem. We then propose another PDE method for solving a
structured non-strongly MI problem by applying the above PDE method to
approximately solve a sequence of structured strongly MI problems. The
resulting PDE method is parameter-free, equipped with a verifiable termination
criterion, and enjoys an operation complexity of ${\cal O}(\epsilon^{-1}\log
\epsilon^{-1})$ for finding an $\epsilon$-residual solution of the structured
non-strongly MI problem. As a consequence, we apply the latter PDE method to
convex conic optimization, conic constrained saddle point, and variational
inequality problems, and obtain complexity results for finding an
$\epsilon$-KKT or $\epsilon$-residual solution of them under local Lipschitz
continuity. To the best of our knowledge, no prior studies were conducted to
investigate methods with complexity guarantees for solving the aforementioned
problems under local Lipschitz continuity. All the complexity results obtained
in this paper are entirely new.
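As a rough illustration of the ingredients named above, the following Python sketch combines a forward-backward step, an operator extrapolation term, and a backtracking line search on a toy strongly monotone problem. The operator $F$, the $\ell_1$ resolvent, and all parameter choices here are illustrative assumptions, not the authors' exact PDE scheme.

```python
import numpy as np

def soft_threshold(v, tau):
    """Resolvent (prox) of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pde_sketch(F, prox, x0, t0=1.0, shrink=0.5, theta=0.4, tol=1e-9, max_iter=5000):
    """Forward-backward splitting with a point/operator extrapolation step
    and a backtracking line search on the step size t.  An illustrative
    sketch of the ingredients described in the abstract only."""
    x = np.asarray(x0, dtype=float).copy()
    F_prev = F(x)
    t = t0
    for _ in range(max_iter):
        Fx = F(x)
        while True:
            # extrapolated forward step, then backward (resolvent) step
            y = prox(x - t * (2.0 * Fx - F_prev), t)
            # backtracking: accept t once a local Lipschitz-type test holds
            if t * np.linalg.norm(F(y) - Fx) <= theta * np.linalg.norm(y - x) + 1e-16:
                break
            t *= shrink
        # verifiable residual-based termination criterion
        if np.linalg.norm(y - x) <= tol * max(t, 1e-16):
            return y
        x, F_prev = y, Fx
    return x

# Toy strongly monotone instance: F(x) = A x + b with A positive definite,
# plus the subdifferential of 0.1 * ||x||_1 (resolvent = soft-thresholding).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)                    # strongly monotone and Lipschitz
b = rng.standard_normal(5)
F = lambda x: A @ x + b
prox = lambda v, t: soft_threshold(v, 0.1 * t)
sol = pde_sketch(F, prox, np.zeros(5))
```

The backtracking here only ever shrinks the step, so after finitely many reductions the local Lipschitz-type test is always accepted; the returned point approximately satisfies the fixed-point condition $x = \mathrm{prox}_{t g}(x - t F(x))$.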
Related papers
- Stochastic First-Order Methods with Non-smooth and Non-Euclidean Proximal Terms for Nonconvex High-Dimensional Stochastic Optimization [2.0657831823662574]
When the nonconvex problem is further complicated by stochasticity, the sample complexity of stochastic first-order methods may depend linearly on the problem dimension $d$, which is undesirable for high-dimensional problems.
Our algorithms allow for dimension-insensitive complexity estimates.
We prove that DISFOM can sharpen the variance bound, achieving a sample complexity of $\mathcal{O}(\log d)/\epsilon^4$.
arXiv Detail & Related papers (2024-06-27T18:38:42Z) - Variance Reduced Halpern Iteration for Finite-Sum Monotone Inclusions [18.086061048484616]
We study finite-sum monotone inclusion problems, which model broad classes of equilibrium problems.
Our main contributions are variants of the classical Halpern iteration that employ variance reduction to obtain improved complexity guarantees.
We argue that, up to poly-logarithmic factors, this complexity is unimprovable in the monotone Lipschitz setting.
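For context, the classical Halpern iteration underlying these variants anchors each step at the starting point: $x_{k+1} = \lambda_k x_0 + (1-\lambda_k) T(x_k)$ with $\lambda_k \to 0$. A minimal Python sketch on an illustrative nonexpansive operator follows; the paper's variance-reduction ingredient is omitted.

```python
import numpy as np

def halpern(T, x0, num_iters=5000):
    """Halpern iteration x_{k+1} = lam_k * x0 + (1 - lam_k) * T(x_k),
    with lam_k = 1/(k+2); converges to a fixed point of a nonexpansive T.
    Plain sketch without variance reduction."""
    x = x0.copy()
    for k in range(num_iters):
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * T(x)
    return x

# Illustrative nonexpansive map: resolvent of a monotone rotation operator.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric => monotone
J = np.linalg.inv(np.eye(2) + A)          # resolvent (I + A)^{-1}, nonexpansive
T = lambda x: J @ x
x_star = halpern(T, np.array([1.0, 1.0])) # unique fixed point is the origin
```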
arXiv Detail & Related papers (2023-10-04T17:24:45Z) - Decentralized Weakly Convex Optimization Over the Stiefel Manifold [28.427697270742947]
We focus on weakly convex optimization over the Stiefel manifold in the decentralized setting, where a connected network of agents cooperates to solve the problem.
We propose a method called the decentralized Riemannian subgradient method (DRSM) for forcing a natural stationarity measure below a prescribed tolerance.
arXiv Detail & Related papers (2023-03-31T02:56:23Z) - A Newton-CG based barrier-augmented Lagrangian method for general
nonconvex conic optimization [77.8485863487028]
In this paper we consider finding an approximate second-order stationary point (SOSP) of a problem that minimizes a twice continuously differentiable function subject to general nonconvex conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Min-Max Optimization Made Simple: Approximating the Proximal Point
Method via Contraction Maps [77.8999425439444]
We present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems.
Our work is based on the fact that the update rule of the Proximal Point method can be approximated, to any desired accuracy, by iterating a contraction map.
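The idea of approximating a proximal point update via a contraction map can be sketched as follows: the implicit step $z^{+} = z - \eta V(z^{+})$ is the fixed point of the map $u \mapsto z - \eta V(u)$, which is a contraction whenever $\eta$ times the Lipschitz constant of $V$ is below one. The bilinear example and all parameters below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def approx_prox_point(V, z0, eta=0.5, outer=200, inner=10):
    """One proximal point step solves z+ = z - eta * V(z+); approximate it
    by iterating the contraction u -> z - eta * V(u) a fixed number of
    times (a contraction whenever eta * Lip(V) < 1).  Illustrative sketch."""
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(outer):
        u = z
        for _ in range(inner):            # inner fixed-point iterations
            u = z - eta * V(u)
        z = u                             # accept the approximate prox step
    return z

# Bilinear saddle problem f(x, y) = x * y, operator V(x, y) = (y, -x), Lip = 1.
V = lambda z: np.array([z[1], -z[0]])
z_star = approx_prox_point(V, np.array([1.0, 1.0]))   # saddle point at (0, 0)
```

For this bilinear instance, a plain explicit descent-ascent step would circle the saddle point, while the (approximate) implicit step contracts toward it; that contrast is what motivates approximating the proximal point update.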
arXiv Detail & Related papers (2023-01-10T12:18:47Z) - Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation
Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method with the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Gradient-Free Methods for Deterministic and Stochastic Nonsmooth
Nonconvex Optimization [94.19177623349947]
Nonsmooth nonconvex optimization problems emerge in machine learning and business decision making.
Two core challenges impede the development of efficient methods with finite-time convergence guarantees.
Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results.
arXiv Detail & Related papers (2022-09-12T06:53:24Z) - Randomized Coordinate Subgradient Method for Nonsmooth Composite
Optimization [11.017632675093628]
Coordinate-type subgradient methods for addressing nonsmooth problems are relatively underexplored due to the restrictive properties imposed by the Lipschitz-type assumptions they rely on.
arXiv Detail & Related papers (2022-06-30T02:17:11Z) - A Projection-free Algorithm for Constrained Stochastic Multi-level
Composition Optimization [12.096252285460814]
We propose a projection-free conditional gradient-type algorithm for composition optimization.
We show that the numbers of calls to the stochastic oracle and the linear-minimization oracle required by the proposed algorithm are of order $\mathcal{O}_T(\epsilon^{-2})$ and $\mathcal{O}_T(\epsilon^{-3})$, respectively.
arXiv Detail & Related papers (2022-02-09T06:05:38Z) - Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query
Complexity [109.54166127479093]
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving machine learning problems.
In this paper, we propose a class of faster zeroth-order stochastic alternating direction method of multipliers (ADMM) to solve nonconvex finite-sum problems.
We show that the proposed methods achieve a lower function query complexity for finding an $\epsilon$-stationary point.
At the same time, we propose a class of faster zeroth-order online ADMM methods.
arXiv Detail & Related papers (2019-07-30T02:21:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.