Accelerated Variance-Reduced Forward-Reflected Methods for Root-Finding Problems
- URL: http://arxiv.org/abs/2406.02413v1
- Date: Tue, 4 Jun 2024 15:23:29 GMT
- Title: Accelerated Variance-Reduced Forward-Reflected Methods for Root-Finding Problems
- Authors: Quoc Tran-Dinh
- Abstract summary: We propose a novel class of Nesterov's accelerated forward-reflected-based methods with variance reduction to solve root-finding problems.
Our algorithm is single-loop and leverages a new family of unbiased variance-reduced estimators specifically designed for root-finding problems.
- Score: 8.0153031008486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel class of Nesterov's stochastic accelerated forward-reflected-based methods with variance reduction to solve root-finding problems under $\frac{1}{L}$-co-coerciveness. Our algorithm is single-loop and leverages a new family of unbiased variance-reduced estimators specifically designed for root-finding problems. It achieves both $\mathcal{O}(L^2/k^2)$ and $o(1/k^2)$-last-iterate convergence rates in terms of expected operator squared norm, where $k$ denotes the iteration counter. We instantiate our framework for two prominent estimators: SVRG and SAGA. By an appropriate choice of parameters, both variants attain an oracle complexity of $\mathcal{O}( n + Ln^{2/3}\epsilon^{-1})$ to reach an $\epsilon$-solution, where $n$ represents the number of summands in the finite-sum operator. Furthermore, under $\mu$-strong quasi-monotonicity, our method achieves a linear convergence rate and an oracle complexity of $\mathcal{O}(n+ \kappa n^{2/3}\log(\epsilon^{-1}))$, where $\kappa := \frac{L}{\mu}$. We extend our approach to solve a class of finite-sum monotone inclusions, demonstrating that our schemes retain the same theoretical guarantees as in the equation setting. Finally, numerical experiments validate our algorithms and demonstrate their promising performance compared to state-of-the-art methods.
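As a rough illustration of the single-loop variance-reduced forward-reflected template the abstract describes, the Python sketch below pairs a forward-reflected step with a loopless SVRG estimator for a finite-sum operator $G(x) = \frac{1}{n}\sum_{i=1}^n G_i(x)$. The step size, the snapshot probability, and the omission of the Nesterov acceleration term are illustrative simplifications, not the paper's exact scheme.

```python
import numpy as np

def svrg_forward_reflected(G_i, n, x0, eta=0.1, p=None, iters=5000, seed=0):
    """Sketch of a single-loop (loopless) SVRG-type forward-reflected
    iteration for finding a root of G(x) = (1/n) * sum_i G_i(x).
    `eta`, `p`, and the omission of Nesterov acceleration are illustrative
    simplifications, not the paper's exact scheme."""
    rng = np.random.default_rng(seed)
    # Snapshot probability ~ n^{-2/3}: an illustrative choice echoing the
    # n^{2/3} factor in the stated oracle complexity.
    p = n ** (-2.0 / 3.0) if p is None else p

    def full_op(x):
        return sum(G_i(i, x) for i in range(n)) / n

    x = np.asarray(x0, dtype=float).copy()
    snap = x.copy()
    G_snap = full_op(snap)        # full operator value at the snapshot
    g_prev = G_snap.copy()        # estimator from the previous iteration

    for _ in range(iters):
        i = int(rng.integers(n))
        # Unbiased SVRG estimator of G(x): correct the snapshot value
        # with a single randomly sampled summand.
        g = G_i(i, x) - G_i(i, snap) + G_snap
        # Forward-reflected step: use 2*g_k - g_{k-1} in place of G(x_k).
        x = x - eta * (2.0 * g - g_prev)
        g_prev = g
        if rng.random() < p:      # occasionally refresh the snapshot
            snap = x.copy()
            G_snap = full_op(snap)
    return x

if __name__ == "__main__":
    # Toy finite-sum operator: G_i(x) = A_i x - b_i with SPD A_i,
    # so the averaged operator is strongly monotone and co-coercive.
    rng = np.random.default_rng(1)
    n, d = 50, 10
    A = [(1.0 + rng.random()) * np.eye(d) for _ in range(n)]
    b = [rng.standard_normal(d) for _ in range(n)]
    x_hat = svrg_forward_reflected(lambda i, x: A[i] @ x - b[i], n,
                                   np.zeros(d), eta=0.2)
```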
Related papers
- Second-Order Min-Max Optimization with Lazy Hessians [17.17389531402505]
This paper studies second-order methods for convex-concave minimax optimization.
We show that the computation cost can be reduced by reusing the Hessian across iterations (see the sketch after this entry).
arXiv Detail & Related papers (2024-10-12T15:30:17Z)
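A minimal sketch of the lazy-Hessian idea, applied for simplicity to a plain Newton iteration rather than the paper's min-max setting; the refresh period `m` is an illustrative choice.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def lazy_newton(grad, hess, x0, m=5, iters=50):
    """Sketch of Newton's method with lazy Hessian updates: the Hessian is
    recomputed and factored only every `m` iterations, while the gradient
    is evaluated at every step. `m` and `iters` are illustrative choices."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        if k % m == 0:
            lu = lu_factor(hess(x))       # expensive step, amortized over m iterations
        x = x - lu_solve(lu, grad(x))     # cheap step with the stale factorization
    return x
```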
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance-reduced zeroth-order (ZO) gradient estimators for complex optimization problems.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{d n^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$; a two-point ZO estimator is sketched after this entry.
arXiv Detail & Related papers (2024-10-03T15:04:01Z)
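For context, the standard two-point zeroth-order gradient estimator that such ZO methods build on; this is the generic construction, not the paper's specific variance-reduced estimators, and `mu` and `num_dirs` are illustrative choices.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=10, seed=0):
    """Standard two-point zeroth-order gradient estimator: averages random
    Gaussian finite-difference directions. Only function values of `f`
    are queried, never its gradient."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        # Directional finite difference along u, mapped back to a gradient estimate.
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs
```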
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - A Proximal Modified Quasi-Newton Method for Nonsmooth Regularized Optimization [0.7373617024876725]
The analysis relies on Lipschitz continuity of $\nabla f$.
arXiv Detail & Related papers (2024-09-28T18:16:32Z)
- An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale, and demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z)
- Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis [36.646876613637325]
We study finite-sum distributed optimization problems involving a master node and $n-1$ local nodes.
We propose two new algorithms, SVRS and AccSVRS, motivated by previous works.
arXiv Detail & Related papers (2023-04-15T08:18:47Z)
- A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper, we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, which minimizes a twice differentiable function subject to general conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z)
- Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems [8.0153031008486]
We develop two new algorithms to approximate a solution of nonlinear equations in large-scale settings.
We apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning; a randomized block-coordinate optimistic step is sketched after this entry.
arXiv Detail & Related papers (2023-01-08T21:46:27Z)
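A minimal sketch of a randomized block-coordinate optimistic (forward-reflected-type) update; the uniform block sampling and fixed step size are illustrative assumptions, and a practical implementation would evaluate only the sampled block of the operator rather than all of it.

```python
import numpy as np

def block_optimistic_gradient(F, x0, num_blocks, eta=0.1, iters=1000, seed=0):
    """Sketch of a randomized block-coordinate optimistic step for finding
    a root of F: only one randomly chosen coordinate block is updated per
    iteration, using the reflected term 2*F(x_k) - F(x_{k-1})."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    F_prev = F(x)
    blocks = np.array_split(np.arange(x.size), num_blocks)
    for _ in range(iters):
        F_cur = F(x)
        blk = blocks[int(rng.integers(num_blocks))]
        # Optimistic update restricted to the sampled block.
        x[blk] -= eta * (2.0 * F_cur[blk] - F_prev[blk])
        F_prev = F_cur
    return x
```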
- Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z)
- Perseus: A Simple and Optimal High-Order Method for Variational Inequalities [81.32967242727152]
A VI involves finding $x^\star \in \mathcal{X}$ such that $\langle F(x), x - x^\star\rangle \geq 0$ for all $x \in \mathcal{X}$.
We propose a $p^{\text{th}}$-order method that does not require any line-search procedure and provably converges to a weak solution at a rate of $O(\epsilon^{-2/(p+1)})$; the weak and strong solution notions are recalled after this entry.
arXiv Detail & Related papers (2022-05-06T13:29:14Z)
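For reference, the two standard solution notions for a VI with operator $F$ over a set $\mathcal{X}$; the rate above is stated for the weak (Minty) notion. This is textbook material rather than anything specific to Perseus.

```latex
% Strong (Stampacchia) solution: x* in X with <F(x*), x - x*> >= 0 for all x in X.
% Weak (Minty) solution:         x* in X with <F(x),  x - x*> >= 0 for all x in X.
\[
  \underbrace{\langle F(x^\star),\, x - x^\star\rangle \ge 0}_{\text{strong}}
  \quad\text{vs.}\quad
  \underbrace{\langle F(x),\, x - x^\star\rangle \ge 0}_{\text{weak}},
  \qquad \forall x \in \mathcal{X}.
\]
% For a continuous monotone F, the two notions coincide.
```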
- Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions [14.445131747570962]
We develop a new type of accelerated algorithms to solve some classes of maximally monotone equations.
Our methods rely on a so-called Halpern-type fixed-point iteration from [32], which has recently been exploited by a number of researchers; the basic Halpern scheme is sketched after this entry.
arXiv Detail & Related papers (2021-10-15T15:26:32Z)
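The classical Halpern fixed-point iteration underlying such methods anchors every step to the initial point with a vanishing weight. The sketch below uses the standard weight $\beta_k = 1/(k+2)$ and a generic nonexpansive map `T`, both illustrative choices rather than the paper's specific scheme.

```python
import numpy as np

def halpern(T, x0, iters=1000):
    """Sketch of the Halpern fixed-point iteration
        x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k),  beta_k = 1/(k+2).
    For a nonexpansive T with a fixed point, this converges to the fixed
    point of T nearest to the anchor x0."""
    x = np.asarray(x0, dtype=float).copy()
    anchor = x.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)
        # Anchored convex combination of the start point and the operator step.
        x = beta * anchor + (1.0 - beta) * T(x)
    return x
```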
- Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query Complexity [109.54166127479093]
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving machine learning problems.
In this paper, we propose a class of faster zeroth-order alternating direction method of multipliers (ADMM) methods to solve nonconvex finite-sum problems.
We show that these methods achieve a lower function query complexity for finding an $\epsilon$-stationary point.
At the same time, we propose a class of faster zeroth-order online ADMM methods.
arXiv Detail & Related papers (2019-07-30T02:21:43Z)