Halpern-Type Accelerated and Splitting Algorithms For Monotone
Inclusions
- URL: http://arxiv.org/abs/2110.08150v1
- Date: Fri, 15 Oct 2021 15:26:32 GMT
- Title: Halpern-Type Accelerated and Splitting Algorithms For Monotone
Inclusions
- Authors: Quoc Tran-Dinh and Yang Luo
- Abstract summary: We develop a new class of accelerated algorithms to solve some classes of maximally monotone equations.
Our methods rely on a so-called Halpern-type fixed-point iteration from [32], which has recently been exploited by a number of researchers.
- Score: 14.445131747570962
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we develop a new class of accelerated algorithms to solve some
classes of maximally monotone equations as well as monotone inclusions. Instead
of using Nesterov's acceleration approach, our methods rely on a so-called
Halpern-type fixed-point iteration from [32], which has recently been exploited by a
number of researchers, including [24, 70]. Firstly, we derive a new variant of the
anchored extra-gradient scheme in [70] based on Popov's past extra-gradient
method to solve a maximally monotone equation $G(x) = 0$. We show that our
method achieves the same $\mathcal{O}(1/k)$ convergence rate (up to a constant
factor) as the anchored extra-gradient algorithm on the operator norm $\Vert
G(x_k)\Vert$, but requires only one evaluation of $G$ at each iteration,
where $k$ is the iteration counter. Next, we develop two splitting algorithms
to approximate a zero point of the sum of two maximally monotone operators. The
first algorithm originates from the anchored extra-gradient method combined
with a splitting technique, while the second one is a Popov-type variant that
reduces the per-iteration complexity. Both algorithms appear to be new and
can be viewed as accelerated variants of the Douglas-Rachford (DR) splitting
method. They both achieve $\mathcal{O}(1/k)$ rates on the norm $\Vert
G_{\gamma}(x_k)\Vert$ of the forward-backward residual operator
$G_{\gamma}(\cdot)$ associated with the problem. We also propose a new
accelerated Douglas-Rachford splitting scheme for solving this problem which
achieves $\mathcal{O}(1/k)$ convergence rate on $\Vert G_{\gamma}(x_k)\Vert$
under only maximal monotonicity assumptions. Finally, we specialize our first
algorithm to solve convex-concave minimax problems and apply our accelerated DR
scheme to derive a new variant of the alternating direction method of
multipliers (ADMM).
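For intuition, here is a minimal, hedged sketch of the anchored (Halpern-type) extra-gradient template that the abstract builds on, applied to a monotone equation $G(x) = 0$. The function name `anchored_extragradient`, the anchor weights $\beta_k = 1/(k+2)$, the step size `eta`, and the toy operator below are illustrative assumptions rather than the authors' exact scheme; the paper's Popov-based variant differs by reusing the previous evaluation of $G$ so that only one evaluation is needed per iteration.

```python
import numpy as np

def anchored_extragradient(G, x0, eta, max_iter=1000, tol=1e-8):
    """Minimal sketch of an anchored (Halpern-type) extra-gradient iteration for G(x) = 0.

    This uses two evaluations of G per iteration, as in the anchored extra-gradient
    template; the Popov-based variant described in the abstract would reuse the
    previous evaluation so that only one call to G is needed per iteration.
    The anchor weights beta_k = 1/(k+2) and the step size eta are illustrative choices.
    """
    x = x0.copy()
    for k in range(max_iter):
        g = G(x)
        if np.linalg.norm(g) <= tol:          # the stated O(1/k) rate is on ||G(x_k)||
            break
        beta = 1.0 / (k + 2)                  # anchor coefficient pulling back toward x0
        y = x + beta * (x0 - x) - eta * g     # half step using G(x_k)
        x = x + beta * (x0 - x) - eta * G(y)  # full step using G at the half point
    return x

if __name__ == "__main__":
    # Toy monotone operator: G(x) = M x with M = skew-symmetric part + 0.1*I,
    # so G is monotone and Lipschitz; eta is chosen small relative to ||M||.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    M = (A - A.T) + 0.1 * np.eye(5)
    G = lambda x: M @ x
    eta = 0.1 / np.linalg.norm(M, 2)
    x_final = anchored_extragradient(G, rng.standard_normal(5), eta, max_iter=5000)
    print("||G(x_k)|| =", np.linalg.norm(G(x_final)))
```

The anchor term $\beta_k (x_0 - x_k)$ is what distinguishes the Halpern-type update from a plain extra-gradient step, and it is the mechanism behind the $\mathcal{O}(1/k)$ rate on $\Vert G(x_k)\Vert$ discussed in the abstract.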
Related papers
- A Proximal Modified Quasi-Newton Method for Nonsmooth Regularized Optimization [0.7373617024876725]
arXiv Detail & Related papers (2024-09-28T18:16:32Z) - Inverting the Leverage Score Gradient: An Efficient Approximate Newton Method [10.742859956268655]
This paper aims to recover the intrinsic model parameters given the leverage score gradient.
We specifically scrutinize the inversion of the leverage score gradient, denoted as $g(x)$.
arXiv Detail & Related papers (2024-08-21T01:39:42Z) - Accelerated Variance-Reduced Forward-Reflected Methods for Root-Finding Problems [8.0153031008486]
We propose a novel class of Nesterov-accelerated forward-reflected methods with variance reduction for solving root-finding problems.
Our algorithm is single-loop and leverages a new family of unbiased variance-reduced estimators specifically designed for root-finding problems.
arXiv Detail & Related papers (2024-06-04T15:23:29Z) - Efficient Frameworks for Generalized Low-Rank Matrix Bandit Problems [61.85150061213987]
We study the generalized low-rank matrix bandit problem, proposed in \cite{lu2021low} under the Generalized Linear Model (GLM) framework.
To overcome the computational infeasibility and theoretical restrictions of existing algorithms, we first propose the G-ESTT framework.
We show that G-ESTT can achieve a regret bound of $\tilde{O}(\sqrt{(d_1+d_2)^{3/2}Mr^{3/2}T})$, while G-ESTS can achieve the $\tilde{O}$
arXiv Detail & Related papers (2024-01-14T14:14:19Z) - Randomized Block-Coordinate Optimistic Gradient Algorithms for
Root-Finding Problems [8.0153031008486]
We develop two new algorithms to approximate a solution of nonlinear equations in large-scale settings.
We apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning.
arXiv Detail & Related papers (2023-01-08T21:46:27Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and that the iterations converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Alternatives to a nonhomogeneous partial differential equation quantum
algorithm [52.77024349608834]
We propose a quantum algorithm for solving nonhomogeneous linear partial differential equations of the form $A\psi(\textbf{r})=f(\textbf{r})$.
These achievements enable easier experimental implementation of the quantum algorithm with present-day technology.
arXiv Detail & Related papers (2022-05-11T14:29:39Z) - Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean
Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z) - Practical and Parallelizable Algorithms for Non-Monotone Submodular
Maximization with Size Constraint [20.104148319012854]
We present practical and parallelizable algorithms for maximizing a submodular function, not necessarily monotone, subject to a size constraint.
We improve the best approximation factor achieved by an algorithm with optimal adaptivity and nearly optimal query complexity to $0.193 - \varepsilon$.
arXiv Detail & Related papers (2020-09-03T22:43:55Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
arXiv Detail & Related papers (2020-08-12T15:40:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.