DIPPA: An improved Method for Bilinear Saddle Point Problems
- URL: http://arxiv.org/abs/2103.08270v1
- Date: Mon, 15 Mar 2021 10:55:30 GMT
- Title: DIPPA: An improved Method for Bilinear Saddle Point Problems
- Authors: Guangzeng Xie, Yuze Han, Zhihua Zhang
- Abstract summary: This paper studies bilinear saddle point problems $\min_{\bf{x}} \max_{\bf{y}} g(\bf{x}) + \bf{x}^{\top} \bf{A} \bf{y} - h(\bf{y})$ and proposes a gradient-only algorithm with complexity $\tilde{\mathcal{O}}\left( \frac{\|\bf{A}\|_2}{\sqrt{\mu_x \mu_y}} + \sqrt[4]{\kappa_x \kappa_y (\kappa_x + \kappa_y)} \right)$.
- Score: 18.65143269806133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies bilinear saddle point problems $\min_{\bf{x}}
\max_{\bf{y}} g(\bf{x}) + \bf{x}^{\top} \bf{A} \bf{y} - h(\bf{y})$, where the
functions $g, h$ are smooth and strongly-convex. When the gradient and proximal
oracles related to $g$ and $h$ are accessible, optimal algorithms have already
been developed in the literature \cite{chambolle2011first,
palaniappan2016stochastic}. However, the proximal operator is not always easy
to compute, especially in constrained zero-sum matrix games
\cite{zhang2020sparsified}. This work proposes a new algorithm which only
requires access to the gradients of $g$ and $h$. Our algorithm achieves a
complexity upper bound $\tilde{\mathcal{O}}\left(
\frac{\|\bf{A}\|_2}{\sqrt{\mu_x \mu_y}} + \sqrt[4]{\kappa_x \kappa_y (\kappa_x
+ \kappa_y)} \right)$ which has optimal dependency on the coupling condition
number $\frac{\|\bf{A}\|_2}{\sqrt{\mu_x \mu_y}}$ up to logarithmic factors.
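Since the abstract specifies only the oracle model (gradients of $g$ and $h$ plus matrix-vector products with $\bf{A}$), here is a minimal runnable sketch of that model on a toy quadratic instance. The solver shown is the classical extragradient baseline, not the paper's DIPPA method; the instance, step size, and all names (`grad_g`, `grad_h`, `extragradient`) are illustrative assumptions.

```python
import numpy as np

# Toy instance (illustrative assumption, not from the paper):
# g(x) = (mu_x / 2) ||x||^2 and h(y) = (mu_y / 2) ||y||^2, so the unique
# saddle point of g(x) + x^T A y - h(y) is (x, y) = (0, 0).
rng = np.random.default_rng(0)
n, m = 20, 30
A = rng.standard_normal((n, m))
mu_x, mu_y = 1.0, 1.0

grad_g = lambda x: mu_x * x  # gradient oracle for g
grad_h = lambda y: mu_y * y  # gradient oracle for h

def extragradient(x, y, steps=2000):
    """Classical extragradient in the gradient-only oracle model.

    This is NOT DIPPA; it only illustrates that each iteration touches
    g and h exclusively through their gradients and A through
    matrix-vector products.
    """
    # 1/L step size, where L bounds the Lipschitz constant of the
    # saddle operator (grad_g(x) + A y, grad_h(y) - A^T x).
    eta = 1.0 / (np.linalg.norm(A, 2) + max(mu_x, mu_y))
    for _ in range(steps):
        # Half step to a midpoint, then a full step using midpoint gradients.
        x_half = x - eta * (grad_g(x) + A @ y)
        y_half = y - eta * (grad_h(y) - A.T @ x)
        x = x - eta * (grad_g(x_half) + A @ y_half)
        y = y - eta * (grad_h(y_half) - A.T @ x_half)
    return x, y

x, y = extragradient(rng.standard_normal(n), rng.standard_normal(m))
print(np.linalg.norm(x), np.linalg.norm(y))  # both should be near zero
```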
Related papers
- An Optimal Algorithm for Strongly Convex Min-min Optimization [79.11017157526815]
Existing optimal first-order methods require $\mathcal{O}\left(\sqrt{\max\{\kappa_x, \kappa_y\}} \log(1/\epsilon)\right)$ computations of both $\nabla_x f(x,y)$ and $\nabla_y f(x,y)$.
We propose a new algorithm that only requires $\mathcal{O}\left(\sqrt{\kappa_x} \log(1/\epsilon)\right)$ computations of $\nabla_x f(x,y)$.
arXiv Detail & Related papers (2022-12-29T19:26:12Z) - The First Optimal Algorithm for Smooth and
Strongly-Convex-Strongly-Concave Minimax Optimization [88.91190483500932]
In this paper, we revisit the smooth and strongly-convex-strongly-concave minimax optimization problem.
Existing state-of-the-art methods do not match the lower bound $\Omega\left(\sqrt{\kappa_x \kappa_y} \log\frac{1}{\epsilon}\right)$; the best known complexity is $\tilde{\mathcal{O}}\left(\sqrt{\kappa_x \kappa_y} \log^3(\kappa_x \kappa_y) \log\frac{1}{\epsilon}\right)$.
We fix this fundamental issue by providing the first algorithm with $\mathcal{O}\left(\sqrt{\kappa_x \kappa_y} \log\frac{1}{\epsilon}\right)$ gradient complexity.
arXiv Detail & Related papers (2022-05-11T17:33:07Z) - Sharper Rates for Separable Minimax and Finite Sum Optimization via
Primal-Dual Extragradient Methods [39.87865197769047]
We study separable minimax optimization problems $\min_x \max_y f(x) - g(y) + h(x, y)$, where $f$ and $g$ have smoothness and strong convexity parameters $(L_x, \mu_x)$, $(L_y, \mu_y)$, and $h$ is convex-concave with a $(\Lambda_{xx}, \Lambda_{xy}, \Lambda_{yy})$-blockwise operator norm bounded Hessian.
arXiv Detail & Related papers (2022-02-09T18:57:47Z) - On the Self-Penalization Phenomenon in Feature Selection [69.16452769334367]
We describe an implicit sparsity-inducing mechanism based on minimization over a family of kernels.
As an application, we use this sparsity-inducing mechanism to build algorithms that are consistent for feature selection.
arXiv Detail & Related papers (2021-10-12T09:36:41Z) - Finding Second-Order Stationary Point for Nonconvex-Strongly-Concave
Minimax Problem [16.689304539024036]
In this paper, we consider the non-asymptotic behavior of finding a second-order stationary point for the minimax problem.
For high-dimensional problems, we propose an efficient method that avoids the expensive cost of the second-order oracle by solving the cubic sub-problem via gradient descent and Chebyshev expansion.
arXiv Detail & Related papers (2021-10-10T14:54:23Z) - Near Optimal Stochastic Algorithms for Finite-Sum Unbalanced
Convex-Concave Minimax Optimization [41.432757205864796]
This paper considers first-order algorithms for convex-concave minimax problems of the form $\min_{\bf{x}} \max_{\bf{y}} f(\bf{x}, \bf{y})$.
Our methods can be used to solve more general unbalanced minimax problems and are also near optimal.
arXiv Detail & Related papers (2021-06-03T11:30:32Z) - Agnostic Q-learning with Function Approximation in Deterministic
Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$ trajectories.
arXiv Detail & Related papers (2020-02-17T18:41:49Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the
Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$ upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both \emph{compatible} with oracles and achieves complexity bounds of $\tilde{O}\left((n^2 + n\sqrt{L/\mu}) \log(1/\epsilon)\right)$ and $O(n\sqrt{L/\epsilon})$, for $\mu > 0$ and $\mu = 0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z) - Near-Optimal Algorithms for Minimax Optimization [115.21519161773287]
The paper presents the first algorithm with complexity $\tilde{O}\left(\sqrt{\kappa_{\bf{x}} \kappa_{\bf{y}}}\right)$, matching the lower bound up to logarithmic factors.
The paper also presents algorithms that match or outperform all existing methods in these settings in terms of complexity.
arXiv Detail & Related papers (2020-02-05T16:49:09Z)