The Landscape of the Proximal Point Method for Nonconvex-Nonconcave
Minimax Optimization
- URL: http://arxiv.org/abs/2006.08667v3
- Date: Thu, 1 Apr 2021 16:31:07 GMT
- Title: The Landscape of the Proximal Point Method for Nonconvex-Nonconcave
Minimax Optimization
- Authors: Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni
- Abstract summary: Minimax optimization has become a central tool in machine learning with applications in robust optimization, reinforcement learning, GANs, etc.
These applications are often nonconvex-nonconcave, but the existing theory is unable to identify and deal with the fundamental difficulties this poses.
- Score: 10.112779201155005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Minimax optimization has become a central tool in machine learning with
applications in robust optimization, reinforcement learning, GANs, etc. These
applications are often nonconvex-nonconcave, but the existing theory is unable
to identify and deal with the fundamental difficulties this poses. In this
paper, we study the classic proximal point method (PPM) applied to
nonconvex-nonconcave minimax problems. We find that a classic generalization of
the Moreau envelope by Attouch and Wets provides key insights. Critically, we
show this envelope not only smooths the objective but can convexify and
concavify it based on the level of interaction present between the minimizing
and maximizing variables. From this, we identify three distinct regions of
nonconvex-nonconcave problems. When interaction is sufficiently strong, we
derive global linear convergence guarantees. Conversely, when the interaction is
fairly weak, we derive local linear convergence guarantees with a proper
initialization. Between these two settings, we show that PPM may diverge or
converge to a limit cycle.
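To make the iteration concrete, here is a minimal sketch (my own illustration, not code from the paper): it applies exact PPM steps to a quadratic nonconvex-nonconcave toy objective L(x, y) = (a/2)x^2 + bxy - (c/2)y^2, where the toy objective and all parameter values are assumptions chosen so that each proximal subproblem reduces to a 2x2 linear system.

```python
# Minimal sketch (not the paper's code): the proximal point method (PPM)
# on the toy objective L(x, y) = (a/2) x^2 + b x y - (c/2) y^2.
# Each PPM step solves the regularized saddle subproblem
#   min_x max_y  L(x, y) + (1/(2*lam)) (x - xk)^2 - (1/(2*lam)) (y - yk)^2,
# whose first-order optimality conditions form a 2x2 linear system.
import numpy as np

def ppm_step(zk, a, b, c, lam):
    """One exact proximal point step for the quadratic toy problem."""
    xk, yk = zk
    # Stationarity:  (a + 1/lam) x + b y       =  xk / lam
    #                b x - (c + 1/lam) y       = -yk / lam
    M = np.array([[a + 1.0 / lam, b],
                  [b, -(c + 1.0 / lam)]])
    rhs = np.array([xk / lam, -yk / lam])
    return np.linalg.solve(M, rhs)

# Nonconvex-nonconcave toy: a = c = -1 (negative curvature on both sides).
a = c = -1.0
lam = 1.0
for b in (3.0, 0.5):  # strong vs. weak interaction between x and y
    z = np.array([1.0, 1.0])
    norms = []
    for _ in range(10):
        z = ppm_step(z, a, b, c, lam)
        norms.append(np.linalg.norm(z))
    print(f"b = {b}: iterate norms {np.round(norms, 3)}")
# With these parameters each step is a rotation scaled by 1/|b|: when the
# interaction dominates the negative curvature (b = 3), PPM contracts
# linearly to the saddle at the origin; when it does not (b = 0.5), PPM
# diverges on this toy, showing how the regime depends on the interaction.
```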
Related papers
- Vanishing Point Estimation in Uncalibrated Images with Prior Gravity
Direction [82.72686460985297]
We tackle the problem of estimating a Manhattan frame.
We derive two new 2-line solvers, one of which does not suffer from singularities affecting existing solvers.
We also design a new non-minimal method, running on an arbitrary number of lines, to boost the performance in local optimization.
arXiv Detail & Related papers (2023-08-21T13:03:25Z) - Can Decentralized Stochastic Minimax Optimization Algorithms Converge
Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its applications in a wide range of machine learning problems.
This paper develops two novel decentralized minimax optimization algorithms for the finite-sum nonconvex-nonconcave problem.
arXiv Detail & Related papers (2023-04-24T02:19:39Z) - Connected Superlevel Set in (Deep) Reinforcement Learning and its
Application to Minimax Theorems [15.632127097145881]
We show that the superlevel set of the objective function with respect to the policy parameter is always a connected set.
We show that the optimization objective as a function of the policy parameter and reward satisfies a stronger "equiconnectedness" property.
This is the first time such a result is established in the literature.
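In standard notation (the symbols $J$, $\theta$, and $\lambda$ are my own choice here; the paper may write this differently), the connectedness claim can be stated as follows:

```latex
% Superlevel set of the policy-optimization objective J at level lambda:
\[
  U_\lambda \;=\; \{\, \theta \;:\; J(\theta) \ge \lambda \,\},
  \qquad U_\lambda \ \text{is connected for every } \lambda \in \mathbb{R}.
\]
% In words: any two policies achieving return at least lambda can be joined
% by a path of policies whose return never drops below lambda.
```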
arXiv Detail & Related papers (2023-03-23T01:14:36Z) - STay-ON-the-Ridge: Guaranteed Convergence to Local Minimax Equilibrium
in Nonconvex-Nonconcave Games [34.04699387005116]
We propose a method that is guaranteed to converge to a local min-max equilibrium, rather than to limit cycles, for nonconvex-nonconcave objectives.
Our method is designed to exploit the potential nature of the problem.
arXiv Detail & Related papers (2022-10-18T11:33:30Z) - Coordinate Descent Methods for Fractional Minimization [7.716156977428555]
We consider a class of structured fractional minimization problems in which the numerator of the objective is the sum of a differentiable convex function and a convex nonsmooth function.
This problem is difficult to solve since it is nonconvex and nonsmooth.
We propose two methods for solving this problem.
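In symbols, the problem class reads roughly as follows (a hedged restatement; the component names $f$, $h$, $g$ are hypothetical):

```latex
% Structured fractional minimization, with f differentiable convex and
% h convex but possibly nonsmooth in the numerator; g is the denominator:
\[
  \min_{x} \;\; \frac{f(x) + h(x)}{g(x)}
\]
```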
arXiv Detail & Related papers (2022-01-30T00:47:04Z) - Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
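As a rough illustration of how discrete and continuous tools can combine in this setting, the sketch below (my own toy scheme, not the paper's algorithms; the objective, the greedy oracle, and the step size are all assumptions) alternates a greedy submodular inner maximization with a gradient descent step on the continuous variable.

```python
# Hedged sketch of an alternating scheme for
#   min_x  max_{S ⊆ V, |S| <= k}  F(x, S),
# with F convex in x and submodular in S. Not the paper's algorithm.
import numpy as np

def greedy_max(F, x, V, k):
    """Greedy maximization of the set function S -> F(x, S)."""
    S = set()
    for _ in range(k):
        best = max((e for e in V if e not in S),
                   key=lambda e: F(x, S | {e}) - F(x, S), default=None)
        if best is None:
            break
        S.add(best)
    return S

# Toy objective: F(x, S) = ||x||^2 / 2 + sum_{i in S} x_i. It is convex
# in x and modular (hence submodular) in S.
def F(x, S):
    return 0.5 * x @ x + sum(x[i] for i in S)

V, k = range(5), 2
x = np.ones(5)
for t in range(50):
    S = greedy_max(F, x, V, k)                  # discrete ascent step
    grad = x + np.eye(5)[list(S)].sum(axis=0)   # gradient of F(., S) at x
    x -= 0.1 * grad                             # continuous descent step
print("x ->", np.round(x, 3), " S ->", sorted(S))
```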
arXiv Detail & Related papers (2021-11-01T21:06:35Z) - Local AdaGrad-Type Algorithm for Stochastic Convex-Concave Minimax
Problems [80.46370778277186]
Large scale convex-concave minimax problems arise in numerous applications, including game theory, robust training, and training of generative adversarial networks.
We develop a communication-efficient distributed extragradient algorithm, LocalAdaSEG, with an adaptive learning rate suitable for solving convex-concave minimax problems in the Parameter-Server model.
We demonstrate its efficacy through several experiments in both the homogeneous and heterogeneous settings.
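The sketch below illustrates only the communication pattern suggested by the summary (local adaptive extragradient steps plus periodic server-side averaging); the toy operator, the AdaGrad-style step size, and every name here are my own assumptions, not the paper's algorithm.

```python
# Hedged sketch: workers run local extragradient steps with an
# AdaGrad-style diagonal step size; the server periodically averages the
# iterates, so communication happens once per round rather than per step.
import numpy as np

def local_extragradient(z, grad_op, steps, eps=1e-8):
    """A few local extragradient steps with an adaptive step size."""
    acc = np.zeros_like(z)  # accumulated squared gradients (AdaGrad-style)
    for _ in range(steps):
        g = grad_op(z)
        acc += g * g
        z_half = z - g / np.sqrt(acc + eps)           # exploration step
        z = z - grad_op(z_half) / np.sqrt(acc + eps)  # correction step
    return z

# Toy convex-concave saddle operator for L(x, y) = x * y is F(z) = (y, -x);
# each worker sees a slightly shifted operator to mimic heterogeneous data.
workers = [lambda z, s=s: np.array([z[1], -z[0]]) + 0.01 * s
           for s in (-1.0, 1.0)]
z = np.array([1.0, 1.0])
for _ in range(20):  # communication rounds
    z = np.mean([local_extragradient(z, w, steps=5) for w in workers], axis=0)
print("averaged iterate:", np.round(z, 4))
```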
arXiv Detail & Related papers (2021-06-18T09:42:05Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient method which provably converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional vector spaces.
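For reference, the classical Euclidean extragradient update that such methods generalize (the step size $\eta$ and the operator notation are my own):

```latex
% One extragradient iteration on the saddle operator of L, with
%   F(z) = (\nabla_x L(x, y), -\nabla_y L(x, y)),  z = (x, y):
\begin{align*}
  z_{k+1/2} &= z_k - \eta\, F(z_k),       && \text{(exploration step)} \\
  z_{k+1}   &= z_k - \eta\, F(z_{k+1/2}). && \text{(update step)}
\end{align*}
% The generalization replaces the Euclidean geometry implicit in these
% updates with general $\ell_p$-normed spaces.
```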
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - Non-convex Min-Max Optimization: Applications, Challenges, and Recent
Theoretical Advances [58.54078318403909]
The min-max problem, also known as the saddle point problem, is a class of adversarial problems that is also studied in the context of zero-sum games.
arXiv Detail & Related papers (2020-06-15T05:33:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.