Non-convex Min-Max Optimization: Applications, Challenges, and Recent
Theoretical Advances
- URL: http://arxiv.org/abs/2006.08141v2
- Date: Tue, 18 Aug 2020 08:25:59 GMT
- Title: Non-convex Min-Max Optimization: Applications, Challenges, and Recent
Theoretical Advances
- Authors: Meisam Razaviyayn, Tianjian Huang, Songtao Lu, Maher Nouiehed, Maziar
Sanjabi, Mingyi Hong
- Abstract summary: The min-max problem, also known as the saddle point problem, is a classical optimization problem which is also studied in the context of zero-sum games.
- Score: 58.54078318403909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The min-max optimization problem, also known as the saddle point problem, is
a classical optimization problem which is also studied in the context of
zero-sum games. Given a class of objective functions, the goal is to find a
value for the argument which leads to a small objective value even for the
worst case function in the given class. Min-max optimization problems have
recently become very popular in a wide range of signal and data processing
applications such as fair beamforming, training generative adversarial networks
(GANs), and robust machine learning, to just name a few. The overarching goal
of this article is to provide a survey of recent advances for an important
subclass of min-max problems, where the minimization and maximization problems
can be non-convex and/or non-concave. In particular, we will first present a
number of applications to showcase the importance of such min-max problems;
then we discuss key theoretical challenges, and provide a selective review of
some exciting recent theoretical and algorithmic advances in tackling
non-convex min-max problems. Finally, we will point out open questions and
future research directions.
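In symbols, the saddle point problem described above is typically written as

$\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} f(x, y),$

where the feasible sets $\mathcal{X}$, $\mathcal{Y}$ and the smoothness assumptions vary across the works below (a minimal sketch, not a quote from the abstract). The "worst case function in a given class" reading corresponds to $\min_{x} \max_{f \in \mathcal{F}} f(x)$, recovered by letting $y$ index the class $\mathcal{F}$; in the non-convex min-max setting surveyed here, $f(\cdot, y)$ need not be convex and $f(x, \cdot)$ need not be concave.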
Related papers
- Statistical Mechanics of Min-Max Problems [0.0]
This study introduces a statistical mechanical formalism for analyzing the equilibrium values of min-max problems in the high-dimensional limit.
As a first step, we apply this formalism to bi-linear min-max games and simple GANs.
This formalism provides a groundwork for a deeper theoretical analysis of the equilibrium properties in various machine learning methods.
arXiv Detail & Related papers (2024-09-09T20:24:19Z) - Can Decentralized Stochastic Minimax Optimization Algorithms Converge
Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its application in a wide range of machine learning settings.
This paper develops two novel decentralized minimax optimization algorithms for finite-sum nonconvex-nonconcave problems.
arXiv Detail & Related papers (2023-04-24T02:19:39Z) - Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
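For concreteness, a convex-submodular minimax problem of this kind can be sketched (with hypothetical notation, not taken from the cited abstract) as

$\min_{x \in \mathcal{X}} \max_{S \subseteq V} f(x, S),$

where $f(\cdot, S)$ is convex in the continuous variable $x$ for every fixed set $S$, and $f(x, \cdot)$ is submodular over the ground set $V$ for every fixed $x$.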
arXiv Detail & Related papers (2021-11-01T21:06:35Z) - Decentralized Personalized Federated Learning for Min-Max Problems [79.61785798152529]
This paper is the first to study personalized federated learning (PFL) for saddle point problems, which encompass a broader range of optimization problems.
We propose new algorithms to address this problem and provide a theoretical analysis for smooth (strongly) convex-(strongly) concave saddle point problems.
Numerical experiments for bilinear problems and neural networks with adversarial noise demonstrate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2021-06-14T10:36:25Z) - A Decentralized Adaptive Momentum Method for Solving a Class of Min-Max
Optimization Problems [9.653157073271021]
We develop a decentralized adaptive momentum (ADAM)-type algorithm for solving a class of min-max optimization problems.
We obtain non-asymptotic rates of convergence of the proposed algorithm for finding a (stochastic) first-order Nash equilibrium point.
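For reference, in the unconstrained smooth case an $\epsilon$-first-order Nash equilibrium of $\min_x \max_y f(x, y)$ is commonly formalized as a pair $(x^*, y^*)$ with

$\|\nabla_x f(x^*, y^*)\| \le \epsilon$ and $\|\nabla_y f(x^*, y^*)\| \le \epsilon,$

with stochastic gradient estimates in place of exact gradients in the stochastic setting (a standard definition, not quoted from the cited abstract).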
arXiv Detail & Related papers (2021-06-10T22:32:01Z) - Generative Minimization Networks: Training GANs Without Competition [34.808210988732405]
Recent applications of generative models, particularly GANs, have triggered interest in solving min-max games for which standard optimization techniques are often not suitable.
We provide novel convergence guarantees for the proposed objective and demonstrate why the obtained limit point solves the problem better than known techniques.
arXiv Detail & Related papers (2021-03-23T17:01:08Z) - Direct-Search for a Class of Stochastic Min-Max Problems [0.0]
We investigate the use of direct-search methods that only access the objective function through an oracle.
We prove convergence of this technique under mild assumptions.
Our analysis is the first one to address the convergence of a direct-search method for min-max objectives in a stochastic setting.
arXiv Detail & Related papers (2021-02-22T22:23:58Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep neural networks and other problems in data analysis, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient method which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional vector spaces.
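The extragradient template being generalized can be sketched as follows (a minimal sketch for the Euclidean, deterministic case; the notation is not taken from the cited abstract). Writing $z = (x, y)$ and $F(z) = (\nabla_x f(x, y), -\nabla_y f(x, y))$ for $\min_x \max_y f(x, y)$, with step size $\eta$:

$z_{k+1/2} = z_k - \eta F(z_k), \qquad z_{k+1} = z_k - \eta F(z_{k+1/2}).$

The cited work studies how updates of this type behave beyond the convex-concave and Euclidean settings.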
arXiv Detail & Related papers (2020-10-31T21:35:42Z)