A Limited-Capacity Minimax Theorem for Non-Convex Games or: How I
Learned to Stop Worrying about Mixed-Nash and Love Neural Nets
- URL: http://arxiv.org/abs/2002.05820v3
- Date: Mon, 15 Mar 2021 21:37:08 GMT
- Title: A Limited-Capacity Minimax Theorem for Non-Convex Games or: How I
Learned to Stop Worrying about Mixed-Nash and Love Neural Nets
- Authors: Gauthier Gidel, David Balduzzi, Wojciech Marian Czarnecki, Marta
Garnelo and Yoram Bachrach
- Abstract summary: Adversarial training, a special case of multi-objective optimization, is an increasingly prevalent machine learning technique.
Notable applications include GAN-based generative modeling and self-play techniques in reinforcement learning, applied to complex games such as Go or Poker.
- Score: 29.606063890097275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training, a special case of multi-objective optimization, is an
increasingly prevalent machine learning technique: some of its most notable
applications include GAN-based generative modeling and self-play techniques in
reinforcement learning which have been applied to complex games such as Go or
Poker. In practice, a \emph{single} pair of networks is typically trained in
order to find an approximate equilibrium of a highly nonconcave-nonconvex
adversarial problem. However, while a classic result in game theory states such
an equilibrium exists in concave-convex games, there is no analogous guarantee
if the payoff is nonconcave-nonconvex. Our main contribution is to provide an
approximate minimax theorem for a large class of games where the players pick
neural networks including WGAN, StarCraft II, and Blotto Game. Our findings
rely on the fact that despite being nonconcave-nonconvex with respect to the
neural networks' parameters, these games are concave-convex with respect to the
actual models (e.g., functions or distributions) represented by these neural
networks.
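The key observation, that these games are concave-convex in the models even though they are nonconcave-nonconvex in the parameters, can be illustrated with a minimal sketch. This is a toy matching-pennies example, not code from the paper, and all names are illustrative:

```python
import numpy as np

# Toy illustration (not from the paper): a matching-pennies payoff is
# bilinear, hence concave-convex, in the players' mixed strategies, yet
# nonconcave-nonconvex once each strategy is parameterized by a "network"
# (here just a softmax over logits).

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # row player's payoff matrix

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def payoff_in_strategies(p, q):
    # Bilinear in (p, q): linear (hence concave) in p, linear (hence convex) in q.
    return p @ A @ q

def payoff_in_logits(theta, phi):
    # Composing with softmax destroys concavity/convexity in the parameters.
    return payoff_in_strategies(softmax(theta), softmax(phi))

# Concavity in p holds with equality, since the map is linear in p:
q = np.array([0.3, 0.7])
p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.isclose(payoff_in_strategies(0.5 * p1 + 0.5 * p2, q),
                  0.5 * (payoff_in_strategies(p1, q) + payoff_in_strategies(p2, q)))

# Concavity in the logits fails: the midpoint value drops below the average.
phi = np.array([-10.0, 10.0])  # opponent plays (almost) pure second action
t1, t2 = np.array([0.0, 0.0]), np.array([4.0, 0.0])
mid = payoff_in_logits(0.5 * (t1 + t2), phi)
avg = 0.5 * (payoff_in_logits(t1, phi) + payoff_in_logits(t2, phi))
assert mid < avg  # violates concavity in theta
```

The same payoff is thus an easy concave-convex object at the level of mixed strategies (distributions) and a hard nonconcave-nonconvex one at the level of parameters, which is the distinction the paper's minimax theorem exploits.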
Related papers
- Game Theory Meets Statistical Mechanics in Deep Learning Design [0.06990493129893112]
We present a novel deep representation that seamlessly merges principles of game theory with laws of statistical mechanics.
It performs feature extraction, dimensionality reduction, and pattern classification within a single learning framework.
arXiv Detail & Related papers (2024-10-16T06:02:18Z) - LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
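For context, the classic Sinkhorn iteration that this entry says it extends can be sketched as follows. This is plain Sinkhorn normalization only, not LinSATNet's layer, and the function name is illustrative:

```python
import numpy as np

# Plain Sinkhorn normalization (the classic algorithm, not LinSATNet's
# extension): alternately rescale rows and columns of a positive matrix
# until both marginals match the targets.

def sinkhorn(K, row_marginals, col_marginals, n_iters=500):
    P = K.copy()
    for _ in range(n_iters):
        P *= (row_marginals / P.sum(axis=1))[:, None]  # match row sums
        P *= (col_marginals / P.sum(axis=0))[None, :]  # match column sums
    return P

K = np.exp(np.random.default_rng(0).normal(size=(3, 3)))
P = sinkhorn(K, np.full(3, 1 / 3), np.full(3, 1 / 3))
assert np.allclose(P.sum(axis=1), 1 / 3, atol=1e-6)  # row marginals satisfied
assert np.allclose(P.sum(axis=0), 1 / 3, atol=1e-6)  # column marginals satisfied
```

Because every step is a smooth rescaling, the whole iteration is differentiable, which is what makes it usable as a neural-network layer.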
arXiv Detail & Related papers (2024-07-18T22:05:21Z) - On Tractable $Φ$-Equilibria in Non-Concave Games [53.212133025684224]
Non-concave games introduce significant game-theoretic and optimization challenges.
We show that when $\Phi$ is finite, there exists an efficient uncoupled learning algorithm that converges to the corresponding $\Phi$-equilibria.
We also show that Online Gradient Descent can efficiently approximate $Phi$-equilibria in non-trivial regimes.
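As a point of reference, here is what projected Online Gradient Descent/Ascent looks like as a no-regret dynamic in the much simpler concave-convex case: in matching pennies the time-averaged iterates approach the mixed equilibrium. This is only a sketch of the classic setting, not of the non-concave regimes the entry analyzes:

```python
import numpy as np

# Projected Online Gradient Descent/Ascent on matching pennies: the raw
# iterates cycle, but the time averages approach the mixed equilibrium
# (0.5, 0.5), the standard no-regret guarantee in zero-sum games.

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def project_simplex(v):
    # Euclidean projection onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u * idx > (css - 1.0))[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

p = np.array([0.9, 0.1])  # row player, maximizes  p @ A @ q
q = np.array([0.2, 0.8])  # column player, minimizes
avg_p, avg_q = np.zeros(2), np.zeros(2)
T = 20000
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)
    p_new = project_simplex(p + eta * (A @ q))    # gradient ascent step
    q_new = project_simplex(q - eta * (A.T @ p))  # gradient descent step
    p, q = p_new, q_new
    avg_p += p
    avg_q += q
avg_p /= T
avg_q /= T
assert np.allclose(avg_p, [0.5, 0.5], atol=0.05)
assert np.allclose(avg_q, [0.5, 0.5], atol=0.05)
```

In the non-concave case no such clean averaging argument is available, which is why tractable $\Phi$-equilibria are a nontrivial target.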
arXiv Detail & Related papers (2024-03-13T01:51:30Z) - Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z) - Finding mixed-strategy equilibria of continuous-action games without
gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of continuous-action games without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z) - Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in \emph{all} games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z) - Towards convergence to Nash equilibria in two-team zero-sum games [17.4461045395989]
Two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents.
We focus on the solution concept of Nash equilibria (NE)
We show that computing NE for this class of games is computationally $\textit{hard}$.
arXiv Detail & Related papers (2021-11-07T21:15:35Z) - Solving Min-Max Optimization with Hidden Structure via Gradient Descent
Ascent [40.99558326586079]
We study a class of non-concave zero-sum games with a hidden convex-concave structure.
We prove convergence to the von Neumann equilibrium of the "hidden" convex-concave game.
Our guarantees are non-local, which, as far as we know, is a first-of-its-kind result in non-concave games.
arXiv Detail & Related papers (2021-01-13T18:13:49Z) - Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.