GANs May Have No Nash Equilibria
- URL: http://arxiv.org/abs/2002.09124v1
- Date: Fri, 21 Feb 2020 04:30:05 GMT
- Title: GANs May Have No Nash Equilibria
- Authors: Farzan Farnia, Asuman Ozdaglar
- Abstract summary: Generative adversarial networks (GANs) represent a zero-sum game between two machine players, a generator and a discriminator.
We show through several theoretical and numerical results that indeed GAN zero-sum games may not have any local Nash equilibria.
We propose a new approach, which we call proximal training, for solving GAN problems.
- Score: 12.691047660244331
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) represent a zero-sum game between two
machine players, a generator and a discriminator, designed to learn the
distribution of data. While GANs have achieved state-of-the-art performance in
several benchmark learning tasks, GAN minimax optimization still poses great
theoretical and empirical challenges. GANs trained using first-order
optimization methods commonly fail to converge to a stable solution where the
players cannot improve their objective, i.e., the Nash equilibrium of the
underlying game. Such issues raise the question of the existence of Nash
equilibrium solutions in the GAN zero-sum game. In this work, we show through
several theoretical and numerical results that indeed GAN zero-sum games may
not have any local Nash equilibria. To characterize an equilibrium notion
applicable to GANs, we consider the equilibrium of a new zero-sum game with an
objective function given by a proximal operator applied to the original
objective, a solution we call the proximal equilibrium. Unlike the Nash
equilibrium, the proximal equilibrium captures the sequential nature of GANs,
in which the generator moves first followed by the discriminator. We prove that
the optimal generative model in Wasserstein GAN problems provides a proximal
equilibrium. Inspired by these results, we propose a new approach, which we
call proximal training, for solving GAN problems. We discuss several numerical
experiments demonstrating the existence of proximal equilibrium solutions in
GAN minimax problems.
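The abstract's claim that first-order methods commonly fail to converge to a Nash equilibrium can be seen on the simplest zero-sum game. The sketch below (an illustrative toy, not the paper's proximal training algorithm) runs simultaneous gradient descent-ascent on the bilinear game $V(x,y) = xy$, whose unique Nash equilibrium is $(0,0)$, and contrasts it with an implicit proximal-point update, solved in closed form for this game; the step size eta is an arbitrary choice for illustration.

```python
# Toy zero-sum game V(x, y) = x * y: x minimizes, y maximizes.
# Simultaneous GDA multiplies the iterate norm by sqrt(1 + eta^2)
# each step (spirals outward); the implicit proximal-point update
# divides it by the same factor (contracts to the equilibrium).
import math

def simultaneous_gda(x, y, eta, steps):
    """Both players update using the current iterate of the opponent."""
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def proximal_point(x, y, eta, steps):
    """Implicit update x' = x - eta*y', y' = y + eta*x', solved exactly."""
    for _ in range(steps):
        x, y = (x - eta * y) / (1 + eta**2), (y + eta * x) / (1 + eta**2)
    return x, y

x0, y0 = 1.0, 1.0
xg, yg = simultaneous_gda(x0, y0, eta=0.1, steps=200)
xp, yp = proximal_point(x0, y0, eta=0.1, steps=200)
print(math.hypot(xg, yg))  # far from (0, 0): GDA diverges
print(math.hypot(xp, yp))  # near (0, 0): proximal update converges
```

The contrast motivates why an equilibrium notion built on a proximal operator, as in the paper, can behave better than plain simultaneous gradient play.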
Related papers
- PAPAL: A Provable PArticle-based Primal-Dual ALgorithm for Mixed Nash Equilibrium [62.51015395213579]

We consider the mixed Nash equilibrium of non-convex non-concave objective functions in two-player zero-sum continuous games.
The proposed algorithm employs the movements of particles to represent the updates of random strategies for the $\epsilon$-mixed Nash equilibrium.
arXiv Detail & Related papers (2023-03-02T05:08:15Z) - Differentiable Arbitrating in Zero-sum Markov Games [59.62061049680365]
We study how to perturb the reward in a zero-sum Markov game with two players to induce a desirable Nash equilibrium, namely arbitrating.
The lower level requires solving the Nash equilibrium under a given reward function, which makes the overall problem challenging to optimize in an end-to-end way.
We propose a backpropagation scheme that differentiates through the Nash equilibrium, which provides the gradient feedback for the upper level.
arXiv Detail & Related papers (2023-02-20T16:05:04Z) - Global Nash Equilibrium in Non-convex Multi-player Game: Theory and
Algorithms [66.8634598612777]
We show that a Nash equilibrium (NE) is acceptable to all players in a multi-player game, as no player can benefit from deviating unilaterally, and we develop the supporting theory step by step.
arXiv Detail & Related papers (2023-01-19T11:36:50Z) - Game-Theoretical Perspectives on Active Equilibria: A Preferred Solution
Concept over Nash Equilibria [61.093297204685264]
An effective approach in multiagent reinforcement learning is to consider the learning process of agents and influence their future policies.
This new solution concept is general such that standard solution concepts, such as a Nash equilibrium, are special cases of active equilibria.
We analyze active equilibria from a game-theoretic perspective by closely studying examples where Nash equilibria are known.
arXiv Detail & Related papers (2022-10-28T14:45:39Z) - Efficient Model-based Multi-agent Reinforcement Learning via Optimistic
Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z) - On the Nash equilibrium of moment-matching GANs for stationary Gaussian
processes [2.25477613430341]
We show that the existence of consistent Nash equilibrium depends crucially on the choice of the discriminator family.
We further study the local stability and global convergence of gradient descent-ascent methods towards consistent equilibrium.
arXiv Detail & Related papers (2022-03-14T14:30:23Z) - Towards convergence to Nash equilibria in two-team zero-sum games [17.4461045395989]
Two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents.
We focus on the solution concept of Nash equilibria (NE)
We show that computing NE for this class of games is hard for the complexity class $\mathrm{CLS}$.
arXiv Detail & Related papers (2021-11-07T21:15:35Z) - Learning to Compute Approximate Nash Equilibrium for Normal-form Games [15.321036952379488]
We propose a general meta learning approach to computing approximate Nash equilibrium for finite $n$-player normal-form games.
Unlike existing solutions that approximate or learn a Nash equilibrium from scratch for each of the games, our meta solver directly constructs a mapping from a game utility matrix to a joint strategy profile.
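The notion of an approximate Nash equilibrium referenced above can be made concrete with a short exploitability check. The sketch below (a standard game-theoretic computation, not the paper's meta-solver) measures, for a two-player normal-form game, how much each player could gain by deviating unilaterally from a given mixed-strategy profile; a profile is an $\epsilon$-Nash equilibrium when that total gain is at most $\epsilon$. The matching-pennies matrices are a textbook example.

```python
# Exploitability of a profile (p, q) in a two-player normal-form game:
# the sum of both players' best-response gains. Zero means exact NE.

def exploitability(A, B, p, q):
    """A[i][j]: row player's payoff, B[i][j]: column player's payoff.
    p, q: mixed strategies (probability vectors) over rows / columns."""
    n, m = len(A), len(A[0])
    # Expected payoff of each pure row strategy against q.
    row_vals = [sum(A[i][j] * q[j] for j in range(m)) for i in range(n)]
    row_now = sum(p[i] * row_vals[i] for i in range(n))
    # Expected payoff of each pure column strategy against p.
    col_vals = [sum(B[i][j] * p[i] for i in range(n)) for j in range(m)]
    col_now = sum(q[j] * col_vals[j] for j in range(m))
    return (max(row_vals) - row_now) + (max(col_vals) - col_now)

# Matching pennies: the uniform profile is the exact mixed Nash equilibrium.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(exploitability(A, B, [0.5, 0.5], [0.5, 0.5]))  # 0.0: exact equilibrium
print(exploitability(A, B, [1.0, 0.0], [1.0, 0.0]))  # 2.0: column player can gain 2
```

A meta-solver of the kind the paper describes would output a joint strategy profile whose exploitability, computed as above, is small.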
arXiv Detail & Related papers (2021-08-17T07:06:46Z) - Characterizing GAN Convergence Through Proximal Duality Gap [3.0724051098062097]
We show theoretically that the proximal duality gap is capable of monitoring the convergence of GANs to a wider spectrum of equilibria.
We also establish the relationship between the proximal duality gap and the divergence between the real and generated data distributions for different GAN formulations.
arXiv Detail & Related papers (2021-05-11T06:27:27Z) - Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games [78.65798135008419]
It remains vastly open how to learn the Stackelberg equilibrium in general-sum games efficiently from samples.
This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium in two-player turn-based general-sum games.
arXiv Detail & Related papers (2021-02-23T05:11:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.