On the Nash equilibrium of moment-matching GANs for stationary Gaussian
processes
- URL: http://arxiv.org/abs/2203.07136v1
- Date: Mon, 14 Mar 2022 14:30:23 GMT
- Title: On the Nash equilibrium of moment-matching GANs for stationary Gaussian
processes
- Authors: Sixin Zhang
- Abstract summary: We show that the existence of consistent Nash equilibrium depends crucially on the choice of the discriminator family.
We further study the local stability and global convergence of gradient descent-ascent methods towards consistent equilibrium.
- Score: 2.25477613430341
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) learn an implicit generative model
from data samples through a two-player game. In this paper, we study the
existence of Nash equilibrium of the game which is consistent as the number of
data samples grows to infinity. In a realizable setting where the goal is to
estimate the ground-truth generator of a stationary Gaussian process, we show
that the existence of consistent Nash equilibrium depends crucially on the
choice of the discriminator family. The discriminator defined from second-order
statistical moments can result in non-existence of Nash equilibrium, existence
of consistent non-Nash equilibrium, or existence and uniqueness of consistent
Nash equilibrium, depending on whether symmetry properties of the generator
family are respected. We further study the local stability and global
convergence of gradient descent-ascent methods towards consistent equilibrium.
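The gradient descent-ascent dynamics studied in the abstract can be illustrated on a toy one-dimensional version of the second-order moment-matching game. This is a hedged sketch, not the paper's construction: the generator is assumed to be `g_theta(z) = theta * z` pushing forward standard Gaussian noise, and the discriminator is a single linear coefficient `w` acting on the second moment, with a quadratic regularizer on `w` so the game is concave in the discriminator. All names and the game value below are illustrative assumptions.

```python
# Toy gradient descent-ascent (GDA) for a second-order moment-matching game.
# Illustrative sketch only: game value V(theta, w) = w*(sigma2_true - theta^2) - w^2/2,
# where sigma2_true is the ground-truth variance, theta^2 the generator's variance.
# The generator descends V in theta; the discriminator ascends V in w.

def gda_moment_matching(sigma2_true=2.0, theta0=0.5, w0=0.0,
                        lr=0.05, steps=2000):
    theta, w = theta0, w0
    for _ in range(steps):
        grad_theta = -2.0 * theta * w          # dV/dtheta (descent step)
        grad_w = (sigma2_true - theta**2) - w  # dV/dw (ascent step)
        theta -= lr * grad_theta
        w += lr * grad_w
    return theta, w

theta, w = gda_moment_matching()
# At the consistent equilibrium the generator matches the true second moment
# (theta^2 -> sigma2_true) and the regularized discriminator vanishes (w -> 0).
print(theta**2, w)
```

Linearizing the continuous-time dynamics at the equilibrium gives a Jacobian with negative real-part eigenvalues, so in this toy setting GDA spirals into the consistent equilibrium, loosely mirroring the local-stability question the paper analyzes.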
Related papers
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- PAPAL: A Provable PArticle-based Primal-Dual ALgorithm for Mixed Nash Equilibrium [58.26573117273626]
We consider a non-convex non-concave objective function in two-player zero-sum continuous games.
We present novel insights into particle-based algorithms for continuous distribution strategies.
arXiv Detail & Related papers (2023-03-02T05:08:15Z)
- Decentralized Policy Gradient for Nash Equilibria Learning of General-sum Stochastic Games [8.780797886160402]
We study Nash equilibria learning of a general-sum game with an unknown transition probability density function.
For the case with exact pseudo gradients, we design a two-loop algorithm by the equivalence of Nash equilibrium and variational inequality problems.
arXiv Detail & Related papers (2022-10-14T09:09:56Z)
- Global Convergence of Over-parameterized Deep Equilibrium Models [52.65330015267245]
A deep equilibrium model (DEQ) is implicitly defined through an equilibrium point of an infinite-depth weight-tied model with an input-injection.
Instead of infinite computations, it solves an equilibrium point directly with root-finding and computes gradients with implicit differentiation.
We propose a novel probabilistic framework to overcome the technical difficulty in the non-asymptotic analysis of infinite-depth weight-tied models.
arXiv Detail & Related papers (2022-05-27T08:00:13Z)
- Nonparametric Conditional Local Independence Testing [69.31200003384122]
Conditional local independence is an independence relation among continuous time processes.
No nonparametric test of conditional local independence has been available.
We propose such a nonparametric test based on double machine learning.
arXiv Detail & Related papers (2022-03-25T10:31:02Z)
- Provably convergent quasistatic dynamics for mean-field two-player zero-sum games [10.39511271647025]
We consider a quasistatic Wasserstein gradient flow dynamics in which one probability distribution follows the Wasserstein gradient flow, while the other one is always at the equilibrium.
Inspired by the continuous dynamics of probability distributions, we derive a quasistatic Langevin gradient descent method with inner-outer iterations.
arXiv Detail & Related papers (2022-02-15T20:19:42Z)
- Characterizing GAN Convergence Through Proximal Duality Gap [3.0724051098062097]
We show theoretically that the proximal duality gap is capable of monitoring the convergence of GANs to a wider spectrum of equilibria.
We also establish the relationship between the proximal duality gap and the divergence between the real and generated data distributions for different GAN formulations.
arXiv Detail & Related papers (2021-05-11T06:27:27Z)
- Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games [78.65798135008419]
It remains vastly open how to learn the Stackelberg equilibrium in general-sum games efficiently from samples.
This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium in two-player turn-based general-sum games.
arXiv Detail & Related papers (2021-02-23T05:11:07Z)
- Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information [32.384868685390906]
We examine the Nash equilibrium convergence properties of no-regret learning in general N-player games.
We establish a comprehensive equivalence between the stability of a Nash equilibrium and its support.
It provides a clear refinement criterion for the prediction of the day-to-day behavior of no-regret learning in games.
arXiv Detail & Related papers (2021-01-12T18:55:11Z)
- A mean-field analysis of two-player zero-sum games [46.8148496944294]
Mixed Nash equilibria exist in greater generality and may be found using mirror descent.
We study this dynamics as an interacting gradient flow over measure spaces endowed with the Wasserstein-Fisher-Rao metric.
Our method identifies mixed equilibria in high dimensions and is demonstrably effective for training mixtures of GANs.
arXiv Detail & Related papers (2020-02-14T22:46:35Z)
- Learning in Discounted-cost and Average-cost Mean-field Games [0.0]
We consider learning approximate Nash equilibria for discrete-time mean-field games with nonlinear state dynamics.
We first prove that the mean-field equilibrium operator is a contraction, and propose a learning algorithm to compute an approximate mean-field equilibrium.
We then show that the learned mean-field equilibrium constitutes an approximate Nash equilibrium for finite-agent games.
arXiv Detail & Related papers (2019-12-31T14:05:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.