Decentralized Policy Gradient for Nash Equilibria Learning of
General-sum Stochastic Games
- URL: http://arxiv.org/abs/2210.07651v2
- Date: Tue, 18 Oct 2022 05:23:19 GMT
- Title: Decentralized Policy Gradient for Nash Equilibria Learning of
General-sum Stochastic Games
- Authors: Yan Chen and Tao Li
- Abstract summary: We study Nash equilibria learning of a general-sum game with an unknown transition probability density function.
For the case with exact pseudo gradients, we design a two-loop algorithm by the equivalence of Nash equilibrium and variational inequality problems.
- Score: 8.780797886160402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study Nash equilibria learning of a general-sum stochastic game with an
unknown transition probability density function. Agents take actions at the
current environment state and their joint action influences the transition of
the environment state and their immediate rewards. Each agent observes only the
environment state and its own immediate reward, and does not know the actions
or immediate rewards of the other agents. We introduce the concepts of weighted
asymptotic Nash equilibrium with probability 1 and in probability. For the case
with exact pseudo gradients, we design a two-loop algorithm by the equivalence
of Nash equilibrium and variational inequality problems. In the outer loop, we
sequentially update a constructed strongly monotone variational inequality by
updating a proximal parameter while employing a single-call extra-gradient
algorithm in the inner loop for solving the constructed variational inequality.
We show that if the associated Minty variational inequality has a solution,
then the designed algorithm converges to the k^{1/2}-weighted asymptotic Nash
equilibrium. Further, for the case with unknown pseudo gradients, we propose a
decentralized algorithm, where the G(PO)MDP gradient estimator of the pseudo
gradient is provided by Monte-Carlo simulations. Convergence to the
k^{1/4}-weighted asymptotic Nash equilibrium in probability is achieved.
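To make the two-loop scheme above concrete, the following Python sketch pairs an outer proximal re-centering step with an inner single-call (past) extra-gradient solver. It is a minimal illustration under assumptions, not the authors' implementation: the names (pseudo_gradient, prox_weight, step), the fixed inner iteration count, and the projection onto a single probability simplex standing in for the product of policy constraint sets are all choices made for the sketch.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def single_call_extragradient(F, x0, step, n_inner, project):
    """Inner loop: single-call (past) extra-gradient for the operator F.
    Reuses the previous operator evaluation instead of a second call per step."""
    x, F_prev = x0, F(x0)
    for _ in range(n_inner):
        x_half = project(x - step * F_prev)   # extrapolation with the stored value
        F_prev = F(x_half)                    # the single operator call of this step
        x = project(x - step * F_prev)        # update step
    return x

def two_loop_nash_solver(pseudo_gradient, x0, n_outer=50, n_inner=200,
                         prox_weight=1.0, step=0.05, project=project_simplex):
    """Outer loop: builds a regularized variational inequality by adding a
    proximal term centered at the previous outer iterate (the construction in
    the paper may differ), then solves it with the inner extra-gradient loop."""
    anchor = x0
    for k in range(n_outer):
        F_k = lambda x, a=anchor: pseudo_gradient(x) + prox_weight * (x - a)
        anchor = single_call_extragradient(F_k, anchor, step, n_inner, project)
        # prox_weight and step could be decayed with k to track the
        # k^{1/2}-weighted asymptotic notion used in the paper.
    return anchor
```

For the unknown pseudo-gradient case, a G(PO)MDP-style Monte-Carlo estimator of one agent's policy gradient can stand in for the exact oracle; the sketch below assumes each trajectory is a (states, actions, rewards) triple observed locally by that agent and that grad_log_pi returns the score function of its parameterized policy.

```python
def gpomdp_estimate(trajectories, grad_log_pi, gamma=0.99):
    """Monte-Carlo G(PO)MDP-style policy-gradient estimate: each discounted
    reward is weighted only by the score functions of the actions taken up to
    that time step, which reduces variance relative to plain REINFORCE."""
    grad = None
    for states, actions, rewards in trajectories:      # one agent's local data
        running_score = 0.0
        traj_grad = None
        for t, (s, a, r) in enumerate(zip(states, actions, rewards)):
            running_score = running_score + grad_log_pi(s, a)  # cumulative score
            term = (gamma ** t) * r * running_score
            traj_grad = term if traj_grad is None else traj_grad + term
        grad = traj_grad if grad is None else grad + traj_grad
    return grad / len(trajectories)
```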
Related papers
- PAPAL: A Provable PArticle-based Primal-Dual ALgorithm for Mixed Nash
Equilibrium [62.51015395213579]
We consider the non-convex non-concave objective function in two-player zero-sum continuous games.
The proposed algorithm employs the movements of particles to represent the updates of random strategies for the $\epsilon$-mixed Nash equilibrium.
arXiv Detail & Related papers (2023-03-02T05:08:15Z)
- Differentiable Arbitrating in Zero-sum Markov Games [59.62061049680365]
We study how to perturb the reward in a zero-sum Markov game with two players to induce a desirable Nash equilibrium, namely arbitrating.
The lower level requires solving the Nash equilibrium under a given reward function, which makes the overall problem challenging to optimize in an end-to-end way.
We propose a backpropagation scheme that differentiates through the Nash equilibrium, which provides the gradient feedback for the upper level.
arXiv Detail & Related papers (2023-02-20T16:05:04Z)
- Global Convergence of Over-parameterized Deep Equilibrium Models [52.65330015267245]
A deep equilibrium model (DEQ) is implicitly defined through an equilibrium point of an infinite-depth weight-tied model with an input-injection.
Instead of infinite computations, it solves an equilibrium point directly with root-finding and computes gradients with implicit differentiation.
We propose a novel probabilistic framework to overcome the technical difficulty in the non-asymptotic analysis of infinite-depth weight-tied models.
arXiv Detail & Related papers (2022-05-27T08:00:13Z)
- Regularized Gradient Descent Ascent for Two-Player Zero-Sum Markov Games [16.09467599829253]
We study the problem of finding Nash equilibrium in a two-player zero-sum game.
Our main contribution is to show that, under proper choices of the regularization parameter, gradient descent ascent converges to the Nash equilibrium of the original unregularized problem.
arXiv Detail & Related papers (2022-05-27T03:24:12Z)
- First-Order Algorithms for Nonlinear Generalized Nash Equilibrium Problems [88.58409977434269]
We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs).
Our contribution is to provide two simple first-order algorithmic frameworks based on the quadratic penalty method and the augmented Lagrangian method.
We provide nonasymptotic theoretical guarantees for these algorithms.
arXiv Detail & Related papers (2022-04-07T00:11:05Z)
- On the Nash equilibrium of moment-matching GANs for stationary Gaussian processes [2.25477613430341]
We show that the existence of consistent Nash equilibrium depends crucially on the choice of the discriminator family.
We further study the local stability and global convergence of gradient descent-ascent methods towards consistent equilibrium.
arXiv Detail & Related papers (2022-03-14T14:30:23Z)
- Provably convergent quasistatic dynamics for mean-field two-player zero-sum games [10.39511271647025]
We consider a quasistatic Wasserstein gradient flow dynamics in which one probability distribution follows the Wasserstein gradient flow, while the other one is always at the equilibrium.
Inspired by the continuous dynamics of probability distributions, we derive a quasistatic Langevin gradient descent method with inner-outer iterations.
arXiv Detail & Related papers (2022-02-15T20:19:42Z)
- On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the Stochastic ExtraGradient (SEG) method with constant step size and propose variations of the method that yield favorable convergence (a minimal SEG-with-averaging sketch appears after this list).
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
- Learning Nash Equilibria in Zero-Sum Stochastic Games via Entropy-Regularized Policy Approximation [18.35524179586723]
We explore the use of policy approximations to reduce the computational cost of learning Nash equilibria in zero-sum games.
We propose a new Q-learning type algorithm that uses a sequence of entropy-regularized soft policies to approximate the Nash policy.
We prove that under certain conditions, by updating the regularized Q-function, the algorithm converges to a Nash equilibrium.
arXiv Detail & Related papers (2020-09-01T01:03:44Z)
- On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration [115.1954841020189]
We study the asymptotic and non-asymptotic properties of linear stochastic approximation procedures with Polyak-Ruppert averaging.
We prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
arXiv Detail & Related papers (2020-04-09T17:54:18Z)
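As referenced in the Stochastic ExtraGradient entry above, here is a minimal sketch of SEG with iterate averaging on an unconstrained bilinear game min_x max_y x^T A y. The setup is assumed for illustration and is not the cited paper's code: gradient noise is simulated as additive Gaussian perturbations, and the step size, noise level, and iteration count are arbitrary choices.

```python
import numpy as np

def seg_with_averaging(A, steps=5000, lr=0.05, noise=0.1, seed=0):
    """Stochastic ExtraGradient (SEG) with iterate averaging on the bilinear
    game min_x max_y x^T A y, whose unique Nash equilibrium is (0, 0) when A
    is invertible. Gradient noise is simulated as Gaussian perturbations."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x, y = rng.standard_normal(n), rng.standard_normal(m)
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for k in range(1, steps + 1):
        # Extrapolation step with noisy gradients: g_x = A y, g_y = -A^T x.
        gx = A @ y + noise * rng.standard_normal(n)
        gy = -A.T @ x + noise * rng.standard_normal(m)
        x_half, y_half = x - lr * gx, y - lr * gy
        # Update step, using fresh noisy gradients at the extrapolated point.
        gx = A @ y_half + noise * rng.standard_normal(n)
        gy = -A.T @ x_half + noise * rng.standard_normal(m)
        x, y = x - lr * gx, y - lr * gy
        # Running (uniform) iterate average; with a constant step size the last
        # iterate only reaches a noise ball, while the average keeps improving.
        x_avg += (x - x_avg) / k
        y_avg += (y - y_avg) / k
    return x_avg, y_avg

if __name__ == "__main__":
    A = np.array([[1.0, 0.3], [0.2, 1.5]])
    x_bar, y_bar = seg_with_averaging(A)
    print("averaged iterate:", x_bar, y_bar)  # should be close to the origin
```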