Approximating Nash Equilibrium in Random Graphical Games
- URL: http://arxiv.org/abs/2112.03442v1
- Date: Tue, 7 Dec 2021 01:40:20 GMT
- Title: Approximating Nash Equilibrium in Random Graphical Games
- Authors: Morris Yau
- Abstract summary: Computing Nash equilibrium in multi-agent games is a longstanding challenge at the interface of game theory and computer science.
We provide a quasipolynomial time approximation scheme (QPTAS) for computing an epsilon-approximate Nash equilibrium of polymatrix games on random graphs with edge density greater than poly(k, 1/epsilon, ln(N)), with high probability.
With the same runtime, we can compute an epsilon-approximate Nash equilibrium that epsilon-approximates the maximum social welfare of any Nash equilibrium of the game.
- Score: 3.42658286826597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computing Nash equilibrium in multi-agent games is a longstanding challenge
at the interface of game theory and computer science. It is well known that a
general normal form game in N players and k strategies requires exponential
space simply to write down. This Curse of Multi-Agents prompts the study of
succinct games which can be written down efficiently. A canonical example of a
succinct game is the graphical game which models players as nodes in a graph
interacting with only their neighbors, in direct analogy with Markov random
fields. Graphical games have found applications in wireless, financial, and
social networks. However, computing the Nash equilibrium of graphical games has
proven challenging. Even for polymatrix games, a model where payoffs to an
agent can be written as the sum of payoffs of interactions with the agent's
neighbors, it has been shown that computing an epsilon approximate nash
equilibrium is PPAD hard for epsilon smaller than a constant. The focus of this
work is to circumvent this computational hardness by considering average case
graph models, i.e., random graphs. We provide a quasipolynomial time approximation
scheme (QPTAS) for computing an epsilon-approximate Nash equilibrium of
polymatrix games on random graphs with edge density greater than poly(k,
1/epsilon, ln(N)) with high probability. Furthermore, with the same runtime we
can compute an epsilon-approximate Nash equilibrium that epsilon-approximates
the maximum social welfare of any Nash equilibrium of the game. Our primary
technical innovation is an "accelerated rounding" of a novel hierarchical
convex program for the nash equilibrium problem. Our accelerated rounding also
yields faster algorithms for Max-2CSP on the same family of random graphs,
which may be of independent interest.
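The polymatrix payoff model and the epsilon-approximation notion from the abstract can be made concrete with a small sketch. Everything below (the graph, the payoff draws, the function names) is illustrative and not from the paper; it implements only the payoff model (each player's payoff is the sum of bimatrix payoffs with its neighbors) and the regret check that certifies an epsilon-approximate Nash equilibrium, not the paper's QPTAS.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny polymatrix game: players are nodes in a graph, and each edge (i, j)
# carries a pair of k-by-k bimatrix payoffs. Edge payoffs are drawn in [0, 1].
N, k = 4, 3                                  # players (nodes), strategies each
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]

# payoff[(i, j)][a, b] = payoff to player i for playing a while neighbor j plays b
payoff = {}
for i, j in edges:
    payoff[(i, j)] = rng.uniform(0.0, 1.0, (k, k))
    payoff[(j, i)] = rng.uniform(0.0, 1.0, (k, k))

def strategy_payoffs(x):
    """Expected payoff of each pure strategy for every player: in a polymatrix
    game this is the SUM over neighbors of bimatrix expected payoffs."""
    u = [np.zeros(k) for _ in range(N)]
    for (i, j), M in payoff.items():
        u[i] += M @ x[j]
    return u

def max_regret(x):
    """Smallest epsilon for which mixed profile x is an epsilon-approximate
    Nash equilibrium: the largest gain any one player gets by best-responding."""
    u = strategy_payoffs(x)
    return max(float(u[i].max() - u[i] @ x[i]) for i in range(N))

uniform = [np.ones(k) / k for _ in range(N)]
print(f"uniform play is a {max_regret(uniform):.3f}-approximate Nash equilibrium")
```

Note that a player's expected payoffs depend only on its neighbors' strategies, which is exactly the locality that graphical games exploit; `max_regret` is the quantity an approximation algorithm must drive below epsilon.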
Related papers
- Optimistic Policy Gradient in Multi-Player Markov Games with a Single
Controller: Convergence Beyond the Minty Property [89.96815099996132]
We develop a new framework to characterize optimistic policy gradient methods in multi-player games with a single controller.
Our approach relies on a natural generalization of the classical Minty property that we introduce, which we anticipate to have further applications beyond Markov games.
arXiv Detail & Related papers (2023-12-19T11:34:10Z)
- Hardness of Independent Learning and Sparse Equilibrium Computation in Markov Games [70.19141208203227]
We consider the problem of decentralized multi-agent reinforcement learning in Markov games.
We show that no algorithm attains no-regret in general-sum games when executed independently by all players.
We show that our lower bounds hold even for the seemingly easier setting in which all agents are controlled by a centralized algorithm.
arXiv Detail & Related papers (2023-03-22T03:28:12Z)
- Global Nash Equilibrium in Non-convex Multi-player Game: Theory and Algorithms [66.8634598612777]
We show that a Nash equilibrium (NE) is acceptable to all players in a multi-player game, in the sense that no player can benefit by unilaterally deviating from it.
arXiv Detail & Related papers (2023-01-19T11:36:50Z)
- Representation Learning for General-sum Low-rank Markov Games [63.119870889883224]
We study multi-agent general-sum Markov games with nonlinear function approximation.
We focus on low-rank Markov games whose transition matrix admits a hidden low-rank structure on top of an unknown non-linear representation.
arXiv Detail & Related papers (2022-10-30T22:58:22Z)
- Learning Correlated Equilibria in Mean-Field Games [62.14589406821103]
We develop the concepts of Mean-Field correlated and coarse-correlated equilibria.
We show that they can be efficiently learnt in all games, without requiring any additional assumption on the structure of the game.
arXiv Detail & Related papers (2022-08-22T08:31:46Z)
- Nash, Conley, and Computation: Impossibility and Incompleteness in Game Dynamics [28.815822236291392]
We show that no game dynamics can converge to epsilon-Nash equilibria.
We also prove a stronger result for epsilon-approximate Nash equilibria.
arXiv Detail & Related papers (2022-03-26T18:27:40Z)
- Independent Policy Gradient for Large-Scale Markov Potential Games: Sharper Rates, Function Approximation, and Game-Agnostic Convergence [30.084357461497042]
We learn a Nash equilibrium of an MPG in which the size of the state space and/or the number of players can be very large.
We propose new independent policy gradient algorithms that are run by all players in tandem.
We identify a class of independent policy gradient algorithms that enjoys convergence for both zero-sum Markov games and Markov cooperative games with the players that are oblivious to the types of games being played.
arXiv Detail & Related papers (2022-02-08T20:09:47Z)
- Learning Graphon Mean Field Games and Approximate Nash Equilibria [33.77849245250632]
We propose a novel discrete-time formulation for graphon mean field games with weak interaction.
On the theoretical side, we give extensive and rigorous existence and approximation properties of the graphon mean field solution.
We successfully obtain plausible approximate Nash equilibria in otherwise infeasible large dense graph games with many agents.
arXiv Detail & Related papers (2021-11-29T16:16:11Z)
- Towards convergence to Nash equilibria in two-team zero-sum games [17.4461045395989]
Two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents.
We focus on the solution concept of Nash equilibria (NE).
We show that computing NE for this class of games is computationally hard.
arXiv Detail & Related papers (2021-11-07T21:15:35Z)
- Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium [116.56359444619441]
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
arXiv Detail & Related papers (2020-02-17T17:04:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.