No-Regret Learning of Nash Equilibrium for Black-Box Games via Gaussian Processes
- URL: http://arxiv.org/abs/2405.08318v2
- Date: Thu, 14 Nov 2024 04:52:16 GMT
- Title: No-Regret Learning of Nash Equilibrium for Black-Box Games via Gaussian Processes
- Authors: Minbiao Han, Fengxue Zhang, Yuxin Chen
- Abstract summary: This paper investigates the challenge of learning in black-box games, where the underlying utility function is unknown to any of the agents.
We provide a no-regret learning algorithm that utilizes Gaussian processes to identify the equilibrium in such games.
- Score: 11.846329468283583
- Abstract: This paper investigates the challenge of learning in black-box games, where the underlying utility function is unknown to any of the agents. While there is an extensive body of literature on the theoretical analysis of algorithms for computing the Nash equilibrium with complete information about the game, studies on Nash equilibrium in black-box games are less common. In this paper, we focus on learning the Nash equilibrium when the only available information about an agent's payoff comes in the form of empirical queries. We provide a no-regret learning algorithm that utilizes Gaussian processes to identify the equilibrium in such games. Our approach not only ensures a theoretical convergence rate but also demonstrates effectiveness across a diverse collection of games through experimental validation.
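The abstract's idea can be illustrated in a heavily simplified single-agent form: fit a Gaussian-process surrogate to empirical payoff queries and query where an upper confidence bound (UCB) is highest. Everything below (the 1-D payoff function, kernel length-scale, query budget) is our own toy construction, not the paper's algorithm:

```python
import numpy as np

# Toy sketch (ours, not the paper's algorithm): GP posterior over an
# unknown payoff function, refined by optimistic (UCB) queries.

def rbf(a, b, ls=0.2):
    # squared-exponential kernel between two 1-D point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # posterior mean and standard deviation of the GP at test points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 0.0, None))

utility = lambda x: -(x - 0.3) ** 2        # black box: query-only access
grid = np.linspace(0.0, 1.0, 101)
X = np.array([0.0, 1.0])                   # two initial probe queries
y = utility(X)

for _ in range(15):                        # UCB query loop
    mu, sd = gp_posterior(X, y, grid)
    xq = grid[np.argmax(mu + 2.0 * sd)]    # optimistic query point
    X, y = np.append(X, xq), np.append(y, utility(xq))

best = X[np.argmax(y)]                     # incumbent best action so far
```

The UCB rule trades off exploiting the posterior mean against exploring regions of high posterior uncertainty, which is what yields no-regret guarantees in the bandit setting; the multi-agent equilibrium case in the paper is substantially more involved.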
Related papers
- Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium-convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z)
- A survey on algorithms for Nash equilibria in finite normal-form games [15.76104985336285]
Nash equilibrium is one of the most influential solution concepts in game theory.
With the development of computer science and artificial intelligence, there is an increasing demand on Nash equilibrium computation.
This paper reviews various algorithms computing the Nash equilibrium and its approximation solutions in finite normal-form games from both theoretical and empirical perspectives.
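For the smallest nontrivial case such a survey covers, a fully mixed Nash equilibrium of a 2x2 bimatrix game follows in closed form from the indifference conditions. The helper name and the matching-pennies example below are ours, for illustration:

```python
# Illustrative companion (not from the survey): each player mixes so
# that the OTHER player is indifferent between their two strategies.
# A is the row player's payoff matrix, B the column player's.

def mixed_ne_2x2(A, B):
    # p = P(row plays strategy 0): makes the column player indifferent
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # q = P(col plays strategy 0): makes the row player indifferent
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies: the unique equilibrium is uniform mixing.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
p, q = mixed_ne_2x2(A, B)   # (0.5, 0.5)
```

The closed form only applies when the equilibrium is fully mixed (denominators nonzero); general normal-form games require the support-enumeration, Lemke-Howson, or approximation algorithms the survey reviews.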
arXiv Detail & Related papers (2023-12-18T10:00:47Z)
- A Black-box Approach for Non-stationary Multi-agent Reinforcement Learning [53.83345471268163]
We investigate learning the equilibria in non-stationary multi-agent systems.
We show how to test for various types of equilibria by a black-box reduction to single-agent learning.
arXiv Detail & Related papers (2023-06-12T23:48:24Z)
- Differentiable Arbitrating in Zero-sum Markov Games [59.62061049680365]
We study how to perturb the reward in a zero-sum Markov game with two players to induce a desirable Nash equilibrium, namely arbitrating.
The lower level requires solving the Nash equilibrium under a given reward function, which makes the overall problem challenging to optimize in an end-to-end way.
We propose a backpropagation scheme that differentiates through the Nash equilibrium, which provides the gradient feedback for the upper level.
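The bilevel structure can be sketched in a heavily simplified form: the lower level is a 2x2 zero-sum game with a closed-form mixed equilibrium, and the upper level steers a reward perturbation theta toward a target equilibrium. We use finite differences as a crude stand-in for the paper's backpropagation through the equilibrium; the game, target, and step size are all invented:

```python
# Heavily simplified bilevel sketch (ours, not the paper's scheme).
# Lower level: equilibrium of a 2x2 zero-sum game whose top-left
# reward is shifted by theta. Upper level: gradient steps on theta.

def eq_row_prob(theta):
    # Row player's equilibrium weight on row 0, from the condition
    # that the column player is indifferent between her columns.
    A = [[1.0 + theta, -1.0], [-1.0, 1.0]]      # row payoffs
    B = [[-v for v in row] for row in A]        # zero-sum: B = -A
    return (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])

target, theta, lr, eps = 0.4, 0.0, 5.0, 1e-5
loss = lambda t: (eq_row_prob(t) - target) ** 2
for _ in range(200):
    # finite-difference surrogate for d(loss)/d(theta)
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad
# here eq_row_prob(theta) = 2 / (4 + theta), so theta converges to 1.0
```

The point of the paper's differentiable scheme is to replace the finite-difference surrogate with exact gradients obtained by differentiating through the equilibrium conditions, which scales to Markov games where no closed form exists.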
arXiv Detail & Related papers (2023-02-20T16:05:04Z)
- Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of continuous-action games without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
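The gradient-free flavor of the problem can be sketched with a toy two-player continuous game: each player treats its utility as a black box, samples candidate actions at random, and best-responds. The utilities, sampling budget, and iteration count below are our own invention, not the paper's method:

```python
import random

# Toy sketch (ours, not the paper's method): derivative-free
# alternating best responses in a two-player continuous game.
# Each best-response map has slope 0.3, so iteration contracts
# to the unique Nash equilibrium.

random.seed(0)
u1 = lambda x, y: -(x - (0.3 * y + 0.4)) ** 2   # player 1 controls x
u2 = lambda x, y: -(y - (0.3 * x + 0.2)) ** 2   # player 2 controls y

def random_search_br(utility, n=400):
    # black-box best response: evaluate n random candidate actions
    cands = [random.uniform(-1.0, 1.0) for _ in range(n)]
    return max(cands, key=utility)

x = y = 0.0
for _ in range(30):                 # alternating best responses
    x = random_search_br(lambda a: u1(a, y))
    y = random_search_br(lambda a: u2(x, a))
# unique NE: x* = 0.46/0.91 ~ 0.505, y* = 0.3*x* + 0.2 ~ 0.352
```

Alternating best responses only converge here because the game is a contraction; the paper's contribution is handling general games and unrestricted mixed strategies, where the strategy itself must be represented, e.g. by a randomized policy network.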
arXiv Detail & Related papers (2022-11-29T05:16:41Z)
- Multiplayer Performative Prediction: Learning in Decision-Dependent Games [18.386569111954213]
This paper formulates a new game-theoretic framework for multi-player performative prediction.
We focus on two distinct solution concepts, namely (i) performatively stable equilibria and (ii) Nash equilibria of the game.
We show that under mild assumptions, the performatively stable equilibria can be found efficiently by a variety of algorithms.
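One such algorithm, repeated retraining, can be illustrated with a one-dimensional toy of our own construction (not from the paper): the deployed decision x shifts the data distribution's mean, and retraining under squared loss becomes a simple contraction map.

```python
# Toy sketch (ours): deploying decision x shifts the data mean to
# mu + eps * x, so retraining under squared loss is the map
# x -> mu + eps * x. Its fixed point mu / (1 - eps) is the
# performatively stable decision; repeated retraining contracts
# to it whenever eps < 1.

mu, eps = 1.0, 0.5
x = 0.0
for _ in range(50):
    x = mu + eps * x        # retrain on the distribution induced by x
stable = mu / (1 - eps)     # performatively stable point: 2.0
```

The distinction the paper draws is that this stable point is a fixed point of retraining, which in general differs from a Nash equilibrium of the underlying decision-dependent game.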
arXiv Detail & Related papers (2022-01-10T15:31:10Z)
- Towards convergence to Nash equilibria in two-team zero-sum games [17.4461045395989]
Two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents.
We focus on the solution concept of Nash equilibria (NE).
We show that computing NE for this class of games is computationally hard.
arXiv Detail & Related papers (2021-11-07T21:15:35Z)
- Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games [78.65798135008419]
It remains vastly open how to learn the Stackelberg equilibrium in general-sum games efficiently from samples.
This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium in two-player turn-based general-sum games.
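With full payoff knowledge, a pure-strategy Stackelberg equilibrium of a turn-based game is found by simple backward induction: the leader simulates the follower's best response to each commitment. The 2x3 payoff tables below are invented for illustration; the paper's question is how to do this from samples alone.

```python
# Toy sketch (ours): pure-strategy Stackelberg equilibrium by
# backward induction over the leader's commitments.

leader_payoff   = [[3, 0, 1], [1, 2, 4]]
follower_payoff = [[1, 2, 0], [0, 1, 3]]

def stackelberg_pure(L, F):
    best = None
    for a, follower_row in enumerate(F):
        # the follower best-responds to the leader's committed action a
        b = max(range(len(follower_row)), key=follower_row.__getitem__)
        if best is None or L[a][b] > L[best[0]][best[1]]:
            best = (a, b)
    return best

commitment = stackelberg_pure(leader_payoff, follower_payoff)  # (1, 2)
```

Here committing to action 1 induces the follower to play action 2, earning the leader 4; learning the same commitment from noisy samples requires estimating the follower's response accurately enough, which is the sample-complexity question the paper studies.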
arXiv Detail & Related papers (2021-02-23T05:11:07Z)
- No-regret learning and mixed Nash equilibria: They do not mix [64.37511607254115]
We study the dynamics of "follow-the-regularized-leader" (FTRL) learning.
We show that any Nash equilibrium which is not strict cannot be stable and attracting under FTRL.
This result has significant implications for predicting the outcome of a learning process.
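A numerical illustration of our own (not from the paper): both players of matching pennies run Hedge, an instance of FTRL with an entropic regularizer. The time-averaged play approaches the mixed equilibrium (0.5, 0.5), while the last iterate drifts away from it.

```python
import math

# Sketch (ours): Hedge / multiplicative weights in matching pennies,
# tracked in log-odds form. Averages converge; last iterates do not.

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

eta, T = 0.1, 2000
zp = math.log(0.6 / 0.4)        # row's log-odds: starts at p = 0.6
zq = 0.0                        # column's log-odds: starts at q = 0.5
sum_p = sum_q = 0.0
for _ in range(T):
    p, q = sigmoid(zp), sigmoid(zq)
    sum_p, sum_q = sum_p + p, sum_q + q
    # Hedge in log-odds form: add eta * (payoff of action 0 - action 1)
    zp += eta * 2.0 * (2.0 * q - 1.0)   # row's payoff gap, given q
    zq += eta * 2.0 * (1.0 - 2.0 * p)   # column's payoff gap, given p
avg_p, avg_q = sum_p / T, sum_q / T
last_p, last_q = sigmoid(zp), sigmoid(zq)
```

The regret guarantee only controls the time average; the last iterate spirals away from the interior equilibrium, which is the instability phenomenon the paper formalizes for all FTRL dynamics at non-strict equilibria.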
arXiv Detail & Related papers (2020-10-19T13:49:06Z)
- Algorithm for Computing Approximate Nash Equilibrium in Continuous Games with Application to Continuous Blotto [1.7132914341329848]
We present a new algorithm for approximating Nash equilibrium strategies in continuous games.
In addition to two-player zero-sum games, our algorithm also applies to multiplayer games and games with imperfect information.
arXiv Detail & Related papers (2020-06-12T19:53:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.