Learning equilibria with personalized incentives in a class of
nonmonotone games
- URL: http://arxiv.org/abs/2111.03854v1
- Date: Sat, 6 Nov 2021 11:18:59 GMT
- Title: Learning equilibria with personalized incentives in a class of
nonmonotone games
- Authors: Filippo Fabiani, Andrea Simonetto and Paul J. Goulart
- Abstract summary: We consider quadratic, nonmonotone generalized Nash equilibrium problems with symmetric interactions among the agents, which are known to be potential.
In the proposed scheme, a coordinator iteratively integrates the agents' noisy feedback to learn their pseudo-gradients, and then designs personalized incentives for them.
We show that our algorithm returns an equilibrium provided the coordinator is endowed with standard learning policies, and we corroborate our results on a numerical instance of a hypomonotone game.
- Score: 7.713240800142863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider quadratic, nonmonotone generalized Nash equilibrium problems with
symmetric interactions among the agents, which are known to be potential. As
may happen in practical cases, we envision a scenario in which an explicit
expression of the underlying potential function is not available, and we design
a two-layer Nash equilibrium seeking algorithm. In the proposed scheme, a
coordinator iteratively integrates the agents' noisy feedback to learn their
pseudo-gradients, and then designs personalized incentives for them. On their
side, the agents receive those personalized incentives, compute a solution to
an extended game, and then return feedback measures to the coordinator. We
show that our algorithm returns an equilibrium provided the coordinator is
endowed with standard learning policies, and we corroborate our results on a
numerical instance of a hypomonotone game.
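The two-layer structure described in the abstract can be illustrated on a small quadratic game. The sketch below is not the authors' algorithm: the two-agent game, the ordinary-least-squares learning rule, the quadratic incentive r_i(x_i) = 0.5*rho*(x_i - xhat_i)^2, and all numerical values are illustrative assumptions. It only mirrors the structure of the scheme: the coordinator learns the pseudo-gradient parameters from noisy agent feedback (layer 1), then designs personalized incentives under which the agents' best responses contract toward the estimated equilibrium (layer 2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-agent quadratic game with scalar decisions (assumed setup):
#   J_i(x_i, x_j) = 0.5*q_i*x_i^2 + (a*x_j + b_i)*x_i
# The symmetric coupling `a` makes the game potential; choosing
# a^2 > q_1*q_2 makes the pseudo-gradient mapping nonmonotone.
q = np.array([1.0, 1.5])
a = 1.5                        # a^2 = 2.25 > q1*q2 = 1.5 -> nonmonotone game
b = np.array([-1.0, 0.5])

def pseudo_gradient(x):
    """True pseudo-gradient F(x)_i = dJ_i/dx_i -- unknown to the coordinator."""
    return q * x + a * x[::-1] + b

# Layer 1: the coordinator queries the agents at sample points and fits the
# pseudo-gradient parameters by least squares from noisy feedback (the paper
# allows general learning policies; least squares is a stand-in here).
X = rng.uniform(-2.0, 2.0, size=(50, 2))
theta = np.zeros((2, 3))       # per-agent estimates of (q_i, a, b_i)
for i in range(2):
    y = np.array([pseudo_gradient(xk)[i] for xk in X])
    y += 0.01 * rng.standard_normal(50)          # noisy agent feedback
    Phi = np.column_stack([X[:, i], X[:, 1 - i], np.ones(len(X))])
    theta[i], *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Layer 2: from the learned model the coordinator predicts the equilibrium
# and offers each agent a personalized quadratic incentive
#   r_i(x_i) = 0.5*rho*(x_i - xhat_i)^2,
# under which the agents' alternating best responses become a contraction.
A_hat = np.array([[theta[0, 0], theta[0, 1]],
                  [theta[1, 1], theta[1, 0]]])
xhat = np.linalg.solve(A_hat, -theta[:, 2])      # estimated equilibrium
rho = 5.0                                        # incentive strength (assumed)

x = np.zeros(2)
for _ in range(100):
    # Each agent minimizes its own extended cost J_i + r_i, using its true
    # local data, given the other agent's current decision:
    x = (rho * xhat - a * x[::-1] - b) / (q + rho)

print("decisions:", x, " residual:", np.linalg.norm(pseudo_gradient(x)))
```

At a fixed point with accurate estimates, the incentive term vanishes and the pseudo-gradient residual is (approximately) zero, i.e. the agents settle near the Nash equilibrium of the original nonmonotone game even though plain best-response dynamics would diverge here.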
Related papers
- Independent Learning in Constrained Markov Potential Games [19.083595175045073]
Constrained Markov games offer a formal framework for modeling multi-agent reinforcement learning problems.
We propose an independent policy gradient algorithm for learning approximate constrained Nash equilibria.
arXiv Detail & Related papers (2024-02-27T20:57:35Z)
- Differentiable Arbitrating in Zero-sum Markov Games [59.62061049680365]
We study how to perturb the reward in a zero-sum Markov game with two players to induce a desirable Nash equilibrium, namely arbitrating.
The lower level requires solving the Nash equilibrium under a given reward function, which makes the overall problem challenging to optimize in an end-to-end way.
We propose a backpropagation scheme that differentiates through the Nash equilibrium, which provides the gradient feedback for the upper level.
arXiv Detail & Related papers (2023-02-20T16:05:04Z)
- Offline Learning in Markov Games with General Function Approximation [22.2472618685325]
We study offline multi-agent reinforcement learning (RL) in Markov games.
We provide the first framework for sample-efficient offline learning in Markov games.
arXiv Detail & Related papers (2023-02-06T05:22:27Z)
- Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of a continuous-action game without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z)
- Game-Theoretical Perspectives on Active Equilibria: A Preferred Solution Concept over Nash Equilibria [61.093297204685264]
An effective approach in multiagent reinforcement learning is to consider the learning process of agents and influence their future policies.
The resulting solution concept, the active equilibrium, is general in that standard solution concepts, such as the Nash equilibrium, are special cases of it.
We analyze active equilibria from a game-theoretic perspective by closely studying examples where Nash equilibria are known.
arXiv Detail & Related papers (2022-10-28T14:45:39Z)
- Personalized incentives as feedback design in generalized Nash equilibrium problems [6.10183951877597]
We investigate stationary and time-varying, nonmonotone generalized Nash equilibrium problems.
We design a semi-decentralized Nash equilibrium seeking algorithm.
As a case study, we consider the ride-hailing service provided by several companies as a service orchestration.
arXiv Detail & Related papers (2022-03-24T09:24:29Z)
- Inducing Equilibria via Incentives: Simultaneous Design-and-Play Finds Global Optima [114.31577038081026]
We propose an efficient method that tackles the designer's and agents' problems simultaneously in a single loop.
Although the designer does not solve the equilibrium problem repeatedly, it can anticipate the overall influence of the incentives on the agents.
We prove that the algorithm converges to the global optima at a sublinear rate for a broad class of games.
arXiv Detail & Related papers (2021-10-04T06:53:59Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- On Information Asymmetry in Competitive Multi-Agent Reinforcement Learning: Convergence and Optimality [78.76529463321374]
We study a system of two interacting non-cooperative Q-learning agents.
We show that this information asymmetry can lead to a stable outcome of population learning.
arXiv Detail & Related papers (2020-10-21T11:19:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.