Personalized incentives as feedback design in generalized Nash
equilibrium problems
- URL: http://arxiv.org/abs/2203.12948v3
- Date: Mon, 22 May 2023 07:34:01 GMT
- Authors: Filippo Fabiani, Andrea Simonetto, Paul J. Goulart
- Abstract summary: We investigate stationary and time-varying, nonmonotone generalized Nash equilibrium problems.
We design a semi-decentralized Nash equilibrium seeking algorithm.
We consider the ridehailing service provided by several companies under a mobility-as-a-service orchestration.
- Score: 6.10183951877597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate both stationary and time-varying, nonmonotone generalized Nash
equilibrium problems that exhibit symmetric interactions among the agents,
which are known to be potential. As may happen in practical cases, however, we
envision a scenario in which the formal expression of the underlying potential
function is not available, and we design a semi-decentralized Nash equilibrium
seeking algorithm. In the proposed two-layer scheme, a coordinator iteratively
integrates the agents' (possibly noisy and sporadic) feedback to learn their
pseudo-gradients, and then designs personalized incentives for
them. On their side, the agents receive those personalized incentives, compute
a solution to an extended game, and then return feedback measurements to the
coordinator. In the stationary setting, our algorithm returns a Nash
equilibrium in case the coordinator is endowed with standard learning policies,
while it returns a Nash equilibrium up to a constant, yet adjustable, error in
the time-varying case. As a motivating application, we consider the ridehailing
service provided by several companies under a mobility-as-a-service
orchestration, needed both to handle competition among firms and to avoid
traffic congestion; this setting is also used to run numerical experiments
verifying our results.
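The two-layer loop described in the abstract can be sketched numerically. The snippet below is an illustrative toy, not the authors' algorithm: it assumes a hypothetical two-agent quadratic game with symmetric interactions (hence potential), and it collapses the coordinator's learning layer to direct integration of noisy pseudo-gradient feedback.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-agent quadratic game with symmetric interactions
# (so the game is potential): J_i(x) = 0.5*x_i^2 + c*x_i*x_{-i} - d_i*x_i.
# Its pseudo-gradient is F(x) = A x - d with A symmetric.
c = 0.3
d = np.array([1.0, 0.5])
A = np.eye(2) + c * np.array([[0.0, 1.0], [1.0, 0.0]])
x_star = np.linalg.solve(A, d)   # ground-truth Nash equilibrium, F(x*) = 0

x = np.zeros(2)                  # agents' current decisions
step = 0.2
for _ in range(500):
    # Agents return noisy pseudo-gradient measurements to the coordinator.
    feedback = A @ x - d + 0.01 * rng.standard_normal(2)
    # The coordinator integrates the feedback and broadcasts personalized
    # incentives; each agent's extended best response then amounts to a
    # step along the estimated descent direction of the potential.
    x = x - step * feedback

print(x, x_star)   # the iterates settle near the Nash equilibrium
```

Because the interaction matrix is symmetric and positive definite here, the noisy feedback loop contracts toward the equilibrium despite the measurement noise.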
Related papers
- An Online Feasible Point Method for Benign Generalized Nash Equilibrium Problems [4.243592852049963]
We introduce a new online feasible point method for generalized Nash equilibrium games.
Under the assumption that limited communication between the agents is allowed, this method guarantees feasibility.
We identify the class of benign generalized Nash equilibrium problems, for which the convergence of our method to the equilibrium is guaranteed.
arXiv Detail & Related papers (2024-10-03T11:27:55Z)
- Differentiable Arbitrating in Zero-sum Markov Games [59.62061049680365]
We study how to perturb the reward in a zero-sum Markov game with two players to induce a desirable Nash equilibrium, namely arbitrating.
The lower level requires solving the Nash equilibrium under a given reward function, which makes the overall problem challenging to optimize in an end-to-end way.
We propose a backpropagation scheme that differentiates through the Nash equilibrium, which provides the gradient feedback for the upper level.
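Differentiating through an equilibrium can be illustrated on a toy example (an assumption of this sketch, not the paper's Markov-game setting): a quadratic zero-sum game whose equilibrium depends on a reward parameter r, differentiated via the implicit function theorem.

```python
import numpy as np

def equilibrium(r):
    # Stationarity of the zero-sum game f(x, y) = x^2 - y^2 + r*x*y + x - y
    # (x minimizes, y maximizes) is the linear system
    #   2x + r*y = -1,  r*x - 2y = 1.
    J = np.array([[2.0, r], [r, -2.0]])
    return np.linalg.solve(J, np.array([-1.0, 1.0]))

def equilibrium_grad(r):
    # Implicit function theorem: dz/dr = -J_z^{-1} * dg/dr, with
    # g(z, r) the stationarity residual and dg/dr = (y, x).
    x, y = equilibrium(r)
    J = np.array([[2.0, r], [r, -2.0]])
    return -np.linalg.solve(J, np.array([y, x]))

r, eps = 0.7, 1e-6
fd = (equilibrium(r + eps) - equilibrium(r - eps)) / (2 * eps)
print(equilibrium_grad(r), fd)   # implicit gradient matches finite differences
```

This sensitivity of the equilibrium with respect to the reward parameter is exactly the gradient feedback an upper-level designer needs.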
arXiv Detail & Related papers (2023-02-20T16:05:04Z)
- Game-Theoretical Perspectives on Active Equilibria: A Preferred Solution Concept over Nash Equilibria [61.093297204685264]
An effective approach in multiagent reinforcement learning is to consider the learning process of agents and influence their future policies.
This new solution concept is general in that standard solution concepts, such as the Nash equilibrium, are special cases of active equilibria.
We analyze active equilibria from a game-theoretic perspective by closely studying examples where Nash equilibria are known.
arXiv Detail & Related papers (2022-10-28T14:45:39Z)
- Decentralized Policy Gradient for Nash Equilibria Learning of General-sum Stochastic Games [8.780797886160402]
We study Nash equilibria learning of a general-sum game with an unknown transition probability density function.
For the case with exact pseudo-gradients, we design a two-loop algorithm based on the equivalence of Nash equilibrium and variational inequality problems.
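The equivalence between Nash equilibria and variational inequalities can be illustrated with a generic sketch (not this paper's two-loop method): solving the VI of a bilinear zero-sum game with the extragradient method, where plain gradient play cycles but the look-ahead step converges.

```python
import numpy as np

# VI formulation of the bilinear zero-sum game min_x max_y x*y:
# find z* with F(z*) = 0, where F(x, y) = (y, -x) is the pseudo-gradient.
def F(z):
    return np.array([z[1], -z[0]])

z = np.array([1.0, 1.0])
eta = 0.1
for _ in range(2000):
    z_half = z - eta * F(z)    # extrapolation (look-ahead) step
    z = z - eta * F(z_half)    # update using the look-ahead gradient
    # Plain gradient descent-ascent, z = z - eta*F(z), would spiral outward
    # on this game; the extragradient correction makes the map contractive.

print(z)   # approaches the unique equilibrium (0, 0)
```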
arXiv Detail & Related papers (2022-10-14T09:09:56Z)
- On the Nash equilibrium of moment-matching GANs for stationary Gaussian processes [2.25477613430341]
We show that the existence of a consistent Nash equilibrium depends crucially on the choice of the discriminator family.
We further study the local stability and global convergence of gradient descent-ascent methods towards consistent equilibrium.
arXiv Detail & Related papers (2022-03-14T14:30:23Z)
- Learning equilibria with personalized incentives in a class of nonmonotone games [7.713240800142863]
We consider quadratic, nonmonotone generalized Nash equilibrium problems with symmetric interactions among the agents, which are known to be potential.
In the proposed scheme, a coordinator iteratively integrates the agents' noisy feedback to learn their pseudo-gradients, and then designs personalized incentives for them.
We show that our algorithm returns an equilibrium in case the coordinator is endowed with standard learning policies, and corroborate our results on a numerical instance of a hypomonotone game.
arXiv Detail & Related papers (2021-11-06T11:18:59Z)
- Inducing Equilibria via Incentives: Simultaneous Design-and-Play Finds Global Optima [114.31577038081026]
We propose an efficient method that tackles the designer's and agents' problems simultaneously in a single loop.
Although the designer does not solve the equilibrium problem repeatedly, it can anticipate the overall influence of the incentives on the agents.
We prove that the algorithm converges to the global optima at a sublinear rate for a broad class of games.
arXiv Detail & Related papers (2021-10-04T06:53:59Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- On Information Asymmetry in Competitive Multi-Agent Reinforcement Learning: Convergence and Optimality [78.76529463321374]
We study a system of two interacting, non-cooperative Q-learning agents.
We show that this information asymmetry can lead to a stable outcome of population learning.
arXiv Detail & Related papers (2020-10-21T11:19:53Z)
- Distributing entanglement with separable states: assessment of encoding and decoding imperfections [55.41644538483948]
Entanglement can be distributed using a carrier which is always separable from the rest of the systems involved.
We consider the effect of incoherent dynamics acting alongside imperfect unitary interactions.
We show that entanglement gain is possible even with substantial unitary errors.
arXiv Detail & Related papers (2020-02-11T15:25:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.