Coordination and equilibrium selection in games: the role of local
effects
- URL: http://arxiv.org/abs/2110.10036v1
- Date: Tue, 19 Oct 2021 15:05:29 GMT
- Title: Coordination and equilibrium selection in games: the role of local
effects
- Authors: Tomasz Raducha and Maxi San Miguel
- Abstract summary: We study the role of local effects and finite size effects in reaching coordination and in equilibrium selection in two-player coordination games.
We investigate three update rules -- the replicator dynamics (RD), the best response (BR), and the unconditional imitation (UI)
For the pure coordination game with two equivalent strategies we find a transition from a disordered state to a state of full coordination for a critical value of the network connectivity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the role of local effects and finite size effects in reaching
coordination and in equilibrium selection in different types of two-player
coordination games. We investigate three update rules -- the replicator
dynamics (RD), the best response (BR), and the unconditional imitation (UI) --
for coordination games on random graphs. Local effects turn out to be
significantly more important for the UI update rule. For the pure coordination
game with two equivalent strategies we find a transition from a disordered
state to a state of full coordination for a critical value of the network
connectivity. The transition is system-size-independent for the BR and RD
update rules. For the UI update rule it is system-size dependent, but
coordination can always be reached below the connectivity of a complete graph.
We also consider the general coordination game which covers a range of games,
such as the stag hunt. For these games there is a payoff-dominant strategy and
a risk-dominant strategy with associated states of equilibrium coordination. We
analyse equilibrium selection analytically and numerically. For the RD and BR
update rules mean-field predictions agree with simulations and the
risk-dominant strategy is evolutionarily favoured independently of local effects.
When players use the unconditional imitation, however, we observe coordination
in the payoff-dominant strategy. Surprisingly, the selection of the
payoff-dominant equilibrium occurs only below a critical value of the network
connectivity and it disappears in complete graphs. As we show, it is a
combination of local effects and update rule that allows for coordination on
the payoff-dominant strategy.
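To make the abstract's setup concrete, the following is a minimal simulation sketch, not the authors' code: a two-strategy coordination game on an Erdős–Rényi random graph, updated synchronously under one of the three rules discussed above (RD, BR, UI). The payoff values, the normalisation used in the replicator-like rule, and the synchronous update scheme are illustrative assumptions only.

```python
import random

import networkx as nx

# Example payoff matrix for a general coordination game (stag-hunt-like values chosen
# for illustration). PAYOFF[s_i][s_j] is the payoff to a player using strategy s_i
# against a neighbour using s_j. Strategy 0 is payoff-dominant (highest mutual payoff);
# strategy 1 is risk-dominant, since d - b > a - c for a, b, c, d = 1.0, -1.0, 0.0, 0.5.
PAYOFF = [[1.0, -1.0],
          [0.0,  0.5]]


def total_payoff(G, state, i):
    """Accumulated payoff of node i against all of its neighbours."""
    return sum(PAYOFF[state[i]][state[j]] for j in G.neighbors(i))


def update_node(G, state, i, rule):
    """Next strategy of node i under the given update rule (ties resolved arbitrarily)."""
    neigh = list(G.neighbors(i))
    if not neigh:
        return state[i]

    if rule == "BR":
        # Best response: choose the strategy that earns more against the current
        # neighbourhood profile.
        scores = [sum(PAYOFF[s][state[j]] for j in neigh) for s in (0, 1)]
        return 0 if scores[0] >= scores[1] else 1

    if rule == "UI":
        # Unconditional imitation: copy the strategy of the best-performing player
        # among the neighbours and oneself.
        best = max(neigh + [i], key=lambda j: total_payoff(G, state, j))
        return state[best]

    if rule == "RD":
        # Replicator-like rule: compare with a random neighbour and imitate it with a
        # probability proportional to the positive payoff difference (normalised so
        # the probability stays in [0, 1]).
        j = random.choice(neigh)
        diff = total_payoff(G, state, j) - total_payoff(G, state, i)
        payoff_range = max(max(r) for r in PAYOFF) - min(min(r) for r in PAYOFF)
        max_diff = max(G.degree(i), G.degree(j)) * payoff_range
        if diff > 0 and random.random() < diff / max_diff:
            return state[j]
        return state[i]

    raise ValueError(f"unknown rule: {rule}")


def simulate(n=500, mean_degree=8, rule="UI", steps=100, seed=0):
    """Fraction of players ending up with strategy 0 on an Erdos-Renyi graph."""
    random.seed(seed)
    G = nx.erdos_renyi_graph(n, mean_degree / (n - 1), seed=seed)
    state = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        # Synchronous update: every node revises its strategy using last round's state.
        state = [update_node(G, state, i, rule) for i in range(n)]
    return sum(1 for s in state if s == 0) / n


if __name__ == "__main__":
    for rule in ("RD", "BR", "UI"):
        print(rule, simulate(rule=rule))
```

Varying mean_degree up to n - 1 (a complete graph) is what probes the dependence on network connectivity and the role of local effects described in the abstract.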
Related papers
- On Tractable $Φ$-Equilibria in Non-Concave Games [53.212133025684224]
Non-concave games introduce significant game-theoretic and optimization challenges.
We show that when $\Phi$ is finite, there exists an efficient uncoupled learning algorithm that converges to the corresponding $\Phi$-equilibria.
We also show that Online Gradient Descent can efficiently approximate $\Phi$-equilibria in non-trivial regimes.
arXiv Detail & Related papers (2024-03-13T01:51:30Z) - Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z) - On the Convergence of No-Regret Learning Dynamics in Time-Varying Games [89.96815099996132]
We characterize the convergence of optimistic gradient descent (OGD) in time-varying games.
Our framework yields sharp convergence bounds for the equilibrium gap of OGD in zero-sum games.
We also provide new insights on dynamic regret guarantees in static games.
arXiv Detail & Related papers (2023-01-26T17:25:45Z) - Finding mixed-strategy equilibria of continuous-action games without
gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of continuous-action games without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z) - Network coevolution drives segregation and enhances Pareto optimal
equilibrium selection in coordination games [0.0]
We analyze a coevolution model that couples the changes in agents' actions with the network dynamics.
We find that, both for RD and UI in a general coordination game (GCG), there is a regime of intermediate values of plasticity.
Coevolution enhances payoff-dominant equilibrium selection for both update rules.
arXiv Detail & Related papers (2022-11-22T09:33:02Z) - How Bad is Selfish Driving? Bounding the Inefficiency of Equilibria in
Urban Driving Games [64.71476526716668]
We study the (in)efficiency of any equilibrium players might agree to play.
We obtain guarantees that refine existing bounds on the Price of Anarchy.
Although the obtained guarantees concern open-loop trajectories, we observe efficient equilibria even when agents employ closed-loop policies.
arXiv Detail & Related papers (2022-10-24T09:32:40Z) - Multi-Agent Coordination in Adversarial Environments through Signal
Mediated Strategies [37.00818384785628]
Team members can coordinate their strategies before the beginning of the game, but are unable to communicate during the playing phase of the game.
In this setting, model-free RL methods are oftentimes unable to capture coordination because agents' policies are executed in a decentralized fashion.
We show convergence to coordinated equilibria in cases where previous state-of-the-art multi-agent RL algorithms did not.
arXiv Detail & Related papers (2021-02-09T18:44:16Z) - Independent Policy Gradient Methods for Competitive Reinforcement
Learning [62.91197073795261]
We obtain global, non-asymptotic convergence guarantees for independent learning algorithms in competitive reinforcement learning settings with two agents.
We show that if both players run policy gradient methods in tandem, their policies will converge to a min-max equilibrium of the game, as long as their learning rates follow a two-timescale rule.
arXiv Detail & Related papers (2021-01-11T23:20:42Z) - Hindsight and Sequential Rationality of Correlated Play [18.176128899338433]
We look at algorithms that ensure strong performance in hindsight relative to what could have been achieved with modified behavior.
We develop and advocate for this hindsight framing of learning in general sequential decision-making settings.
We present examples illustrating the distinct strengths and weaknesses of each type of equilibrium in the literature.
arXiv Detail & Related papers (2020-12-10T18:30:21Z) - Resolving Implicit Coordination in Multi-Agent Deep Reinforcement
Learning with Deep Q-Networks & Game Theory [0.0]
We address two major challenges of implicit coordination in deep reinforcement learning: non-stationarity and exponential growth of state-action space.
We demonstrate that knowledge of game type leads to an assumption of mirrored best responses and faster convergence than Nash-Q.
Inspired by the dueling network architecture, we learn both a single and joint agent representation, and merge them via element-wise addition.
arXiv Detail & Related papers (2020-12-08T17:30:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.