Network coevolution drives segregation and enhances Pareto optimal
equilibrium selection in coordination games
- URL: http://arxiv.org/abs/2211.12116v1
- Date: Tue, 22 Nov 2022 09:33:02 GMT
- Title: Network coevolution drives segregation and enhances Pareto optimal
equilibrium selection in coordination games
- Authors: Miguel A. González Casado, Angel Sánchez and Maxi San Miguel
- Abstract summary: We analyze a coevolution model that couples the changes in agents' actions with the network dynamics.
We find that both for RD and UI in a GCG, there is a regime of intermediate values of plasticity in which the system fully coordinates on the payoff-dominant action.
Coevolution enhances payoff-dominant equilibrium selection for both update rules.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work we assess the role played by the dynamical adaptation of the
interaction network among agents playing Coordination Games in reaching
global coordination and in equilibrium selection. Specifically, we analyze
a coevolution model that couples the changes in agents' actions with the
network dynamics, so that while agents play the game, they are able to sever
some of their current connections and connect with others. We focus on two
update rules: Replicator Dynamics (RD) and Unconditional Imitation (UI). We
investigate a Pure Coordination Game (PCG), in which both choices are equivalent,
and a General Coordination Game (GCG), for which there is a risk-dominant
action and a payoff-dominant one. The network plasticity is measured by the
probability to rewire links. As this plasticity parameter varies, the system
undergoes a transition from a regime in which it fully coordinates in a single
connected component to one in which it fragments into two connected components,
each coordinated on a different action (whether or not the two actions are
equivalent). The nature of this fragmentation transition is
different for different update rules. Second, we find that both for RD and UI
in a GCG, there is a regime of intermediate values of plasticity, before the
fragmentation transition, for which the system is able to fully coordinate in a
single-component network on the payoff-dominant action, i.e., coevolution
enhances payoff-dominant equilibrium selection for both update rules.
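The coevolution mechanism described above can be sketched in a few lines. The following is a minimal, illustrative simulation step, not the authors' code: the payoff values, function names, and rewiring details (severing a link to a discordant neighbour and attaching to a random non-neighbour) are assumptions chosen to match the abstract's description, with Unconditional Imitation as the update rule.

```python
import random

# Illustrative 2x2 coordination-game payoffs: action 1 is payoff-dominant,
# action 0 risk-dominant (values are assumptions, not the paper's).
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 1, (1, 1): 4}

def total_payoff(agent, actions, neighbors):
    """Sum of payoffs the agent earns against all of its neighbours."""
    return sum(PAYOFF[(actions[agent], actions[nb])] for nb in neighbors[agent])

def coevolution_step(actions, neighbors, plasticity, rng=random):
    """One asynchronous update of a random agent: with probability
    `plasticity` it rewires a link to a neighbour playing a different
    action; otherwise it imitates its most successful neighbour
    (Unconditional Imitation)."""
    i = rng.randrange(len(actions))
    if not neighbors[i]:
        return
    if rng.random() < plasticity:
        # Rewiring: sever one discordant link and attach to a random
        # non-neighbour, keeping the adjacency lists symmetric.
        discordant = [j for j in neighbors[i] if actions[j] != actions[i]]
        candidates = [k for k in range(len(actions))
                      if k != i and k not in neighbors[i]]
        if discordant and candidates:
            j = rng.choice(discordant)
            k = rng.choice(candidates)
            neighbors[i].remove(j); neighbors[j].remove(i)
            neighbors[i].append(k); neighbors[k].append(i)
    else:
        # Unconditional Imitation: copy the action of the best-paid agent
        # in the neighbourhood, including oneself.
        pool = [i] + neighbors[i]
        best = max(pool, key=lambda a: total_payoff(a, actions, neighbors))
        actions[i] = actions[best]
```

At low plasticity the imitation dynamics dominates and the network stays in one component; at high plasticity discordant links are severed faster than actions converge, which is the fragmentation regime the abstract describes.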
Related papers
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust
Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- Plug-and-Play Regulators for Image-Text Matching [76.28522712930668]
Exploiting fine-grained correspondence and visual-semantic alignments has shown great potential in image-text matching.
We develop two simple but quite effective regulators which efficiently encode the message output to automatically contextualize and aggregate cross-modal representations.
Experiments on MSCOCO and Flickr30K datasets validate that they can bring an impressive and consistent R@1 gain on multiple models.
arXiv Detail & Related papers (2023-03-23T15:42:05Z)
- Rethinking Trajectory Prediction via "Team Game" [118.59480535826094]
We present a novel formulation for multi-agent trajectory prediction, which explicitly introduces the concept of interactive group consensus.
On two multi-agent settings, i.e. team sports and pedestrians, the proposed framework consistently achieves superior performance compared to existing methods.
arXiv Detail & Related papers (2022-10-17T07:16:44Z)
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
- Coordination and equilibrium selection in games: the role of local effects [0.0]
We study the role of local effects and finite size effects in reaching coordination and in equilibrium selection in two-player coordination games.
We investigate three update rules: the replicator dynamics (RD), the best response (BR), and the unconditional imitation (UI).
For the pure coordination game with two equivalent strategies we find a transition from a disordered state to a state of full coordination for a critical value of the network connectivity.
arXiv Detail & Related papers (2021-10-19T15:05:29Z)
- X-volution: On the unification of convolution and self-attention [52.80459687846842]
We propose a multi-branch elementary module composed of both convolution and self-attention operation.
The proposed X-volution achieves highly competitive visual understanding improvements.
arXiv Detail & Related papers (2021-06-04T04:32:02Z)
- Competing Adaptive Networks [56.56653763124104]
We develop an algorithm for decentralized competition among teams of adaptive agents.
We present an application in the decentralized training of generative adversarial neural networks.
arXiv Detail & Related papers (2021-03-29T14:42:15Z)
- Multi-Agent Coordination in Adversarial Environments through Signal Mediated Strategies [37.00818384785628]
Team members can coordinate their strategies before the beginning of the game, but are unable to communicate during the playing phase of the game.
In this setting, model-free RL methods are oftentimes unable to capture coordination because agents' policies are executed in a decentralized fashion.
We show convergence to coordinated equilibria in cases where previous state-of-the-art multi-agent RL algorithms did not.
arXiv Detail & Related papers (2021-02-09T18:44:16Z)
- Resolving Implicit Coordination in Multi-Agent Deep Reinforcement Learning with Deep Q-Networks & Game Theory [0.0]
We address two major challenges of implicit coordination in deep reinforcement learning: non-stationarity and exponential growth of state-action space.
We demonstrate that knowledge of game type leads to an assumption of mirrored best responses and faster convergence than Nash-Q.
Inspired by the dueling network architecture, we learn both a single and joint agent representation, and merge them via element-wise addition.
arXiv Detail & Related papers (2020-12-08T17:30:47Z)
- Calibration of Shared Equilibria in General Sum Partially Observable Markov Games [15.572157454411533]
We consider a general sum partially observable Markov game where agents of different types share a single policy network.
This paper aims at i) formally understanding equilibria reached by such agents, and ii) matching emergent phenomena of such equilibria to real-world targets.
arXiv Detail & Related papers (2020-06-23T15:14:20Z)
- Evolving Dyadic Strategies for a Cooperative Physical Task [0.0]
We evolve simulated agents to explore a space of feasible role-switching policies.
Applying these switching policies in a cooperative manual task, agents process visual and haptic cues to decide when to switch roles.
We find that the best-performing dyads exhibit high temporal coordination (anti-synchrony).
arXiv Detail & Related papers (2020-04-22T13:23:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.