Modelling Cooperation in Network Games with Spatio-Temporal Complexity
- URL: http://arxiv.org/abs/2102.06911v1
- Date: Sat, 13 Feb 2021 12:04:52 GMT
- Title: Modelling Cooperation in Network Games with Spatio-Temporal Complexity
- Authors: Michiel A. Bakker, Richard Everett, Laura Weidinger, Iason Gabriel,
William S. Isaac, Joel Z. Leibo, Edward Hughes
- Abstract summary: We study the emergence of self-organized cooperation in complex gridworld domains.
Using multi-agent deep reinforcement learning, we simulate an agent society for a variety of plausible mechanisms.
Our methods have implications for mechanism design in both human and artificial agent systems.
- Score: 11.665246332943058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The real world is awash with multi-agent problems that require collective
action by self-interested agents, from the routing of packets across a computer
network to the management of irrigation systems. Such systems have local
incentives for individuals, whose behavior has an impact on the global outcome
for the group. Given appropriate mechanisms describing agent interaction,
groups may achieve socially beneficial outcomes, even in the face of short-term
selfish incentives. In many cases, collective action problems possess an
underlying graph structure, whose topology crucially determines the
relationship between local decisions and emergent global effects. Such
scenarios have received great attention through the lens of network games.
However, this abstraction typically collapses important dimensions, such as
geometry and time, relevant to the design of mechanisms promoting cooperation.
In parallel work, multi-agent deep reinforcement learning has shown great
promise in modelling the emergence of self-organized cooperation in complex
gridworld domains. Here we apply this paradigm in graph-structured collective
action problems. Using multi-agent deep reinforcement learning, we simulate an
agent society for a variety of plausible mechanisms, finding clear transitions
between different equilibria over time. We define analytic tools inspired by
related literatures to measure the social outcomes, and use these to draw
conclusions about the efficacy of different environmental interventions. Our
methods have implications for mechanism design in both human and artificial
agent systems.
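The graph-structured collective action problem the abstract describes can be sketched minimally. The ring topology, payoff table, and agent count below are illustrative assumptions, not the paper's environment:

```python
# Minimal networked social dilemma: each agent plays C (1) or D (0)
# against every neighbour on a graph. Payoffs are a hypothetical
# prisoner's-dilemma table, not taken from the paper.
from itertools import product

# Ring graph over 4 agents: the topology decides who interacts with whom.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def pairwise(a, b):
    """PD payoff for the first player: R=3, S=0, T=5, P=1."""
    table = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}
    return table[(a, b)]

def payoffs(actions):
    """Sum each agent's payoff over its incident edges."""
    p = [0] * len(actions)
    for i, j in edges:
        p[i] += pairwise(actions[i], actions[j])
        p[j] += pairwise(actions[j], actions[i])
    return p

# Social welfare is maximised by full cooperation, yet each agent has a
# short-term incentive to defect -- the collective action problem.
best = max(product([0, 1], repeat=4), key=lambda a: sum(payoffs(a)))
print(best, sum(payoffs(best)))  # -> (1, 1, 1, 1) 24
```

Mechanisms of the kind the paper studies change this payoff structure or interaction graph so that the selfish equilibrium moves closer to the welfare-maximising one.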
Related papers
- Evolving Neural Networks Reveal Emergent Collective Behavior from Minimal Agent Interactions [0.0]
We investigate how neural networks evolve to control agents' behavior in a dynamic environment.
Simpler behaviors, such as lane formation and laminar flow, are characterized by more linear network operations.
Specific environmental parameters, such as moderate noise, broader field of view, and lower agent density, promote the evolution of non-linear networks.
arXiv Detail & Related papers (2024-10-25T17:43:00Z)
- Navigating the swarm: Deep neural networks command emergent behaviours [2.7059353835118602]

We show that it is possible to generate coordinated structures in collective behavior with intended global patterns by fine-tuning an inter-agent interaction rule.
Our strategy employs deep neural networks, obeying the laws of dynamics, to find interaction rules that command desired structures.
Our findings pave the way for new applications in robotic swarm operations, active matter organisation, and for the uncovering of obscure interaction rules in biological systems.
arXiv Detail & Related papers (2024-07-16T02:46:11Z)
- Behavior-Inspired Neural Networks for Relational Inference [3.7219180084857473]
Recent works learn to categorize relationships between agents based on observations of their physical behavior.
We introduce a level of abstraction between the observable behavior of agents and the latent categories that determine their behavior.
We integrate the physical proximity of agents and their preferences in a nonlinear opinion dynamics model which provides a mechanism to identify mutually exclusive latent categories, predict an agent's evolution in time, and control an agent's physical behavior.
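A nonlinear opinion dynamics update of the kind the summary refers to can be sketched generically. The tanh saturation, coefficients, and two-agent graph below are illustrative assumptions, not the paper's model:

```python
import math

def opinion_step(x, adj, d=1.0, u=1.0, alpha=0.5, gamma=2.0, dt=0.01):
    """One Euler step of a generic nonlinear opinion-dynamics model:
    dx_i/dt = -d*x_i + u*tanh(alpha*x_i + gamma*sum_j adj[i][j]*x_j).
    The saturating nonlinearity makes shared-opinion states attracting."""
    n = len(x)
    new = []
    for i in range(n):
        social = sum(adj[i][j] * x[j] for j in range(n))
        dx = -d * x[i] + u * math.tanh(alpha * x[i] + gamma * social)
        new.append(x[i] + dt * dx)
    return new

# Two connected agents with a small initial disagreement converge
# towards a shared nonzero opinion -- one "latent category".
x = [0.1, -0.05]
adj = [[0, 1], [1, 0]]
for _ in range(2000):
    x = opinion_step(x, adj)
```

The mutually exclusive categories mentioned in the summary correspond to the distinct attracting opinion states of such a model.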
arXiv Detail & Related papers (2024-06-20T21:36:54Z)
- Problem-Solving in Language Model Networks [44.99833362998488]
This work extends the concept of multi-agent debate to more general network topologies.
It measures the question-answering accuracy, influence, consensus, and the effects of bias on the collective.
arXiv Detail & Related papers (2024-06-18T07:59:14Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning [58.84311336011451]
We propose a novel gradient-based state representation for multi-agent reinforcement learning.
We employ denoising score matching to learn the social gradient fields (SocialGFs) from offline samples.
In practice, we integrate SocialGFs into the widely used multi-agent reinforcement learning algorithms, e.g., MAPPO.
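Denoising score matching, the training signal behind SocialGFs, can be illustrated in one dimension. The linear score model and Gaussian data below are assumptions for exposition, not the paper's setup:

```python
import random

# Denoising score matching, 1-D sketch (illustrative, not the paper's code).
# Perturb each sample with Gaussian noise and regress a score model
# s(x) = w*x onto -noise/sigma**2; the minimiser approximates the score
# of the noise-smoothed data density.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]  # offline samples ~ N(0, 1)
sigma = 0.5

num = den = 0.0
for x in data:
    eps = random.gauss(0.0, sigma)
    x_noisy = x + eps
    target = -eps / sigma ** 2   # DSM regression target
    num += target * x_noisy      # closed-form least squares for w
    den += x_noisy ** 2
w = num / den

# The smoothed density is N(0, 1 + sigma^2) with score -x / (1 + sigma^2),
# so w should come out near -1 / 1.25 = -0.8.
```

In the multi-agent setting, the learned field plays the role of a gradient pointing agents towards socially favourable configurations.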
arXiv Detail & Related papers (2024-05-03T04:12:19Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Rethinking Trajectory Prediction via "Team Game" [118.59480535826094]
We present a novel formulation for multi-agent trajectory prediction, which explicitly introduces the concept of interactive group consensus.
On two multi-agent settings, i.e. team sports and pedestrians, the proposed framework consistently achieves superior performance compared to existing methods.
arXiv Detail & Related papers (2022-10-17T07:16:44Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
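The marginals-plus-copula decomposition can be illustrated with a fixed Gaussian copula. The exponential marginals and the correlation value are hypothetical; in the paper both components are learned:

```python
import math
import random

# Gaussian-copula sketch: per-agent marginal distributions are coupled
# through a shared dependence structure, here a bivariate normal with
# correlation rho (all choices illustrative, not the paper's model).
random.seed(0)
rho = 0.9

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_joint():
    # Correlated standard normals -> correlated uniforms (the copula) ...
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, 1.0)
    u1, u2 = norm_cdf(z1), norm_cdf(z2)
    # ... pushed through each agent's own marginal: Exp(1) and Exp(2).
    a1 = -math.log(1.0 - u1)         # inverse CDF of Exp(rate=1)
    a2 = -math.log(1.0 - u2) / 2.0   # inverse CDF of Exp(rate=2)
    return a1, a2

samples = [sample_joint() for _ in range(20000)]
```

Swapping either marginal leaves the dependence structure untouched, which is exactly the separation of local behavior from coordination that the abstract describes.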
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- A game-theoretic analysis of networked system control for common-pool resource management using multi-agent reinforcement learning [54.55119659523629]
Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere.
arXiv Detail & Related papers (2020-10-15T14:12:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.