Agent-Based Simulation of Collective Cooperation: From Experiment to
Model
- URL: http://arxiv.org/abs/2005.12712v2
- Date: Wed, 7 Oct 2020 09:40:47 GMT
- Title: Agent-Based Simulation of Collective Cooperation: From Experiment to
Model
- Authors: Benedikt Kleinmeier, Gerta Köster, John Drury
- Abstract summary: We present an experiment to observe what happens when humans pass through a dense static crowd.
We derive a model that incorporates agents' perception and cognitive processing of a situation that needs cooperation.
Agents' ability to successfully get through a dense crowd emerges as an effect of the psychological model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulation models of pedestrian dynamics have become an invaluable tool for
evacuation planning. Typically crowds are assumed to stream unidirectionally
towards a safe area. Simulated agents avoid collisions through mechanisms that
belong to each individual, such as being repelled from each other by imaginary
forces. But classic locomotion models fail when collective cooperation is
called for, notably when an agent, say a first-aid attendant, needs to forge a
path through a densely packed group. We present a controlled experiment to
observe what happens when humans pass through a dense static crowd. We
formulate and test hypotheses on salient phenomena. We discuss our observations
in a psychological framework. We derive a model that incorporates: agents'
perception and cognitive processing of a situation that needs cooperation;
selection from a portfolio of behaviours, such as being cooperative; and a
suitable action, such as swapping places. Agents' ability to successfully get
through a dense crowd emerges as an effect of the psychological model.
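The abstract describes a pipeline in which agents perceive a situation, select a behaviour from a portfolio, and then pick a concrete action such as swapping places. A minimal sketch of that perceive-select-act loop is given below; all class names, the density threshold, and the action strings are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch of a perceive-select-act agent loop; names and the
# density threshold are illustrative, not taken from the paper.
from dataclasses import dataclass
from enum import Enum, auto


class Behaviour(Enum):
    DEFAULT_LOCOMOTION = auto()  # ordinary individual collision avoidance
    COOPERATIVE = auto()         # make room for others / swap places


@dataclass
class Percept:
    local_density: float  # perceived crowd density around the agent (1/m^2)
    blocked: bool         # is the intended path blocked by other agents?


def select_behaviour(p: Percept, density_threshold: float = 4.0) -> Behaviour:
    """Cognitive step: switch to cooperation when the crowd is too dense
    for individual collision avoidance to succeed."""
    if p.blocked and p.local_density >= density_threshold:
        return Behaviour.COOPERATIVE
    return Behaviour.DEFAULT_LOCOMOTION


def act(behaviour: Behaviour) -> str:
    """Action step: choose a concrete action for the selected behaviour."""
    if behaviour is Behaviour.COOPERATIVE:
        return "swap_places"  # e.g. exchange positions with a standing agent
    return "step_towards_target"
```

In this reading, the ability to get through the crowd is not hard-coded: it emerges because individual agents flip into the cooperative behaviour once their percepts cross the threshold.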
Related papers
- Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations [62.48505112245388]
We take an in-depth look at the causal awareness of modern representations of agent interactions.
We show that recent representations are already partially resilient to perturbations of non-causal agents.
We propose a metric learning approach that regularizes latent representations with causal annotations.
arXiv Detail & Related papers (2023-12-07T18:57:03Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Learning Goal-based Movement via Motivational-based Models in Cognitive Mobile Robots [58.720142291102135]
Humans have needs motivating their behavior according to intensity and context.
We also create preferences associated with each action's perceived pleasure, which are susceptible to change over time.
This makes decision-making more complex, requiring learning to balance needs and preferences according to the context.
arXiv Detail & Related papers (2023-02-20T04:52:24Z)
- Group Cohesion in Multi-Agent Scenarios as an Emergent Behavior [0.0]
We show that imbuing agents with intrinsic needs for group affiliation, certainty and competence will lead to the emergence of social behavior among agents.
This behavior expresses itself in altruism toward in-group agents and adversarial tendencies toward out-group agents.
arXiv Detail & Related papers (2022-11-03T18:37:05Z)
- Resonating Minds -- Emergent Collaboration Through Hierarchical Active Inference [0.0]
We investigate how efficient, automatic coordination processes at the level of mental states (intentions, goals) can lead to collaborative situated problem-solving.
We present a model of hierarchical active inference for collaborative agents (HAICA).
We show that belief resonance and active inference allow for quick and efficient agent coordination, and thus can serve as a building block for collaborative cognitive agents.
arXiv Detail & Related papers (2021-12-02T13:23:44Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copulas, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
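The stated decomposition, per-agent marginals plus a copula that carries only the dependence, can be illustrated with a Gaussian copula for two agents. The correlation value and the choice of marginals below are illustrative assumptions, not the paper's model:

```python
# Illustrative Gaussian-copula decomposition: each agent keeps its own
# marginal distribution, while the copula alone encodes their coordination.
# Correlation 0.8 and the specific marginals are assumptions for the demo.
import numpy as np
from scipy.stats import expon, norm

rng = np.random.default_rng(0)

# Copula part: dependence structure shared by the two agents.
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=10_000)
u = norm.cdf(z)  # uniform marginals; the dependence survives the transform

# Marginal part: each agent's own local behavioural distribution.
agent0_actions = norm.ppf(u[:, 0], loc=0.0, scale=1.0)  # Gaussian marginal
agent1_actions = expon.ppf(u[:, 1], scale=2.0)          # exponential marginal

# The actions stay coordinated even though the marginals differ in shape.
rho = np.corrcoef(agent0_actions, agent1_actions)[0, 1]
```

Swapping either marginal leaves the copula, and hence the coordination pattern, untouched, which is exactly the separation the summary describes.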
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- Deep reinforcement learning models the emergent dynamics of human cooperation [13.425401489679583]
Experimental research has been unable to shed light on how social cognitive mechanisms contribute to the where and when of collective action.
We leverage multi-agent deep reinforcement learning to model how a social-cognitive mechanism--specifically, the intrinsic motivation to achieve a good reputation--steers group behavior.
arXiv Detail & Related papers (2021-03-08T18:58:40Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Development of swarm behavior in artificial learning agents that adapt to different foraging environments [2.752817022620644]
We apply Projective Simulation to model each individual as an artificial learning agent.
We observe how different types of collective motion emerge depending on the distance the agents need to travel to reach the resources.
In addition, we study the properties of the individual trajectories that occur within the different types of emergent collective dynamics.
arXiv Detail & Related papers (2020-04-01T16:32:13Z)
- Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
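The guiding principle quoted above, reward joint effects that no agent could produce alone, admits a very small sketch. The function below is a hypothetical reading of that principle, not the paper's actual intrinsic-reward formulation:

```python
# Hedged sketch of a synergy-based intrinsic reward: pay a bonus only for
# the part of the joint effect that exceeds what any single agent achieves
# alone. A toy assumption, not the paper's formulation.
def synergy_bonus(joint_effect: float, solo_effects: list[float]) -> float:
    """Intrinsic reward = joint effect minus the best solo effect,
    clipped at zero so purely individual progress earns no bonus."""
    return max(0.0, joint_effect - max(solo_effects, default=0.0))
```

For instance, if two agents together move a heavy object (joint effect 1.0) that neither can budge alone (solo effects 0.0), the bonus is 1.0; if one agent could already do as well by itself, the bonus vanishes.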
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
- Loss aversion fosters coordination among independent reinforcement learners [0.0]
We study which factors can accelerate the emergence of collaborative behaviours among independent selfish learning agents.
We model two versions of the game with independent reinforcement learning agents.
We show experimentally that introducing loss aversion fosters cooperation by accelerating its appearance.
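Loss aversion in a reinforcement learner is typically implemented by weighting losses more heavily than equal-sized gains before the value update, in the spirit of prospect theory. The sketch below applies such a transform inside a standard Q-learning update; the weighting factor of 2.0 and all hyperparameters are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of loss-averse reward shaping for an independent Q-learner.
# The loss weight lam = 2.0 and the learning parameters are assumptions.
def loss_averse(reward: float, reference: float = 0.0,
                lam: float = 2.0) -> float:
    """Losses relative to the reference point weigh lam times more than
    equal-sized gains (prospect-theory style value transform)."""
    delta = reward - reference
    return delta if delta >= 0 else lam * delta


def q_update(q: float, reward: float, q_next_max: float,
             alpha: float = 0.1, gamma: float = 0.95) -> float:
    """Standard tabular Q-learning update applied to the shaped reward."""
    shaped = loss_averse(reward)
    return q + alpha * (shaped + gamma * q_next_max - q)
```

Because missed-coordination outcomes are felt as amplified losses, agents trained this way would be pushed toward the cooperative equilibrium faster, which matches the summary's claim.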
arXiv Detail & Related papers (2019-12-29T11:22:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.