Evolution of Cooperative Hunting in Artificial Multi-layered Societies
- URL: http://arxiv.org/abs/2005.11580v5
- Date: Fri, 15 Jan 2021 19:43:23 GMT
- Title: Evolution of Cooperative Hunting in Artificial Multi-layered Societies
- Authors: Honglin Bao and Wolfgang Banzhaf
- Abstract summary: The complexity of cooperative behavior is a crucial issue in multiagent-based social simulation.
In this paper, an agent-based model is proposed to study the evolution of cooperative hunting behaviors in an artificial society.
- Score: 3.270664282725826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The complexity of cooperative behavior is a crucial issue in multiagent-based
social simulation. In this paper, an agent-based model is proposed to study the
evolution of cooperative hunting behaviors in an artificial society. In this
model, the standard stag hunt game is modified into a new situation with
social hierarchy and penalty. The agent society is divided into multiple layers
with supervisors and subordinates. In each layer, the society is divided into
multiple clusters. A supervisor controls all subordinates in a cluster locally.
Subordinates interact with rivals through reinforcement learning, and report
learning information to their corresponding supervisor. Supervisors process the
reported information through repeated affiliation-based aggregation and by
information exchange with other supervisors, then pass down the reprocessed
information to subordinates as guidance. Subordinates, in turn, update learning
information according to guidance, following the "win stay, lose shift"
strategy. Experiments are carried out to test the evolution of cooperation in
this closed-loop semi-supervised emergent system with different parameters. We
also study the variations and phase transitions in this game setting.
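To make the closed loop concrete, here is a minimal Python sketch of one round of the model as the abstract describes it: subordinates play a penalized stag hunt, supervisors aggregate cluster reports and exchange them with each other, and subordinates apply "win stay, lose shift" against the returned guidance. The payoff values, the 50/50 blending weight, and the random pairing are illustrative assumptions, and the subordinates' reinforcement-learning step is collapsed into a direct payoff comparison.

```python
import random

# Illustrative stag-hunt payoffs with a penalty for hunting stag alone
# (the values are placeholders, not the paper's parameters).
PAYOFF = {
    ("stag", "stag"): (4, 4),   # joint hunt succeeds
    ("stag", "hare"): (-1, 2),  # lone stag hunter is penalized
    ("hare", "stag"): (2, -1),
    ("hare", "hare"): (2, 2),
}

class Subordinate:
    def __init__(self):
        self.action = random.choice(["stag", "hare"])
        self.payoff = 0.0

    def play(self, rival):
        self.payoff, rival.payoff = PAYOFF[(self.action, rival.action)]

    def update(self, guidance):
        # "Win stay, lose shift": keep the action if the payoff met the
        # supervisor's guidance value, otherwise switch.
        if self.payoff < guidance:
            self.action = "hare" if self.action == "stag" else "stag"

def round_step(clusters):
    """One closed-loop round: play, report, aggregate, guide, update."""
    subs = [s for cluster in clusters for s in cluster]
    random.shuffle(subs)
    for a, b in zip(subs[0::2], subs[1::2]):
        a.play(b)
    # Each supervisor aggregates the payoffs reported by its cluster ...
    local = [sum(s.payoff for s in c) / len(c) for c in clusters]
    # ... and exchanges information with the other supervisors before
    # passing reprocessed guidance back down.
    global_avg = sum(local) / len(local)
    for cluster, loc in zip(clusters, local):
        guidance = 0.5 * loc + 0.5 * global_avg
        for s in cluster:
            s.update(guidance)

clusters = [[Subordinate() for _ in range(10)] for _ in range(4)]
for _ in range(200):
    round_step(clusters)
print(sum(s.action == "stag" for c in clusters for s in c), "stag hunters")
```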
Related papers
- Emergent Dominance Hierarchies in Reinforcement Learning Agents [5.451419559128312]
Modern Reinforcement Learning (RL) algorithms are able to outperform humans in a wide variety of tasks.
We show that populations of RL agents can invent, learn, enforce, and transmit a dominance hierarchy to new populations.
The dominance hierarchies that emerge have a similar structure to those studied in chickens, mice, fish, and other species.
arXiv Detail & Related papers (2024-01-21T16:59:45Z)
- Deconstructing Cooperation and Ostracism via Multi-Agent Reinforcement Learning [3.3751859064985483]
We show that network rewiring facilitates mutual cooperation even when one agent always offers cooperation.
We also find that ostracism alone is not sufficient to make cooperation emerge.
Our findings provide insights into the conditions and mechanisms necessary for the emergence of cooperation.
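As a rough illustration of the rewiring mechanism, the sketch below lets an agent sever a link to a defecting neighbor (ostracism) and reconnect elsewhere; the severing probability and the reconnection rule are assumptions, not the paper's exact mechanism.

```python
import random

def rewire(graph, actions, p_rewire=0.5):
    # With probability p_rewire, an agent cuts its link to a defecting
    # neighbor (ostracism) and reconnects to a random non-neighbor.
    agents = list(graph)
    for a in agents:
        for b in list(graph[a]):
            if actions[b] == "defect" and random.random() < p_rewire:
                graph[a].discard(b)
                graph[b].discard(a)
                candidates = [c for c in agents
                              if c not in (a, b) and c not in graph[a]]
                if candidates:
                    c = random.choice(candidates)
                    graph[a].add(c)
                    graph[c].add(a)

# Hypothetical 4-agent network in which agent 3 defects.
graph = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
rewire(graph, actions={0: "cooperate", 1: "cooperate",
                       2: "cooperate", 3: "defect"})
print(graph)
```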
arXiv Detail & Related papers (2023-10-06T23:18:55Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL)
It combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
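A reward machine is a small finite-state automaton over abstract events whose transitions emit reward, which makes a non-Markovian reward Markovian in the joint (environment state, machine state). A minimal sketch with hypothetical events and rewards for a two-step cooperative sub-task structure:

```python
class RewardMachine:
    # state -> {event: (next_state, reward)}; the events and rewards
    # here are hypothetical, chosen only to show the sub-task structure.
    def __init__(self):
        self.delta = {
            "u0": {"button_pressed": ("u1", 0.0)},  # sub-task 1 done
            "u1": {"door_opened": ("u2", 1.0)},     # sub-task 2 done
            "u2": {},                               # terminal state
        }
        self.state = "u0"

    def step(self, events):
        reward = 0.0
        for e in events:
            if e in self.delta[self.state]:
                self.state, r = self.delta[self.state][e]
                reward += r
        return reward

rm = RewardMachine()
print(rm.step({"door_opened"}))     # 0.0: out of order, machine stays in u0
print(rm.step({"button_pressed"}))  # 0.0: advances to u1
print(rm.step({"door_opened"}))     # 1.0: sub-tasks completed in order
```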
arXiv Detail & Related papers (2023-03-24T15:12:28Z)
- Learning cooperative behaviours in adversarial multi-agent systems [2.355408272992293]
This work extends an existing virtual multi-agent platform called RoboSumo to create TripleSumo.
We investigate a scenario in which two agents, 'Bug' and 'Ant', must team up and push another agent, 'Spider', out of the arena.
To tackle this goal, the newly added agent 'Bug' is trained during an ongoing match between 'Ant' and 'Spider'.
arXiv Detail & Related papers (2023-02-10T22:12:29Z)
- ELIGN: Expectation Alignment as a Multi-Agent Intrinsic Reward [29.737986509769808]
We propose ELIGN (expectation alignment), a self-supervised intrinsic reward.
Similar to how animals collaborate in a decentralized manner with those in their vicinity, agents trained with expectation alignment learn behaviors that match their neighbors' expectations.
We show that agent coordination improves through expectation alignment because agents learn to divide tasks amongst themselves, break coordination symmetries, and confuse adversaries.
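One way to read the expectation-alignment reward: each agent is intrinsically rewarded for being predictable to its nearby teammates. The function below captures that idea as the negative mean prediction error; the interface and the distance measure are assumptions, not the published equation.

```python
import numpy as np

def elign_intrinsic_reward(agent_next_obs, neighbor_predictions):
    # Each neighbor predicts the agent's next observation; the agent
    # earns more intrinsic reward the better it matches those
    # expectations (negative mean prediction error).
    errors = [np.linalg.norm(agent_next_obs - pred)
              for pred in neighbor_predictions]
    return -float(np.mean(errors)) if errors else 0.0

obs = np.array([0.1, 0.2])
preds = [np.array([0.1, 0.25]), np.array([0.0, 0.2])]
print(elign_intrinsic_reward(obs, preds))  # closer predictions, higher reward
```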
arXiv Detail & Related papers (2022-10-09T22:24:44Z)
- LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent Reinforcement Learning [122.47938710284784]
We propose a novel framework for learning dynamic subtask assignment (LDSA) in cooperative MARL.
To reasonably assign agents to different subtasks, we propose an ability-based subtask selection strategy.
We show that LDSA learns reasonable and effective subtask assignment for better collaboration.
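A sketch of what an ability-based selection step could look like: each agent scores every subtask and samples one in proportion to its softmaxed scores. The scoring matrix and the sampling rule are assumptions, not LDSA's published formulation.

```python
import numpy as np

def assign_subtasks(ability):
    # ability[i, k] scores agent i on subtask k (hypothetical scores).
    logits = ability - ability.max(axis=1, keepdims=True)  # stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return [int(np.random.choice(probs.shape[1], p=p)) for p in probs]

ability = np.array([[2.0, 0.1],   # agent 0 is better at subtask 0
                    [0.1, 2.0],   # agent 1 is better at subtask 1
                    [1.0, 1.0]])  # agent 2 is indifferent
print(assign_subtasks(ability))
```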
arXiv Detail & Related papers (2022-05-05T10:46:16Z)
- Coach-assisted Multi-Agent Reinforcement Learning Framework for Unexpected Crashed Agents [120.91291581594773]
We present a formal formulation of a cooperative multi-agent reinforcement learning system with unexpected crashes.
We propose a coach-assisted multi-agent reinforcement learning framework, which introduces a virtual coach agent to adjust the crash rate during training.
To the best of our knowledge, this work is the first to study the unexpected crashes in the multi-agent system.
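Under assumed details, the virtual coach can be pictured as a controller that raises the crash rate once the team copes well and eases off when performance drops; the adaptation rule and thresholds below are illustrative, not the paper's.

```python
class VirtualCoach:
    def __init__(self, crash_rate=0.0, step_size=0.02, target_return=0.8):
        self.crash_rate = crash_rate        # probability an agent crashes
        self.step_size = step_size          # adjustment per evaluation
        self.target_return = target_return  # performance threshold

    def adjust(self, mean_episode_return):
        # Harden training (more crashes) once the team performs well,
        # ease off when performance falls below the target.
        if mean_episode_return > self.target_return:
            self.crash_rate = min(0.5, self.crash_rate + self.step_size)
        else:
            self.crash_rate = max(0.0, self.crash_rate - self.step_size)
        return self.crash_rate

coach = VirtualCoach()
for ret in [0.9, 0.9, 0.6]:   # hypothetical evaluation returns
    print(coach.adjust(ret))  # 0.02, 0.04, 0.02
```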
arXiv Detail & Related papers (2022-03-16T08:22:45Z)
- Evolving Dyadic Strategies for a Cooperative Physical Task [0.0]
We evolve simulated agents to explore a space of feasible role-switching policies.
Applying these switching policies in a cooperative manual task, agents process visual and haptic cues to decide when to switch roles.
We find that the best-performing dyads exhibit high temporal coordination (anti-synchrony).
arXiv Detail & Related papers (2020-04-22T13:23:12Z)
- Hierarchically Decoupled Imitation for Morphological Transfer [95.19299356298876]
We show that transferring learned information from a morphologically simpler agent can massively improve the sample efficiency of a more complex one.
First, we show that incentivizing a complex agent's low-level to imitate a simpler agent's low-level significantly improves zero-shot high-level transfer.
Second, we show that KL-regularized training of the high level stabilizes learning and prevents mode-collapse.
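The KL regularizer mentioned above can be sketched as a penalty keeping the complex agent's high-level policy close to the simpler agent's; the coefficient and the direction of the KL term below are assumptions.

```python
import torch
import torch.nn.functional as F

def high_level_loss(complex_logits, simple_logits, task_loss, beta=0.1):
    # Penalize the complex agent's high-level action distribution for
    # drifting from the simpler agent's, discouraging mode collapse.
    kl = F.kl_div(F.log_softmax(complex_logits, dim=-1),
                  F.softmax(simple_logits, dim=-1),
                  reduction="batchmean")
    return task_loss + beta * kl

complex_logits = torch.randn(32, 8)  # batch of 32, 8 high-level options
simple_logits = torch.randn(32, 8)
task_loss = torch.tensor(1.0)
print(high_level_loss(complex_logits, simple_logits, task_loss))
```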
arXiv Detail & Related papers (2020-03-03T18:56:49Z)
- Multi-Agent Interactions Modeling with Correlated Policies [53.38338964628494]
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework.
We develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL).
Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators.
arXiv Detail & Related papers (2020-01-04T17:31:53Z)