Predator-prey survival pressure is sufficient to evolve swarming behaviors
- URL: http://arxiv.org/abs/2308.12624v1
- Date: Thu, 24 Aug 2023 08:03:11 GMT
- Title: Predator-prey survival pressure is sufficient to evolve swarming behaviors
- Authors: Jianan Li, Liang Li, Shiyu Zhao
- Abstract summary: We propose a minimal predator-prey coevolution framework based on mixed cooperative-competitive multiagent reinforcement learning.
Surprisingly, our analysis of this approach reveals an unexpectedly rich diversity of emergent behaviors for both prey and predators.
- Score: 22.69193229479221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The comprehension of how local interactions give rise to global
collective behavior is of utmost importance in both biological and physical research.
Traditional agent-based models often rely on static rules that fail to capture
the dynamic strategies of the biological world. Reinforcement learning has been
proposed as a solution, but most previous methods adopt handcrafted reward
functions that implicitly or explicitly encourage the emergence of swarming
behaviors. In this study, we propose a minimal predator-prey coevolution
framework based on mixed cooperative-competitive multiagent reinforcement
learning, and adopt a reward function that is solely based on the fundamental
survival pressure, that is, prey receive a reward of $-1$ if caught by
predators while predators receive a reward of $+1$. Surprisingly, our analysis
of this approach reveals an unexpectedly rich diversity of emergent behaviors
for both prey and predators, including flocking and swirling behaviors for
prey, as well as dispersion tactics, confusion, and marginal predation
phenomena for predators. Overall, our study provides novel insights into the
collective behavior of organisms and highlights the potential applications in
swarm robotics.
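The survival-pressure reward described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code; the agent representation and the capture radius are assumptions introduced purely for the example.

```python
# Minimal sketch of the survival-pressure reward: prey receive -1 when
# caught, predators receive +1, and no other reward shaping is applied.
from dataclasses import dataclass

CATCH_RADIUS = 0.1  # hypothetical capture distance, not from the paper


@dataclass
class Agent:
    name: str
    x: float
    y: float


def survival_rewards(predators, prey):
    """Per-agent rewards driven solely by capture events."""
    rewards = {a.name: 0.0 for a in predators + prey}
    for q in prey:
        for p in predators:
            dist = ((q.x - p.x) ** 2 + (q.y - p.y) ** 2) ** 0.5
            if dist <= CATCH_RADIUS:
                rewards[q.name] = -1.0  # prey caught by a predator
                rewards[p.name] = +1.0  # predator rewarded for the catch
    return rewards
```

Because the reward depends only on capture events, any flocking, swirling, or dispersion strategy must emerge from the coevolutionary dynamics rather than from handcrafted shaping terms.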
Related papers
- Emergent Collective Reproduction via Evolving Neuronal Flocks [0.0]
This study facilitates the understanding of evolutionary transitions in individuality (ETIs) through a novel artificial life framework, named VitaNova.
VitaNova intricately merges self-organization and natural selection to simulate the emergence of complex, reproductive groups.
arXiv Detail & Related papers (2024-09-20T06:22:24Z)
- Leveraging Human Feedback to Evolve and Discover Novel Emergent Behaviors in Robot Swarms [14.404339094377319]
We seek to leverage human input to automatically discover a taxonomy of collective behaviors that can emerge from a particular multi-agent system.
Our proposed approach adapts to user preferences by learning a similarity space over swarm collective behaviors.
We test our approach in simulation on two robot capability models and show that our methods consistently discover a richer set of emergent behaviors than prior work.
arXiv Detail & Related papers (2023-04-25T15:18:06Z)
- Predicting the long-term collective behaviour of fish pairs with deep learning [52.83927369492564]
This study introduces a deep learning model to assess social interactions in the fish species Hemigrammus rhodostomus.
We compare the results of our deep learning approach to experiments and to the results of a state-of-the-art analytical model.
We demonstrate that machine learning models of social interactions can directly compete with their analytical counterparts on subtle experimental observables.
arXiv Detail & Related papers (2023-02-14T05:25:03Z)
- Learning Complex Spatial Behaviours in ABM: An Experimental Observational Study [0.0]
This paper explores how Reinforcement Learning can be applied to create emergent agent behaviours.
Running a series of simulations, we demonstrate that agents trained using the Proximal Policy Optimisation (PPO) algorithm behave in ways that exhibit properties of real-world intelligent adaptive behaviours.
arXiv Detail & Related papers (2022-01-04T11:56:11Z)
- Information is Power: Intrinsic Control via Information Capture [110.3143711650806]
We argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states.
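The entropy-minimization objective described above can be illustrated with a simple count-based estimate. The paper estimates visitation entropy with a latent state-space model; the discrete-count version below is a stand-in used only to make the objective concrete.

```python
# Hedged sketch: estimate the entropy of an agent's state-visitation
# distribution from visit counts; the intrinsic objective is to minimize it,
# i.e. to make future states predictable and controlled.
import math
from collections import Counter

def visitation_entropy(visited_states):
    """Shannon entropy (nats) of the empirical state-visitation distribution."""
    counts = Counter(visited_states)
    n = len(visited_states)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def intrinsic_reward(visited_states):
    """Negative entropy: concentrated visitation earns higher reward."""
    return -visitation_entropy(visited_states)
```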
arXiv Detail & Related papers (2021-12-07T18:50:42Z)
- Development of collective behavior in newborn artificial agents [0.0]
We use deep reinforcement learning and curiosity-driven learning to build newborn artificial agents that develop collective behavior.
Our agents learn collective behavior without external rewards, using only intrinsic motivation (curiosity) to drive learning.
This work bridges the divide between high-dimensional sensory inputs and collective action, resulting in a pixels-to-actions model of collective animal behavior.
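A common formulation of the curiosity signal mentioned above is forward-model prediction error; the sketch below uses that formulation for illustration only, since the paper's pixels-to-actions architecture is far richer.

```python
# Illustrative curiosity-driven intrinsic reward (assumed formulation, not the
# paper's model): the agent is rewarded by the prediction error of a learned
# forward model, so transitions it cannot yet predict yield high reward.
def curiosity_reward(predicted_next_state, actual_next_state):
    """Intrinsic reward = squared prediction error of the forward model."""
    return sum((p - a) ** 2 for p, a in zip(predicted_next_state, actual_next_state))
```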
arXiv Detail & Related papers (2021-11-06T03:46:31Z)
- Adversarial Visual Robustness by Causal Intervention [56.766342028800445]
Adversarial training is the de facto most promising defense against adversarial examples.
Yet, its passive nature inevitably prevents it from being immune to unknown attackers.
We provide a causal viewpoint of adversarial vulnerability: the cause is the confounder ubiquitously existing in learning.
arXiv Detail & Related papers (2021-06-17T14:23:54Z)
- Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference [71.11416263370823]
We propose a generative inverse reinforcement learning approach for user behavioral preference modelling.
Our model can automatically learn rewards from the user's actions based on a discriminative actor-critic network and a Wasserstein GAN.
arXiv Detail & Related papers (2021-05-03T13:14:25Z)
- To mock a Mocking bird : Studies in Biomimicry [0.342658286826597]
This paper dwells on certain novel game-theoretic investigations in bio-mimicry.
The model is used to study the situation where multi-armed bandit predators with zero prior information are introduced into the ecosystem.
arXiv Detail & Related papers (2021-04-26T09:55:40Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
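The incentive mechanism summarized above can be sketched as follows. The incentive functions here are stand-in callables introduced for illustration; in the paper they are learned networks.

```python
# Hedged sketch: each agent's effective reward is its environment reward plus
# incentives granted to it by other agents' learned incentive functions.
def effective_rewards(env_rewards, incentive_functions):
    """env_rewards: {agent: reward}; incentive_functions: {giver: fn(recipient) -> bonus}."""
    total = dict(env_rewards)
    for giver, incentive in incentive_functions.items():
        for recipient in env_rewards:
            if recipient != giver:  # agents incentivize others, not themselves
                total[recipient] += incentive(recipient)
    return total
```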
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
- Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
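The guiding principle in the last entry above, rewarding effects that single agents cannot achieve alone, can be sketched in a simple scalar form. This is an assumed illustrative formulation, not the paper's exact objective.

```python
# Illustrative synergy bonus: reward joint effects on the environment that
# exceed what the best single agent acting alone would achieve, so purely
# redundant actions earn no intrinsic reward.
def synergy_bonus(joint_effect, solo_effects):
    """Intrinsic reward for effects achievable only through coordination."""
    return max(0.0, joint_effect - max(solo_effects))
```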
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.