Emergent communication enhances foraging behaviour in evolved swarms controlled by Spiking Neural Networks
- URL: http://arxiv.org/abs/2212.08484v2
- Date: Fri, 8 Sep 2023 14:56:33 GMT
- Title: Emergent communication enhances foraging behaviour in evolved swarms controlled by Spiking Neural Networks
- Authors: Cristian Jimenez Romero, Alper Yegenoglu, Aarón Pérez Martín, Sandra Diaz-Pier, Abigail Morrison
- Abstract summary: Social insects such as ants communicate via pheromones, which allow them to coordinate their activity and solve complex tasks as a swarm.
We use an evolutionary algorithm to optimize a spiking neural network (SNN) which serves as an artificial brain to control the behavior of each agent.
We observe that pheromone-based communication enables the ants to perform better than colonies in which communication via pheromones did not emerge.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social insects such as ants communicate via pheromones, which allow them to
coordinate their activity and solve complex tasks as a swarm, e.g. foraging for
food. This behavior was shaped through evolutionary processes. In computational
models, self-coordination in swarms has been implemented using probabilistic or
simple action rules to shape the decision of each agent and the collective
behavior. However, manually tuned decision rules may limit the behavior of the
swarm. In this work we investigate the emergence of self-coordination and
communication in evolved swarms without defining any explicit rule. We evolve a
swarm of agents representing an ant colony. We use an evolutionary algorithm to
optimize a spiking neural network (SNN), which serves as an artificial brain to
control the behavior of each agent. The goal of the evolved colony is to find
optimal ways to forage for food and return it to the nest in the shortest
amount of time. In the evolutionary phase, the ants learn to collaborate by
depositing pheromone near food piles and near the nest to guide other ants.
The pheromone usage is not manually encoded into the network; instead, this
behavior is established through the optimization procedure. We observe that
pheromone-based communication enables the ants to perform better than colonies
in which communication via pheromones did not emerge. We assess the foraging
performance by comparing the SNN-based model to a rule-based system. Our
results show that the SNN-based model can efficiently complete the foraging
task in a short amount of time. Our approach illustrates that self-coordination
via pheromones emerges as a result of the network optimization. This work
serves as a proof of concept for the possibility of creating complex
applications utilizing SNNs as underlying architectures for multi-agent
interactions where communication and self-coordination are desired.
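The abstract describes an evolutionary algorithm that optimizes the weights of an SNN mapping each ant's sensory input to movement and pheromone-deposition outputs. A minimal sketch of such a loop follows; the network layout, the mutation scheme, and the stubbed fitness function are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative dimensions; the paper's actual network layout is not given here.
N_SENSORS = 6   # e.g. food, nest and pheromone signals (assumed)
N_ACTIONS = 4   # e.g. turn left/right, move forward, deposit pheromone (assumed)

rng = np.random.default_rng(0)

def snn_step(weights, spikes_in, v, threshold=1.0, decay=0.9):
    """One timestep of a minimal leaky integrate-and-fire layer.

    weights: (N_SENSORS, N_ACTIONS) matrix; v: membrane potentials.
    Returns output spikes and updated potentials.
    """
    v = decay * v + spikes_in @ weights      # leaky integration of weighted input
    out = (v >= threshold).astype(float)     # neurons fire where threshold is crossed
    v = np.where(out > 0, 0.0, v)            # reset fired neurons
    return out, v

def evaluate_colony(genome, n_steps=200):
    """Stub fitness. A real evaluation would run the full foraging simulation
    with a shared pheromone grid and score food delivered to the nest."""
    weights = genome.reshape(N_SENSORS, N_ACTIONS)
    v = np.zeros(N_ACTIONS)
    activity = 0.0
    for _ in range(n_steps):
        spikes_in = (rng.random(N_SENSORS) < 0.1).astype(float)  # stand-in input
        out, v = snn_step(weights, spikes_in, v)
        activity += out.sum()                # placeholder score, not real fitness
    return activity

def evolve(pop_size=20, generations=50, sigma=0.1):
    """Simple truncation-selection evolution over flattened SNN weights."""
    dim = N_SENSORS * N_ACTIONS
    population = rng.normal(0.0, 1.0, size=(pop_size, dim))
    best, best_score = population[0].copy(), -np.inf
    for _ in range(generations):
        scores = np.array([evaluate_colony(g) for g in population])
        if scores.max() > best_score:
            best, best_score = population[scores.argmax()].copy(), scores.max()
        elite = population[np.argsort(scores)[-max(1, pop_size // 5):]]  # top 20%
        parents = elite[rng.integers(0, len(elite), size=pop_size)]
        population = parents + rng.normal(0.0, sigma, size=(pop_size, dim))
    return best

best_genome = evolve()
```

In this setup the pheromone-deposition output is just one more action channel, so, as in the paper, any coordinated pheromone use would have to emerge from the fitness pressure rather than from a hand-coded rule.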
Related papers
- Spiking Neural Networks as a Controller for Emergent Swarm Agents [8.816729033097868]
Existing research explores the possible emergent behaviors in swarms of robots with only a binary sensor and a simple but hand-picked controller structure.
This paper investigates the feasibility of training spiking neural networks to find those local interaction rules that result in particular emergent behaviors.
arXiv Detail & Related papers (2024-10-21T16:41:35Z)
- A Simulation Environment for the Neuroevolution of Ant Colony Dynamics [0.0]
We introduce a simulation environment to facilitate research into emergent collective behaviour.
By leveraging real-world data, the environment simulates a target ant trail that a controllable agent must learn to replicate.
arXiv Detail & Related papers (2024-06-19T01:51:15Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- Surprise-Adaptive Intrinsic Motivation for Unsupervised Reinforcement Learning [6.937243101289336]
Entropy-minimizing and entropy-maximizing objectives for unsupervised reinforcement learning (RL) have been shown to be effective in different environments.
We propose an agent that can adapt its objective online, depending on the entropy conditions, by framing the choice as a multi-armed bandit problem.
We demonstrate that such agents can learn to control entropy and exhibit emergent behaviors in both high- and low-entropy regimes.
arXiv Detail & Related papers (2024-05-27T14:58:24Z)
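The multi-armed bandit framing in the summary above can be made concrete with a small sketch: a two-armed UCB1 bandit that selects between an entropy-minimizing and an entropy-maximizing objective each episode. The arm names, the UCB1 rule, and the stubbed episode return are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random

# Two hypothetical objectives the agent can pursue in a given episode.
ARMS = ["minimize_entropy", "maximize_entropy"]

counts = [0, 0]      # times each objective was selected
values = [0.0, 0.0]  # running mean of episode return under each objective

def select_arm(t, c=2.0):
    """UCB1: play each arm once, then trade off mean value vs. uncertainty."""
    for i in range(len(ARMS)):
        if counts[i] == 0:
            return i
    return max(range(len(ARMS)),
               key=lambda i: values[i] + c * math.sqrt(math.log(t) / counts[i]))

def run_episode(objective):
    """Stand-in for an RL episode; returns a scalar episodic return.

    A real implementation would train with the chosen intrinsic reward
    (an entropy bonus or penalty) and report task performance.
    """
    return random.gauss(0.5 if objective == "maximize_entropy" else 0.3, 0.1)

for t in range(1, 101):
    arm = select_arm(t)
    reward = run_episode(ARMS[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print("preferred objective:", ARMS[max(range(len(ARMS)), key=values.__getitem__)])
```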
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of the SNN and convert it to a continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
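The firing-rate decoding that the summary above attributes to most existing spike-based RL methods can be sketched in a few lines: count output spikes over a window, estimate per-neuron rates, and map the rates to continuous actions through a dense readout. All dimensions and the window length below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

N_OUT = 8    # output spiking neurons (assumed)
N_ACT = 2    # continuous action dimensions, e.g. joint torques (assumed)
WINDOW = 50  # timesteps over which firing rates are estimated (assumed)

# Dense readout layer: in the scheme summarized above this is a floating-point
# matrix operation, which is exactly what a fully spiking actor avoids.
W_readout = rng.normal(0.0, 0.5, size=(N_OUT, N_ACT))

def decode_actions(spike_train):
    """Map a (WINDOW, N_OUT) binary spike train to continuous actions."""
    rates = spike_train.mean(axis=0)   # firing rate per neuron in [0, 1]
    return np.tanh(rates @ W_readout)  # bounded continuous actions

spikes = (rng.random((WINDOW, N_OUT)) < 0.2).astype(float)  # stand-in spikes
print(decode_actions(spikes))
```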
- DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z)
- Leveraging Human Feedback to Evolve and Discover Novel Emergent Behaviors in Robot Swarms [14.404339094377319]
We seek to leverage human input to automatically discover a taxonomy of collective behaviors that can emerge from a particular multi-agent system.
Our proposed approach adapts to user preferences by learning a similarity space over swarm collective behaviors.
We test our approach in simulation on two robot capability models and show that our methods consistently discover a richer set of emergent behaviors than prior work.
arXiv Detail & Related papers (2023-04-25T15:18:06Z)
- Task-Agnostic Morphology Evolution [94.97384298872286]
Current approaches that co-adapt morphology and behavior use a specific task's reward as a signal for morphology optimization.
This often requires expensive policy optimization and results in task-dependent morphologies that are not built to generalize.
We propose a new approach, Task-Agnostic Morphology Evolution (TAME), to alleviate both of these issues.
arXiv Detail & Related papers (2021-02-25T18:59:21Z)
- Neuroevolution of a Recurrent Neural Network for Spatial and Working Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
arXiv Detail & Related papers (2021-02-25T02:13:52Z)
- Continuous Ant-Based Neural Topology Search [62.200941836913586]
This work introduces a novel, nature-inspired neural architecture search (NAS) algorithm based on ant colony optimization.
The Continuous Ant-based Neural Topology Search (CANTS) is strongly inspired by how ants move in the real world.
arXiv Detail & Related papers (2020-11-21T17:49:44Z)
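The main paper's emergent pheromone trails and the ant-colony-inspired search above share the same stigmergic mechanism: pheromone is deposited where agents succeed, evaporates over time, and biases later agents toward reinforced locations. A generic sketch of that update on a toroidal grid follows; the evaporation rate and deposit amount are illustrative and are not taken from either paper.

```python
import numpy as np

GRID = (20, 20)
EVAPORATION = 0.05  # fraction of pheromone lost per step (assumed)
DEPOSIT = 1.0       # pheromone laid by a successful agent (assumed)

pheromone = np.zeros(GRID)

def step(pheromone, successful_positions):
    """One stigmergy update: evaporate everywhere, deposit at success sites."""
    pheromone = (1.0 - EVAPORATION) * pheromone
    for (x, y) in successful_positions:
        pheromone[x, y] += DEPOSIT
    return pheromone

def sample_move(pheromone, x, y, rng):
    """Choose a 4-neighbour move with probability proportional to pheromone."""
    neighbours = [((x + dx) % GRID[0], (y + dy) % GRID[1])
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    weights = np.array([pheromone[n] + 1e-6 for n in neighbours])  # avoid all-zero
    idx = rng.choice(len(neighbours), p=weights / weights.sum())
    return neighbours[idx]

rng = np.random.default_rng(2)
pos = (10, 10)
for _ in range(100):
    pheromone = step(pheromone, [pos])  # treat the current agent as "successful"
    pos = sample_move(pheromone, *pos, rng)
```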