Development of swarm behavior in artificial learning agents that adapt
to different foraging environments
- URL: http://arxiv.org/abs/2004.00552v1
- Date: Wed, 1 Apr 2020 16:32:13 GMT
- Title: Development of swarm behavior in artificial learning agents that adapt
to different foraging environments
- Authors: Andrea López-Incera, Katja Ried, Thomas Müller, Hans J. Briegel
- Abstract summary: We apply Projective Simulation to model each individual as an artificial learning agent.
We observe how different types of collective motion emerge depending on the distance the agents need to travel to reach the resources.
In addition, we study the properties of the individual trajectories that occur within the different types of emergent collective dynamics.
- Score: 2.752817022620644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collective behavior, and swarm formation in particular, has been studied from
several perspectives within a large variety of fields, ranging from biology to
physics. In this work, we apply Projective Simulation to model each individual
as an artificial learning agent that interacts with its neighbors and
surroundings in order to make decisions and learn from them. Within a
reinforcement learning framework, we discuss one-dimensional learning scenarios
where agents need to get to food resources to be rewarded. We observe how
different types of collective motion emerge depending on the distance the
agents need to travel to reach the resources. For instance, strongly aligned
swarms emerge when the food source is placed far away from the region where
agents are situated initially. In addition, we study the properties of the
individual trajectories that occur within the different types of emergent
collective dynamics. Agents trained to find distant resources exhibit
individual trajectories with Lévy-like characteristics as a consequence of
the collective motion, whereas agents trained to reach nearby resources present
Brownian-like trajectories.
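A minimal sketch of the Projective Simulation (PS) learning rule the paper builds on may help make the setup concrete. The percept set, action set, and hyperparameter values below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D foraging setup: the percept is a coarse sensory state
# (e.g. "neighbors mostly to the left / right / none") and the actions
# are "move left" / "move right". These choices are illustrative only.
N_PERCEPTS, N_ACTIONS = 3, 2
GAMMA = 0.001  # forgetting: damps h-values back toward 1
ETA = 0.1      # glow damping: spreads a reward over recent decisions
h = np.ones((N_PERCEPTS, N_ACTIONS))  # edge weights of the episodic memory
g = np.zeros_like(h)                  # glow matrix

def act(percept: int) -> int:
    """Sample an action with probability proportional to its h-value."""
    p = h[percept] / h[percept].sum()
    return int(rng.choice(N_ACTIONS, p=p))

def learn(percept: int, action: int, reward: float) -> None:
    """Standard two-layer PS update: damp h toward 1, damp the glow,
    then reinforce all still-glowing percept-action edges."""
    global h
    g[:] *= 1.0 - ETA          # older decisions glow less
    g[percept, action] = 1.0   # the edge just used glows fully
    h = h - GAMMA * (h - 1.0) + reward * g
```

In the paper, many such agents are trained in parallel, with percepts encoding local information about neighbors; the collective motion then emerges from the individually learned policies.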
Related papers
- Behavior-Inspired Neural Networks for Relational Inference [3.7219180084857473]
Recent works learn to categorize relationships between agents based on observations of their physical behavior.
We introduce a level of abstraction between the observable behavior of agents and the latent categories that determine their behavior.
We integrate the physical proximity of agents and their preferences in a nonlinear opinion dynamics model which provides a mechanism to identify mutually exclusive latent categories, predict an agent's evolution in time, and control an agent's physical behavior.
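As a rough sketch of the kind of nonlinear opinion dynamics model referred to here (the coupling matrix, gains, and saturation below follow the generic form of such models and are assumptions, not necessarily this paper's exact equations):

```python
import numpy as np

def opinion_step(z, A, dt=0.01, d=1.0, u=2.0, alpha=1.0, gamma=0.5, b=None):
    """One Euler step of a generic nonlinear opinion dynamics model:
        dz_i/dt = -d*z_i + u*tanh(alpha*z_i + gamma*sum_j A_ij*z_j) + b_i
    A_ij could encode proximity-weighted coupling between agents i and j,
    and b_i a per-agent preference/bias, matching the ingredients above."""
    if b is None:
        b = np.zeros_like(z)
    dz = -d * z + u * np.tanh(alpha * z + gamma * (A @ z)) + b
    return z + dt * dz
```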
arXiv Detail & Related papers (2024-06-20T21:36:54Z)
- Learning Goal-based Movement via Motivational-based Models in Cognitive Mobile Robots [58.720142291102135]
Humans have needs that motivate their behavior with varying intensity depending on context.
We also form preferences associated with each action's perceived pleasure, which can change over time.
This makes decision-making more complex, requiring an agent to learn to balance needs and preferences according to the context.
arXiv Detail & Related papers (2023-02-20T04:52:24Z)
- Behavioral Cloning via Search in Video PreTraining Latent Space [0.13999481573773073]
We formulate our control problem as a search problem over a dataset of experts' demonstrations.
We perform a proximity search over the BASALT MineRL-dataset in the latent representation of a Video PreTraining model.
The agent copies the actions from the selected expert trajectory as long as the agent's state representation does not diverge from that of the expert trajectory.
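A hedged sketch of the copy-until-divergence loop described above; `encode`, the dataset layout, and the threshold are placeholders rather than the actual Video PreTraining interface:

```python
import numpy as np

def latent_search_controller(encode, expert_latents, expert_actions,
                             divergence_threshold=5.0):
    """Imitate the nearest expert trajectory in latent space, re-searching
    whenever the agent's state drifts too far from the expert's."""
    ref = {"traj": None, "t": None}  # index of the step currently imitated

    def policy(observation):
        z = encode(observation)  # placeholder latent encoder
        if ref["traj"] is not None:
            traj, t = ref["traj"], ref["t"] + 1
            if (t < len(expert_latents[traj]) and
                    np.linalg.norm(z - expert_latents[traj][t])
                    < divergence_threshold):
                ref["traj"], ref["t"] = traj, t  # still close: keep copying
                return expert_actions[traj][t]
        # (Re)search: nearest expert state across all demonstrations.
        _, (traj, t) = min(
            (np.linalg.norm(z - zs[t]), (i, t))
            for i, zs in enumerate(expert_latents)
            for t in range(len(zs)))
        ref["traj"], ref["t"] = traj, t
        return expert_actions[traj][t]

    return policy
```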
arXiv Detail & Related papers (2022-12-27T00:20:37Z)
- Collaborative Training of Heterogeneous Reinforcement Learning Agents in Environments with Sparse Rewards: What and When to Share? [7.489793155793319]
This work focuses on combining information obtained through intrinsic motivation, with the aim of achieving more efficient exploration and faster learning.
Our results reveal different ways in which a collaborative framework with little additional computational cost can outperform an independent learning process without knowledge sharing.
arXiv Detail & Related papers (2022-02-24T16:15:51Z)
- Information is Power: Intrinsic Control via Information Capture [110.3143711650806]
We argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states.
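One way to read this objective as an intrinsic reward: if q_theta is a learned (latent) state density, rewarding log q_theta(s_t) pushes down the entropy of the state-visitation distribution. The density model below is a placeholder, not the paper's latent state-space model:

```python
import numpy as np

def make_log_q(mean, std):
    """Illustrative density placeholder: a fitted diagonal Gaussian.
    (The paper uses a learned latent state-space model instead.)"""
    def log_q(s):
        return float(np.sum(-0.5 * ((s - mean) / std) ** 2
                            - np.log(std) - 0.5 * np.log(2.0 * np.pi)))
    return log_q

def intrinsic_reward(log_q, state):
    """Negative surprise: r_t = log q_theta(s_t). Maximizing the sum of
    these rewards lowers the entropy of the visited-state distribution
    as estimated by the model q_theta."""
    return log_q(state)
```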
arXiv Detail & Related papers (2021-12-07T18:50:42Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copulas, powerful statistical tools for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
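A minimal sketch of the marginals-plus-copula factorization using a Gaussian copula (the correlation matrix and marginals below are illustrative; the paper's copula construction may differ):

```python
import numpy as np
from scipy.stats import norm

def sample_joint_actions(corr, marginal_ppfs, rng):
    """Sample coordinated per-agent values from a Gaussian copula:
    `corr` carries only the inter-agent dependence structure, while the
    per-agent marginals (ppf = inverse CDF) carry individual behavior."""
    z = rng.multivariate_normal(np.zeros(len(corr)), corr)  # correlated normals
    u = norm.cdf(z)                    # correlated uniforms in (0, 1)
    return np.array([ppf(ui) for ppf, ui in zip(marginal_ppfs, u)])

# Two agents with different marginals but strongly coordinated behavior:
rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.9],
                 [0.9, 1.0]])
actions = sample_joint_actions(corr, [norm(0, 1).ppf, norm(2, 0.5).ppf], rng)
```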
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
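A hedged sketch of the underlying pattern: encode the other agent's previous interaction into a latent vector and condition the ego policy on it. The network sizes and encoder form are placeholder assumptions:

```python
import torch
import torch.nn as nn

class LatentStrategyEncoder(nn.Module):
    """Encode the other agent's previous interaction (observations and
    actions) into a latent vector z; the ego policy is then conditioned
    on z. All dimensions here are placeholder assumptions."""
    def __init__(self, obs_dim=8, act_dim=2, z_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64),
            nn.ReLU(),
            nn.Linear(64, z_dim),
        )

    def forward(self, other_traj):           # other_traj: (T, obs_dim + act_dim)
        return self.net(other_traj).mean(0)  # pool over the interaction
```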
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
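For context, the self-organizing map at the heart of such a model is trained with the classic Kohonen rule; this minimal sketch omits the state-action-prediction structure specific to the paper:

```python
import numpy as np

def som_update(weights, x, lr=0.1, sigma=1.0):
    """One Kohonen update on a 1-D lattice: pull the best-matching unit
    (and, more weakly, its lattice neighbors) toward the input x.
    weights: (n_units, dim) array of unit prototypes."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    lattice_dist = np.abs(np.arange(len(weights)) - bmu)
    neighborhood = np.exp(-lattice_dist**2 / (2.0 * sigma**2))
    return weights + lr * neighborhood[:, None] * (x - weights)
```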
arXiv Detail & Related papers (2020-07-03T12:29:11Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
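A hedged sketch of the reward-gifting bookkeeping: each agent's effective reward combines its environment reward, incentives received from peers, and the cost of incentives it gives away. The incentive functions below are placeholders; in the paper they are learned functions trained through the recipients' policy updates:

```python
import numpy as np

def total_rewards(env_rewards, incentive_fns, states, actions):
    """r_i = environment reward + incentives received from other agents
    - cost of incentives given away. incentive_fns[i](s_i, a_j) is a
    placeholder for the learned incentive function of agent i."""
    n = len(env_rewards)
    given = np.zeros((n, n))  # given[i, j]: incentive agent i grants agent j
    for i in range(n):
        for j in range(n):
            if i != j:
                given[i, j] = incentive_fns[i](states[i], actions[j])
    return np.array([env_rewards[i] + given[:, i].sum() - given[i, :].sum()
                     for i in range(n)])
```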
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
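A minimal sketch of evaluating that question by masking a random sub-group of observed entities (the value network and feature layout are placeholder assumptions):

```python
import numpy as np

def randomized_entity_utility(value_fn, entity_feats, rng, keep_prob=0.5):
    """Estimate an agent's utility when only a random sub-group of its
    observed entities is considered, by zero-masking the rest.
    value_fn and the feature layout are placeholder assumptions."""
    mask = rng.random(len(entity_feats)) < keep_prob
    mask[0] = True  # assume index 0 is the agent itself; always keep it
    return value_fn(entity_feats * mask[:, None])
```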
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.