Development of collective behavior in newborn artificial agents
- URL: http://arxiv.org/abs/2111.03796v1
- Date: Sat, 6 Nov 2021 03:46:31 GMT
- Title: Development of collective behavior in newborn artificial agents
- Authors: Donsuk Lee, Samantha M. W. Wood, Justin N. Wood
- Abstract summary: We use deep reinforcement learning and curiosity-driven learning to build newborn artificial agents that develop collective behavior.
Our agents learn collective behavior without external rewards, using only intrinsic motivation (curiosity) to drive learning.
This work bridges the divide between high-dimensional sensory inputs and collective action, resulting in a pixels-to-actions model of collective animal behavior.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collective behavior is widespread across the animal kingdom. To date,
however, the developmental and mechanistic foundations of collective behavior
have not been formally established. What learning mechanisms drive the
development of collective behavior in newborn animals? Here, we used deep
reinforcement learning and curiosity-driven learning -- two learning mechanisms
deeply rooted in psychological and neuroscientific research -- to build newborn
artificial agents that develop collective behavior. Like newborn animals, our
agents learn collective behavior from raw sensory inputs in naturalistic
environments. Our agents also learn collective behavior without external
rewards, using only intrinsic motivation (curiosity) to drive learning.
Specifically, when we raise our artificial agents in natural visual
environments with groupmates, the agents spontaneously develop ego-motion,
object recognition, and a preference for groupmates, rapidly learning all of
the core skills required for collective behavior. This work bridges the divide
between high-dimensional sensory inputs and collective action, resulting in a
pixels-to-actions model of collective animal behavior. More generally, we show
that two generic learning mechanisms -- deep reinforcement learning and
curiosity-driven learning -- are sufficient to learn collective behavior from
unsupervised natural experience.
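The abstract does not spell out the exact curiosity formulation, but a common choice in curiosity-driven deep RL is to use the prediction error of a learned forward model as the intrinsic reward (in the spirit of Pathak et al.'s Intrinsic Curiosity Module). The PyTorch sketch below is a minimal illustration of that idea only; the encoder architecture, feature dimension, and scaling constant eta are assumptions for the example, not the authors' implementation.
```python
import torch
import torch.nn as nn

class CuriosityReward(nn.Module):
    """Forward-model prediction error as an intrinsic reward (ICM-style sketch).

    NOT the paper's implementation: architecture and constants are assumed.
    """

    def __init__(self, obs_channels=3, n_actions=4, feat_dim=128, eta=0.5):
        super().__init__()
        self.eta = eta  # assumed scaling of prediction error into reward
        # Encoder: raw pixels -> compact feature vector (architecture assumed)
        self.encoder = nn.Sequential(
            nn.Conv2d(obs_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(feat_dim),
        )
        # Forward model: (features of s_t, one-hot a_t) -> predicted features of s_{t+1}
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + n_actions, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, obs, action_onehot, next_obs):
        phi_next = self.encoder(next_obs)
        phi_pred = self.forward_model(
            torch.cat([self.encoder(obs), action_onehot], dim=1)
        )
        # Intrinsic reward: squared prediction error ("surprise") per transition.
        # The target features are detached so the loss trains the forward
        # model's prediction rather than pulling the target toward it.
        error = 0.5 * (phi_pred - phi_next.detach()).pow(2).sum(dim=1)
        intrinsic_reward = (self.eta * error).detach()
        # The same error, averaged, serves as the forward-model training loss
        return intrinsic_reward, error.mean()

# Hypothetical usage on a batch of transitions (84x84 RGB observations)
curiosity = CuriosityReward()
obs = torch.randn(16, 3, 84, 84)
next_obs = torch.randn(16, 3, 84, 84)
actions = torch.eye(4)[torch.randint(0, 4, (16,))]  # one-hot actions
reward, fwd_loss = curiosity(obs, actions, next_obs)
```
In the reward-free setting the abstract describes, an RL algorithm such as PPO would maximize this intrinsic reward alone, so the agent is driven purely by how poorly it can predict the consequences of its own actions.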
Related papers
- The Role of Higher-Order Cognitive Models in Active Learning [8.847360368647752]
We advocate for a new paradigm for active learning for human feedback.
We discuss how an increasing level of agency results in qualitatively different forms of rational communication between an active learning system and a teacher.
arXiv Detail & Related papers (2024-01-09T07:39:36Z)
- Predator-prey survival pressure is sufficient to evolve swarming behaviors [22.69193229479221]
We propose a minimal predator-prey coevolution framework based on mixed cooperative-competitive multi-agent reinforcement learning.
Surprisingly, our analysis of this approach reveals an unexpectedly rich diversity of emergent behaviors for both prey and predators (a hypothetical reward sketch follows this list).
arXiv Detail & Related papers (2023-08-24T08:03:11Z)
- Developmental Curiosity and Social Interaction in Virtual Agents [2.8894038270224858]
We create a virtual infant agent and place it in a developmentally-inspired 3D environment with no external rewards.
We test intrinsic reward functions that are similar to motivations that have been proposed to drive exploration in humans.
We find that learning a world model in the presence of an attentive caregiver helps the infant agent learn how to predict scenarios.
arXiv Detail & Related papers (2023-05-22T18:17:07Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Learning Goal-based Movement via Motivational-based Models in Cognitive Mobile Robots [58.720142291102135]
Humans have needs that motivate their behavior according to intensity and context.
We also create preferences associated with each action's perceived pleasure, which is susceptible to change over time.
This makes decision-making more complex, requiring learning to balance needs and preferences according to the context.
arXiv Detail & Related papers (2023-02-20T04:52:24Z)
- Predicting the long-term collective behaviour of fish pairs with deep learning [52.83927369492564]
This study introduces a deep learning model to assess social interactions in the fish species Hemigrammus rhodostomus.
We compare the results of our deep learning approach to experiments and to the results of a state-of-the-art analytical model.
We demonstrate that machine learning models of social interactions can compete directly with their analytical counterparts on subtle experimental observables.
arXiv Detail & Related papers (2023-02-14T05:25:03Z)
- Intrinsically Motivated Learning of Causal World Models [0.0]
A promising direction is to build world models capturing the true physical mechanisms hidden behind the sensorimotor interaction with the environment.
Inferring the causal structure of the environment could benefit from well-chosen actions as means to collect relevant interventional data.
arXiv Detail & Related papers (2022-08-09T16:48:28Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL), in which agents are self-motivated to learn novel knowledge, has become increasingly popular.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Deep reinforcement learning models the emergent dynamics of human cooperation [13.425401489679583]
Experimental research has been unable to shed light on how social cognitive mechanisms contribute to the where and when of collective action.
We leverage multi-agent deep reinforcement learning to model how a social-cognitive mechanism -- specifically, the intrinsic motivation to achieve a good reputation -- steers group behavior.
arXiv Detail & Related papers (2021-03-08T18:58:40Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
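The predator-prey entry above attributes swarming to nothing more than survival pressure encoded in the reward. A minimal, hypothetical sketch of such a mixed cooperative-competitive reward follows; the magnitudes and catch radius are illustrative assumptions, not the paper's values.
```python
import math

CATCH_RADIUS = 0.5  # assumed contact distance that counts as a "catch"

def survival_rewards(predators, prey):
    """Per-step rewards under pure survival pressure (illustrative values).

    predators, prey: lists of (x, y) positions, one per agent.
    Returns (predator_rewards, prey_rewards) as lists of floats.
    """
    prey_rewards = []
    pred_rewards = [0.0] * len(predators)
    for p in prey:
        caught = any(math.dist(p, q) < CATCH_RADIUS for q in predators)
        # Prey: small bonus for surviving the step, large penalty when caught
        prey_rewards.append(-1.0 if caught else 0.1)
        if caught:
            # Predators: catch bonus shared across the team (cooperative part)
            pred_rewards = [r + 1.0 / len(predators) for r in pred_rewards]
    # Predators pay a small time cost each step, pressuring them to hunt
    pred_rewards = [r - 0.05 for r in pred_rewards]
    return pred_rewards, prey_rewards
```
Under a signal like this, any swarming that appears is emergent rather than directly rewarded, which is that entry's central claim.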
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.