Emergent behavior and neural dynamics in artificial agents tracking
turbulent plumes
- URL: http://arxiv.org/abs/2109.12434v1
- Date: Sat, 25 Sep 2021 20:57:02 GMT
- Title: Emergent behavior and neural dynamics in artificial agents tracking
turbulent plumes
- Authors: Satpreet Harcharan Singh, Floris van Breugel, Rajesh P. N. Rao, Bingni
Wen Brunton
- Abstract summary: We use deep reinforcement learning to train recurrent neural network (RNN) agents to locate the source of simulated turbulent plumes.
Our analyses suggest an intriguing experimentally testable hypothesis for tracking plumes in changing wind direction.
- Score: 1.8065361710947974
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Tracking a turbulent plume to locate its source is a complex control problem
because it requires multi-sensory integration and must be robust to
intermittent odors, changing wind direction, and variable plume statistics.
This task is routinely performed by flying insects, often over long distances,
in pursuit of food or mates. Several aspects of this remarkable behavior have
been studied in detail experimentally. Here, we take a
complementary in silico approach, using artificial agents trained with
reinforcement learning to develop an integrated understanding of the behaviors
and neural computations that support plume tracking. Specifically, we use deep
reinforcement learning (DRL) to train recurrent neural network (RNN) agents to
locate the source of simulated turbulent plumes. Interestingly, the agents'
emergent behaviors resemble those of flying insects, and the RNNs learn to
represent task-relevant variables, such as head direction and time since last
odor encounter. Our analyses suggest an intriguing experimentally testable
hypothesis for tracking plumes in changing wind direction -- that agents follow
local plume shape rather than the current wind direction. While reflexive
short-memory behaviors are sufficient for tracking plumes in constant wind,
longer timescales of memory are essential for tracking plumes that switch
direction. At the level of neural dynamics, the RNNs' population activity is
low-dimensional and organized into distinct dynamical structures, with some
correspondence to behavioral modules. Our in silico approach provides key
intuitions for turbulent plume tracking strategies and motivates future
targeted experimental and theoretical developments.
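The abstract describes the recipe only at a high level. A minimal sketch of that recipe, assuming a simulated plume environment (stubbed out below) and an actor-critic trainer that is not shown, might look like the following; the observation layout, network sizes, and the PCA step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """RNN agent: egocentric odor and wind observations in, actions out.
    Observation layout and sizes are illustrative assumptions."""
    def __init__(self, obs_dim=3, hidden_dim=64, act_dim=2):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.pi = nn.Linear(hidden_dim, act_dim)  # e.g. turn and move commands
        self.v = nn.Linear(hidden_dim, 1)         # value head for actor-critic DRL

    def forward(self, obs, h=None):
        out, h = self.rnn(obs, h)
        return self.pi(out), self.v(out), h

policy = RecurrentPolicy()
h, hidden_states = None, []
for t in range(200):
    # Placeholder observation [odor, wind_x, wind_y]; a real run would step
    # a turbulent-plume simulator here and update the policy with DRL.
    obs = torch.zeros(1, 1, 3)
    action, value, h = policy(obs, h)
    hidden_states.append(h.squeeze().detach().numpy())

# "Population activity is low-dimensional": PCA via SVD on hidden states.
X = np.stack(hidden_states)          # (T, hidden_dim)
X -= X.mean(axis=0)
_, S, _ = np.linalg.svd(X, full_matrices=False)
var_explained = S**2 / (S**2).sum()
print("variance explained by top 3 PCs:", float(var_explained[:3].sum()))
```

The PCA step mirrors the kind of dimensionality analysis used to expose low-dimensional structure in the RNN's population activity; head direction or time since last odor encounter would be read out from such projections.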
Related papers
- Dynamic Reinforcement Learning for Actors [0.0]
Dynamic Reinforcement Learning (Dynamic RL) directly controls system dynamics rather than the outputs of the actor (the action-generating neural network) at each moment.
The actor is initially designed to generate chaotic dynamics through its loop with the environment.
Dynamic RL controls global system dynamics using a local index called "sensitivity".
arXiv Detail & Related papers (2025-02-14T14:50:05Z)
- A Neuromorphic Approach to Obstacle Avoidance in Robot Manipulation [16.696524554516294]
We develop a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator.
Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN.
Our results motivate incorporating SNN learning, utilizing neuromorphic processors, and further exploring the potential of neuromorphic methods.
arXiv Detail & Related papers (2024-04-08T20:42:10Z)
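As a rough illustration of the event-driven processing such a neuromorphic controller relies on, here is a minimal leaky integrate-and-fire (LIF) layer in numpy. The function name, time constants, and threshold are placeholder assumptions, not the paper's architecture.

```python
import numpy as np

def lif_step(v, spikes_in, w, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """One step of a leaky integrate-and-fire layer: membranes leak toward
    rest, integrate weighted input spike events, fire on threshold, reset."""
    i_syn = w @ spikes_in                      # synaptic current from input events
    v = v + (dt / tau) * (-v) + i_syn          # leaky integration
    spikes_out = (v >= v_th).astype(float)     # event-based output
    v = np.where(spikes_out > 0, v_reset, v)   # reset neurons that fired
    return v, spikes_out

# Example: 8 LIF neurons driven by 16 event channels (e.g. emulated camera events)
rng = np.random.default_rng(0)
w = rng.normal(scale=0.3, size=(8, 16))
v = np.zeros(8)
for _ in range(100):
    events = (rng.random(16) < 0.1).astype(float)  # sparse input events
    v, out = lif_step(v, events, w)
```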
- Towards Deviation-Robust Agent Navigation via Perturbation-Aware Contrastive Learning [125.61772424068903]
Vision-and-language navigation (VLN) asks an agent to follow a given language instruction to navigate through a real 3D environment.
We present a model-agnostic training paradigm, called Progressive Perturbation-aware Contrastive Learning (PROPER), to enhance the generalization ability of existing VLN agents.
arXiv Detail & Related papers (2024-03-09T02:34:13Z)
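The PROPER objective itself is not spelled out in this summary. One plausible reading of "perturbation-aware contrastive learning" is a generic InfoNCE-style loss that treats the encoding of a perturbed trajectory as the positive for its clean anchor; the sketch below is that generic form, with a hypothetical function name and tensor shapes, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def perturbation_contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the encoding of a perturbed trajectory
    (positive) toward its clean anchor; push other trajectories (negatives)
    away.  anchor: (B, D), positive: (B, D), negatives: (B, K, D)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = (anchor * positive).sum(-1, keepdim=True) / temperature        # (B, 1)
    neg = torch.einsum('bd,bkd->bk', anchor, negatives) / temperature    # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive is class 0
    return F.cross_entropy(logits, labels)
```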
- Investigating Navigation Strategies in the Morris Water Maze through Deep Reinforcement Learning [4.408196554639971]
In this work, we simulate the Morris Water Maze in 2D to train deep reinforcement learning agents.
We automatically classify navigation strategies, analyze the distribution of strategies used by artificial agents, and compare them with experimental data, showing learning dynamics similar to those seen in humans and rodents.
arXiv Detail & Related papers (2023-06-01T18:16:16Z)
- From Data-Fitting to Discovery: Interpreting the Neural Dynamics of Motor Control through Reinforcement Learning [3.6159844753873087]
We study structured neural activity of a virtual robot performing legged locomotion.
We find that embodied agents trained to walk exhibit smooth dynamics that avoid tangling, that is, opposing neural trajectories passing through neighboring regions of neural state space.
arXiv Detail & Related papers (2023-05-18T16:52:27Z)
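"Tangling" is commonly quantified with a trajectory-tangling index in the spirit of Russo et al. (2018): for each time point, the ratio of derivative distance to state distance, maximized over all other time points. A minimal numpy sketch, where the eps stabilizer is a heuristic assumption:

```python
import numpy as np

def tangling(X, dt=1.0, eps=None):
    """Trajectory tangling Q(t): large when states that are close in neural
    state space move in very different directions, i.e. trajectories cross
    or oppose each other.  X: (T, N) population activity over time."""
    dX = np.gradient(X, dt, axis=0)                              # state derivatives
    if eps is None:
        eps = 0.1 * X.var()                                      # stabilizing constant
    dist_x = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)      # pairwise |x_t - x_t'|^2
    dist_dx = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)   # pairwise |dx_t - dx_t'|^2
    return (dist_dx / (dist_x + eps)).max(axis=1)                # max over t' for each t
```

Smooth, untangled dynamics yield uniformly low Q(t); crossing or opposing trajectories produce large spikes.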
- Generative Adversarial Neuroevolution for Control Behaviour Imitation [3.04585143845864]
We explore whether deep neuroevolution can be used for behaviour imitation in popular simulation environments.
We introduce a simple co-evolutionary adversarial generation framework and evaluate its capabilities by evolving standard deep recurrent networks.
Across all tasks, we find that the final elite actors achieve scores as high as those obtained by the pre-trained agents.
arXiv Detail & Related papers (2023-04-03T16:33:22Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
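The summary does not give the ANGC update rules, but the general backprop-free pattern it builds on, predictive coding with purely local updates, can be sketched as follows. The `settle_and_learn` helper, learning rates, and single-layer setup are illustrative assumptions rather than the paper's method.

```python
import numpy as np

def settle_and_learn(y, W, n_steps=20, lr_z=0.1, lr_w=0.01):
    """One predictive-coding update with purely local signals, no backprop:
    latents z settle to reduce the prediction error e = y - W z, then the
    weights take a Hebbian-like step proportional to (error x activity)."""
    z = np.zeros(W.shape[1])
    for _ in range(n_steps):
        e = y - W @ z              # local prediction error
        z += lr_z * (W.T @ e)      # inference: adjust latents to explain y
    e = y - W @ z
    W = W + lr_w * np.outer(e, z)  # local weight update from error and activity
    return W, z

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 8))   # generative map: 8 latents -> 16 outputs
y = rng.normal(size=16)                   # an observation to be explained
W, z = settle_and_learn(y, W)
```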
- What is Going on Inside Recurrent Meta Reinforcement Learning Agents? [63.58053355357644]
Recurrent meta reinforcement learning (meta-RL) agents employ a recurrent neural network (RNN) for the purpose of "learning a learning algorithm".
We shed light on the internal working mechanisms of these agents by reformulating the meta-RL problem using the Partially Observable Markov Decision Process (POMDP) framework.
arXiv Detail & Related papers (2021-04-29T20:34:39Z)
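A concrete way to see the "learning a learning algorithm" framing: in the standard RL^2-style setup, the RNN receives the previous action and reward along with the current observation, so its hidden state can accumulate task information as POMDP memory. A minimal sketch; the class name and sizes are placeholders.

```python
import torch
import torch.nn as nn

class MetaRLAgent(nn.Module):
    """RL^2-style recurrent meta-RL agent: feeding the previous action and
    reward back into the RNN lets its hidden state act as POMDP memory that
    implements a learning algorithm within each episode."""
    def __init__(self, obs_dim=4, act_dim=2, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim + act_dim + 1, hidden_dim)
        self.head = nn.Linear(hidden_dim, act_dim)

    def step(self, obs, prev_action_onehot, prev_reward, h):
        x = torch.cat([obs, prev_action_onehot, prev_reward], dim=-1)
        h = self.rnn(x, h)                 # hidden state = belief over the task
        return self.head(h), h

agent = MetaRLAgent()
h = torch.zeros(1, 32)
scores, h = agent.step(torch.zeros(1, 4), torch.zeros(1, 2), torch.zeros(1, 1), h)
```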
- Neuroevolution of a Recurrent Neural Network for Spatial and Working Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
arXiv Detail & Related papers (2021-02-25T02:13:52Z)
- Adaptive Rational Activations to Boost Deep Reinforcement Learning [68.10769262901003]
We motivate why rational functions are suitable as adaptable activation functions and why their inclusion in neural networks is crucial.
We demonstrate that equipping popular algorithms with (recurrent-)rational activations leads to consistent improvements on Atari games.
arXiv Detail & Related papers (2021-02-18T14:53:12Z)
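A rational activation is a ratio of polynomials with learnable coefficients. The sketch below follows the common Pade-activation-unit stabilization of keeping the denominator positive; the class name, polynomial degrees, and initialization are illustrative, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable rational activation f(x) = P(x) / Q(x), with the denominator
    kept positive via abs() so the function has no poles on the real line."""
    def __init__(self, deg_p=5, deg_q=4):
        super().__init__()
        self.a = nn.Parameter(0.1 * torch.randn(deg_p + 1))  # numerator coefficients
        self.b = nn.Parameter(0.1 * torch.randn(deg_q))      # denominator coefficients

    def forward(self, x):
        p = sum(a_i * x**i for i, a_i in enumerate(self.a))
        q = 1.0 + torch.abs(sum(b_j * x**(j + 1) for j, b_j in enumerate(self.b)))
        return p / q

act = RationalActivation()
y = act(torch.linspace(-3, 3, 7))   # drop-in replacement for a fixed nonlinearity
```

Because the coefficients are trained alongside the network weights, the nonlinearity itself adapts to the task rather than being fixed in advance.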
- Tracking Emotions: Intrinsic Motivation Grounded on Multi-Level Prediction Error Dynamics [68.8204255655161]
We discuss how emotions arise when differences between expected and actual rates of progress towards a goal are experienced.
We present an intrinsic motivation architecture that generates behaviors towards self-generated and dynamic goals.
arXiv Detail & Related papers (2020-07-29T06:53:13Z)
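One minimal reading of grounding intrinsic motivation in prediction-error dynamics: reward the rate of progress (how quickly prediction error falls) rather than the error itself. A toy sketch; the function name and window size are arbitrary assumptions.

```python
import numpy as np

def progress_reward(errors, window=5):
    """Intrinsic reward from prediction-error *dynamics*: reward how quickly
    prediction error is falling, not low error per se, so behavior is driven
    toward goals where progress is currently being made."""
    errors = np.asarray(errors, dtype=float)
    if errors.size < 2 * window:
        return 0.0
    recent = errors[-window:].mean()
    earlier = errors[-2 * window:-window].mean()
    return earlier - recent   # positive while the error is shrinking
```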
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.