Investigating Navigation Strategies in the Morris Water Maze through Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2306.01066v2
- Date: Wed, 8 Nov 2023 03:08:57 GMT
- Title: Investigating Navigation Strategies in the Morris Water Maze through Deep Reinforcement Learning
- Authors: Andrew Liu, Alla Borisyuk
- Abstract summary: In this work, we simulate the Morris Water Maze in 2D to train deep reinforcement learning agents.
We perform automatic classification of navigation strategies, analyze the distribution of strategies used by artificial agents, and compare them with experimental data, showing learning dynamics similar to those seen in humans and rodents.
- Score: 4.408196554639971
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Navigation is a complex skill with a long history of research in animals and
humans. In this work, we simulate the Morris Water Maze in 2D to train deep
reinforcement learning agents. We perform automatic classification of
navigation strategies, analyze the distribution of strategies used by
artificial agents, and compare them with experimental data to show learning
dynamics similar to those seen in humans and rodents. We develop
environment-specific auxiliary tasks and examine factors affecting their
usefulness. We suggest that the most beneficial tasks are potentially more
biologically feasible for real agents to use. Lastly, we explore the
development of internal representations in the activations of artificial agent
neural networks. These representations resemble place cells and head-direction
cells found in mouse brains, and their presence correlates with the
navigation strategies that artificial agents employ.
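To make the training setup concrete, below is a minimal sketch of how a 2D Morris Water Maze environment for reinforcement learning might look. The class name, observation, action set, and reward shaping are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

class WaterMaze2D:
    """Toy 2D Morris Water Maze: a circular pool with a hidden platform.
    The observation, actions, and reward here are illustrative assumptions."""

    def __init__(self, pool_radius=1.0, platform_radius=0.1, max_steps=200, seed=0):
        self.pool_radius = pool_radius
        self.platform_radius = platform_radius
        self.max_steps = max_steps
        self.rng = np.random.default_rng(seed)
        self.platform = np.array([0.35, 0.35]) * pool_radius  # fixed hidden goal

    def reset(self):
        # Start near the pool wall at a random angle, facing a random direction.
        angle = self.rng.uniform(0.0, 2.0 * np.pi)
        self.pos = 0.95 * self.pool_radius * np.array([np.cos(angle), np.sin(angle)])
        self.heading = self.rng.uniform(0.0, 2.0 * np.pi)
        self.t = 0
        return self._obs()

    def step(self, action):
        # Discrete actions: 0 = turn left, 1 = turn right, 2 = swim forward.
        if action == 0:
            self.heading += 0.2
        elif action == 1:
            self.heading -= 0.2
        else:
            new_pos = self.pos + 0.05 * np.array([np.cos(self.heading), np.sin(self.heading)])
            if np.linalg.norm(new_pos) < self.pool_radius:  # the wall blocks movement
                self.pos = new_pos
        self.t += 1
        reached = np.linalg.norm(self.pos - self.platform) < self.platform_radius
        reward = 1.0 if reached else -0.01  # per-step cost rewards direct paths
        done = reached or self.t >= self.max_steps
        return self._obs(), reward, done

    def _obs(self):
        # Egocentric observation: heading (as sin/cos) and distance from the wall.
        return np.array([np.sin(self.heading), np.cos(self.heading),
                         self.pool_radius - np.linalg.norm(self.pos)])
```

An agent trained against this reset/step loop could then be probed for its navigation strategies and internal representations.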
Related papers
- A transformer-based deep reinforcement learning approach to spatial navigation in a partially observable Morris Water Maze [0.0]
This work applies a transformer-based architecture using deep reinforcement learning to navigate a 2D version of the Morris Water Maze.
We demonstrate that the proposed architecture enables the agent to efficiently learn spatial navigation strategies.
This work suggests promising avenues for future research in artificial agents whose behavior resembles that of biological agents.
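As a rough illustration of the idea, the sketch below shows a policy network that self-attends over a window of recent observations, in the spirit of a transformer-based agent for a partially observable maze. All layer sizes and names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TransformerPolicy(nn.Module):
    """Sketch of a policy that attends over a window of past observations.
    Dimensions and layer counts are illustrative assumptions."""

    def __init__(self, obs_dim=3, n_actions=3, d_model=64, n_heads=4,
                 n_layers=2, max_len=50):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_actions)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim) window of recent observations.
        x = self.embed(obs_seq) + self.pos[:, : obs_seq.size(1)]
        x = self.encoder(x)            # self-attention integrates the history
        return self.head(x[:, -1])     # action logits from the latest timestep
```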
arXiv Detail & Related papers (2024-10-01T13:22:56Z)
- A Role of Environmental Complexity on Representation Learning in Deep Reinforcement Learning Agents [3.7314353481448337]
We developed a simulated navigation environment to train deep reinforcement learning agents.
We modulated the frequency of exposure to a shortcut and navigation cue, leading to the development of artificial agents with differing abilities.
We examined the encoded representations in artificial neural networks driving these agents, revealing intricate dynamics in representation learning.
arXiv Detail & Related papers (2024-07-03T18:27:26Z) - Emergence of Chemotactic Strategies with Multi-Agent Reinforcement Learning [1.9253333342733674]
We investigate whether reinforcement learning can provide insights into biological systems when trained to perform chemotaxis.
We run simulations covering a range of agent shapes, sizes, and swim speeds to determine if the physical constraints on biological swimmers, namely Brownian motion, lead to regions where reinforcement learners' training fails.
We find that RL agents can perform chemotaxis as soon as it is physically possible and, in some cases, even before the active swimming overpowers the environment.
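A toy version of the physics described here can be written as a single update that mixes deliberate steering, active swimming, and Brownian noise; when the diffusion terms dominate the swim speed, the agent's actions stop mattering. The constants and function name below are illustrative assumptions.

```python
import numpy as np

def chemotaxis_step(pos, heading, swim_speed, action_turn, rng,
                    dt=0.1, rot_diffusion=0.5, trans_diffusion=0.05):
    """One update of a toy chemotactic swimmer: RL-chosen steering plus
    Brownian motion. Diffusion constants are illustrative assumptions."""
    # Deliberate steering chosen by the agent, plus rotational Brownian noise.
    heading = heading + action_turn * dt + np.sqrt(2 * rot_diffusion * dt) * rng.normal()
    # Active swimming along the heading, plus translational Brownian noise.
    drift = swim_speed * dt * np.array([np.cos(heading), np.sin(heading)])
    noise = np.sqrt(2 * trans_diffusion * dt) * rng.normal(size=2)
    return pos + drift + noise, heading
```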
arXiv Detail & Related papers (2024-04-02T14:42:52Z)
- Learning Navigational Visual Representations with Semantic Map Supervision [85.91625020847358]
We propose a navigational-specific visual representation learning method by contrasting the agent's egocentric views and semantic maps.
Ego^2-Map learning transfers the compact and rich information from a map, such as objects, structure and transition, to the agent's egocentric representations for navigation.
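The general pattern of contrasting egocentric views against their semantic maps can be sketched as an InfoNCE-style objective, as below; this is a generic contrastive-learning sketch, not the paper's exact Ego^2-Map formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(ego_emb, map_emb, temperature=0.1):
    """InfoNCE-style objective pairing each egocentric-view embedding with
    the embedding of its corresponding semantic map; other maps in the
    batch act as negatives. A generic sketch, not the paper's method."""
    ego = F.normalize(ego_emb, dim=-1)   # (batch, dim)
    maps = F.normalize(map_emb, dim=-1)  # (batch, dim)
    logits = ego @ maps.t() / temperature           # pairwise similarities
    targets = torch.arange(ego.size(0), device=ego.device)
    return F.cross_entropy(logits, targets)         # match view i to map i
```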
arXiv Detail & Related papers (2023-07-23T14:01:05Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Generative Adversarial Neuroevolution for Control Behaviour Imitation [3.04585143845864]
We propose to explore whether deep neuroevolution can be used for behaviour imitation on popular simulation environments.
We introduce a simple co-evolutionary adversarial generation framework, and evaluate its capabilities by evolving standard deep recurrent networks.
Across all tasks, we find the final elite actor agents capable of achieving scores as high as those obtained by the pre-trained agents.
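One generation of the underlying neuroevolution loop might look like the sketch below, where each individual is a flat parameter vector for a recurrent policy; in the co-evolutionary adversarial setting, the fitness values would come from how well an actor's trajectories fool a co-evolved discriminator population. The hyperparameters and function name are illustrative assumptions.

```python
import numpy as np

def evolve_population(pop, fitness, rng, elite_frac=0.2, noise_std=0.02):
    """One generation of simple neuroevolution: keep the elites and refill
    the population with Gaussian-perturbed copies of them. Each individual
    is a flat parameter vector; hyperparameters are illustrative."""
    order = np.argsort(fitness)[::-1]                 # best first
    n_elite = max(1, int(elite_frac * len(pop)))
    elites = [pop[i] for i in order[:n_elite]]
    children = [elites[rng.integers(n_elite)]
                + noise_std * rng.normal(size=elites[0].shape)
                for _ in range(len(pop) - n_elite)]
    return elites + children
```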
arXiv Detail & Related papers (2023-04-03T16:33:22Z)
- Emergence of Maps in the Memories of Blind Navigation Agents [68.41901534985575]
Animal navigation research posits that organisms build and maintain internal spatial representations, or maps, of their environment.
We ask if machines -- specifically, artificial intelligence (AI) navigation agents -- also build implicit (or 'mental') maps.
Unlike animal navigation, we can judiciously design the agent's perceptual system and control the learning paradigm to nullify alternative navigation mechanisms.
arXiv Detail & Related papers (2023-01-30T20:09:39Z)
- Navigating to Objects in the Real World [76.1517654037993]
We present a large-scale empirical study of semantic visual navigation methods comparing methods from classical, modular, and end-to-end learning approaches.
We find that modular learning works well in the real world, attaining a 90% success rate.
In contrast, end-to-end learning does not, dropping from a 77% success rate in simulation to 23% in the real world due to a large image domain gap between simulation and reality.
arXiv Detail & Related papers (2022-12-02T01:10:47Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
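As a rough intuition for learning without backprop, the sketch below shows a single predictive-coding-style update in which a local prediction error drives both the state and the weights; this is a generic predictive-coding sketch, not the paper's exact active neural generative coding updates.

```python
import numpy as np

def predictive_coding_step(x, z, W, lr_z=0.1, lr_w=0.01):
    """One local update in a toy predictive-coding layer: the latent state z
    generates a prediction W @ z of the input x, and the prediction error
    updates both state and weights without backpropagated gradients.
    A generic sketch; learning rates are illustrative assumptions."""
    pred = W @ z                      # top-down prediction of the input
    err = x - pred                    # local prediction error
    z = z + lr_z * (W.T @ err)        # state settles to reduce the error
    W = W + lr_w * np.outer(err, z)   # Hebbian-like local weight update
    return z, W, err
```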
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- What is Going on Inside Recurrent Meta Reinforcement Learning Agents? [63.58053355357644]
Recurrent meta reinforcement learning (meta-RL) agents employ a recurrent neural network (RNN) for the purpose of "learning a learning algorithm".
We shed light on the internal working mechanisms of these agents by reformulating the meta-RL problem using the Partially Observable Markov Decision Process (POMDP) framework.
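In this POMDP view, the RNN's hidden state plays the role of a belief over the current task. Below is a minimal sketch of such an agent, which feeds the previous action and reward back into the recurrence so the hidden state can summarize task-relevant history; all sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RecurrentMetaPolicy(nn.Module):
    """Sketch of a recurrent meta-RL agent: a GRU consumes the previous
    action and reward alongside the observation, so its hidden state can
    act as a belief state in the POMDP view. Sizes are illustrative."""

    def __init__(self, obs_dim=3, n_actions=3, hidden=128):
        super().__init__()
        self.gru = nn.GRU(obs_dim + n_actions + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs, prev_action_onehot, prev_reward, h=None):
        # Inputs: (batch, time, .). Concatenating (obs, a_{t-1}, r_{t-1})
        # lets the hidden state summarize task-relevant history.
        x = torch.cat([obs, prev_action_onehot, prev_reward], dim=-1)
        out, h = self.gru(x, h)
        return self.head(out), h   # per-step action logits and carried state
```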
arXiv Detail & Related papers (2021-04-29T20:34:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.