Brain-Like Replay Naturally Emerges in Reinforcement Learning Agents
- URL: http://arxiv.org/abs/2402.01467v1
- Date: Fri, 2 Feb 2024 14:55:51 GMT
- Title: Brain-Like Replay Naturally Emerges in Reinforcement Learning Agents
- Authors: Jiyi Wang, Likai Tang, Huimiao Chen, Sen Song
- Abstract summary: We discover naturally emergent replay under a task-optimized paradigm using a recurrent neural network-based reinforcement learning model.
Our work provides a new avenue for understanding the mechanisms behind replay.
- Score: 4.603243771244471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Can replay, as a widely observed neural activity pattern in brain regions,
particularly in the hippocampus and neocortex, emerge in an artificial agent?
If so, does it contribute to task performance? In this work, without heavy reliance
on complex assumptions, we discover naturally emergent replay under a
task-optimized paradigm using a recurrent neural network-based reinforcement
learning model, which mimics the hippocampus and prefrontal cortex, as well as
their intercommunication and input from the sensory cortex. The emergent replay in
the hippocampus, which results from episodic memory, the cognitive map, and
environment observations, closely resembles animal experimental data and
serves as an effective indicator of high task performance. The model also
successfully reproduces local and nonlocal replay, matching human
experimental data. Our work provides a new avenue for understanding the
mechanisms behind replay.
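The architecture the abstract describes can be sketched as two coupled recurrent modules exchanging hidden states. The following is a minimal, hypothetical illustration of that kind of design, not the authors' code; all class and weight names (`TwoModuleRNN`, `W_pf_to_hpc`, etc.) are assumptions introduced for clarity:

```python
# Hypothetical sketch (not the authors' code): two coupled recurrent
# modules standing in for the hippocampus (HPC) and prefrontal cortex
# (PFC), with sensory input driving HPC and cross-connections modeling
# their intercommunication.
import numpy as np

rng = np.random.default_rng(0)

class TwoModuleRNN:
    def __init__(self, obs_dim, hidden_dim):
        s = 1.0 / np.sqrt(hidden_dim)
        # Input and recurrent weights; the cross terms carry
        # the HPC <-> PFC intercommunication.
        self.W_in_hpc = rng.normal(0, s, (hidden_dim, obs_dim))
        self.W_hpc = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.W_pf_to_hpc = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.W_pf = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.W_hpc_to_pf = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.h_hpc = np.zeros(hidden_dim)
        self.h_pf = np.zeros(hidden_dim)

    def step(self, obs):
        # Sensory input reaches HPC; each module also reads the other.
        new_hpc = np.tanh(self.W_in_hpc @ obs + self.W_hpc @ self.h_hpc
                          + self.W_pf_to_hpc @ self.h_pf)
        new_pf = np.tanh(self.W_pf @ self.h_pf
                         + self.W_hpc_to_pf @ self.h_hpc)
        self.h_hpc, self.h_pf = new_hpc, new_pf
        return new_hpc, new_pf

agent = TwoModuleRNN(obs_dim=4, hidden_dim=8)
h_hpc, h_pf = agent.step(np.ones(4))
print(h_hpc.shape, h_pf.shape)  # (8,) (8,)
```

In the paper's setting such a network would be trained end-to-end on the task by reinforcement learning, with replay-like reactivation then measured in the hippocampal module's hidden states rather than built in by hand.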
Related papers
- A Goal-Driven Approach to Systems Neuroscience [2.6451153531057985]
Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat this dense sensory information to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
arXiv Detail & Related papers (2023-11-05T16:37:53Z)
- Learning Human Cognitive Appraisal Through Reinforcement Memory Unit [63.83306892013521]
We propose a memory-enhancing mechanism for recurrent neural networks that exploits the effect of human cognitive appraisal in sequential assessment tasks.
We conceptualize the memory-enhancing mechanism as a Reinforcement Memory Unit (RMU) that contains an appraisal state together with positive and negative reinforcement memories.
arXiv Detail & Related papers (2022-08-06T08:56:55Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Learning offline: memory replay in biological and artificial reinforcement learning [1.0136215038345011]
We review the functional roles of replay in the fields of neuroscience and AI.
Replay is important for memory consolidation in biological neural networks.
It is also key to stabilising learning in deep neural networks.
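The stabilising role of replay in deep networks mentioned above is usually realised as an experience replay buffer. A minimal sketch of that standard mechanism follows (an illustration of the general technique, not code from the reviewed paper):

```python
# Minimal experience replay sketch: transitions are stored in a bounded
# buffer and later "replayed" as uniformly sampled minibatches, which
# breaks the temporal correlation of consecutive experience.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling decorrelates the minibatch from the trajectory.
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(10):
    buf.push(t, 0, 1.0, t + 1, False)
batch = buf.sample(4)
print(len(batch))  # 4
```

The biological analogy drawn in the review is that hippocampal replay similarly reactivates past experience offline, supporting consolidation in a way that interleaved minibatch replay supports stable gradient updates.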
arXiv Detail & Related papers (2021-09-21T08:57:19Z)
- Replay in Deep Learning: Current Approaches and Missing Biological Elements [33.20770284464084]
Replay is the reactivation of one or more neural patterns.
It is thought to play a critical role in memory formation, retrieval, and consolidation.
We provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks.
arXiv Detail & Related papers (2021-04-01T15:19:08Z)
- Neuroevolution of a Recurrent Neural Network for Spatial and Working Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
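Evolving RNN weights rather than training them by gradient descent, as in the entry above, can be illustrated with a simple (1+1) hill-climbing scheme. The objective below is a toy stand-in chosen for the sketch, not the rat-behavior fitness used in the paper:

```python
# Illustrative neuroevolution sketch: mutate the weight matrix of a small
# tanh RNN and keep a child only if its fitness does not decrease.
# The fitness here is a toy objective (drive mean hidden activity
# toward 0.5), purely for demonstration.
import numpy as np

rng = np.random.default_rng(2)

def fitness(W, steps=20):
    h = np.zeros(W.shape[0])
    for _ in range(steps):
        h = np.tanh(W @ h + 1.0)  # constant drive stands in for sensory input
    return -abs(h.mean() - 0.5)   # best possible fitness is 0

W = rng.normal(0, 0.5, (6, 6))
initial = fitness(W)
best = initial
for generation in range(200):
    child = W + rng.normal(0, 0.05, W.shape)  # Gaussian mutation
    f = fitness(child)
    if f >= best:                 # elitist selection: never get worse
        W, best = child, f
print(best >= initial)  # True: only non-worsening mutations are accepted
```

Real neuroevolution runs use populations and far richer fitness functions (here, matching rat behavior and neural activity), but the mutate-evaluate-select loop is the same.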
arXiv Detail & Related papers (2021-02-25T02:13:52Z)
- Episodic memory governs choices: An RNN-based reinforcement learning model for decision-making task [24.96447960548042]
We develop an RNN-based Actor-Critic framework to solve two tasks analogous to monkey decision-making tasks.
We explore an open question in neuroscience: which episodic memories in the hippocampus should be selected to ultimately govern future decisions.
arXiv Detail & Related papers (2021-01-24T04:33:07Z)
- Association: Remind Your GAN not to Forget [11.653696510515807]
We propose a brain-like approach that imitates the associative learning process to achieve continual learning.
Experiments demonstrate the effectiveness of our method in alleviating catastrophic forgetting on image-to-image translation tasks.
arXiv Detail & Related papers (2020-11-27T04:43:15Z)
- Noisy Agents: Self-supervised Exploration by Predicting Auditory Events [127.82594819117753]
We propose a novel type of intrinsic motivation for Reinforcement Learning (RL) that encourages the agent to understand the causal effect of its actions.
We train a neural network to predict the auditory events and use the prediction errors as intrinsic rewards to guide RL exploration.
Experimental results on Atari games show that our new intrinsic motivation significantly outperforms several state-of-the-art baselines.
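The prediction-error-as-intrinsic-reward idea above can be shown in miniature. The sketch below uses a linear forward model in place of the paper's neural network, and all names are illustrative assumptions:

```python
# Hedged sketch of curiosity-style intrinsic reward: a forward model
# predicts the next event from (state, action); its squared prediction
# error is the intrinsic reward, so poorly predicted (novel) events
# attract exploration. The model improves online, so repeated events
# earn shrinking rewards.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (2, 1))  # linear forward model: [state, action] -> event

def intrinsic_reward(state, action, observed_event, lr=0.1):
    x = np.array([state, action])
    pred = float(x @ W)
    error = observed_event - pred
    W[:, 0] += lr * error * x    # online update of the forward model
    return error ** 2            # large error => large intrinsic reward

r1 = intrinsic_reward(1.0, 1.0, 2.0)
r2 = intrinsic_reward(1.0, 1.0, 2.0)  # same event again: smaller reward
print(r2 < r1)  # True
```

In the paper the predicted quantity is an auditory event and the forward model is a neural network, but the reward signal is the same shape: surprise now, habituation later.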
arXiv Detail & Related papers (2020-07-27T17:59:08Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.