An introduction to reinforcement learning for neuroscience
- URL: http://arxiv.org/abs/2311.07315v3
- Date: Wed, 18 Dec 2024 08:39:02 GMT
- Title: An introduction to reinforcement learning for neuroscience
- Authors: Kristopher T. Jensen
- Abstract summary: Reinforcement learning has a rich history in neuroscience.
Deep reinforcement learning has led to new insights in neuroscience.
Code that implements the methods discussed and generates the figures is also provided.
- Abstract: Reinforcement learning (RL) has a rich history in neuroscience, from early work on dopamine as a reward prediction error signal (Schultz et al., 1997) to recent work proposing that the brain could implement a form of 'distributional reinforcement learning' popularized in machine learning (Dabney et al., 2020). There has been a close link between theoretical advances in reinforcement learning and neuroscience experiments throughout this literature, and the theories describing the experimental data have therefore become increasingly complex. Here, we provide an introduction and mathematical background to many of the methods that have been used in systems neuroscience. We start with an overview of the RL problem and classical temporal difference algorithms, followed by a discussion of 'model-free', 'model-based', and intermediate RL algorithms. We then introduce deep reinforcement learning and discuss how this framework has led to new insights in neuroscience. This includes a particular focus on meta-reinforcement learning (Wang et al., 2018) and distributional RL (Dabney et al., 2020). Finally, we discuss potential shortcomings of the RL formalism for neuroscience and highlight open questions in the field. Code that implements the methods discussed and generates the figures is also provided.
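The classical temporal difference algorithms mentioned in the abstract center on the reward prediction error, the quantity that Schultz et al. (1997) linked to dopamine signaling. As an illustration only (this is a minimal toy sketch, not the paper's provided code; the chain environment, learning rate, and discount factor below are assumptions), tabular TD(0) on a short deterministic chain looks like this:

```python
import numpy as np

# Toy 5-state chain: the agent always moves right; reward arrives
# only on entering the terminal state. Parameters are illustrative.
n_states = 5
V = np.zeros(n_states)       # value estimates, one per state
alpha, gamma = 0.1, 0.9      # learning rate, discount factor

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                                # deterministic transition
        r = 1.0 if s_next == n_states - 1 else 0.0    # reward at the end of the chain
        v_next = 0.0 if s_next == n_states - 1 else V[s_next]
        delta = r + gamma * v_next - V[s]             # reward prediction error
        V[s] += alpha * delta                         # TD(0) update
        s = s_next

print(np.round(V, 2))  # values approach gamma ** (steps remaining to reward)
```

The prediction error `delta` is the bridge to the neuroscience: before learning it fires at the reward itself, and after learning it transfers to earlier predictive states, mirroring the dopamine recordings the paper reviews.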
Related papers
- Integrating Causality with Neurochaos Learning: Proposed Approach and Research Agenda [1.534667887016089]
We investigate how causal and neurochaos learning approaches can be integrated together to produce better results.
We propose an approach for this integration to enhance classification, prediction and reinforcement learning.
arXiv Detail & Related papers (2025-01-23T15:45:29Z) - A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open to novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Lifelong Reinforcement Learning via Neuromodulation [13.765526492965853]
Evolution has imbued animals and humans with highly effective adaptive learning functions and decision-making strategies.
Central to these theories, and to integrating evidence from neuroscience into learning, is the neuromodulatory system.
arXiv Detail & Related papers (2024-08-15T22:53:35Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning equilibrium recurrent neural networks, deep equilibrium models, or meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z) - Neuro-Nav: A Library for Neurally-Plausible Reinforcement Learning [2.060642030400714]
We propose Neuro-Nav, an open-source library for neurally plausible reinforcement learning (RL).
Neuro-Nav offers a set of standardized environments and RL algorithms drawn from canonical behavioral and neural studies in rodents and humans.
We demonstrate that the toolkit replicates relevant findings from a number of studies across both cognitive science and RL literatures.
arXiv Detail & Related papers (2022-06-06T16:33:36Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - Deep Reinforcement Learning and its Neuroscientific Implications [19.478332877763417]
The emergence of powerful artificial intelligence is defining new research directions in neuroscience.
Deep reinforcement learning (Deep RL) offers a framework for studying the interplay among learning, representation and decision-making.
Deep RL offers a new set of research tools and a wide range of novel hypotheses.
arXiv Detail & Related papers (2020-07-07T19:27:54Z) - Artificial neural networks for neuroscientists: A primer [4.771833920251869]
Artificial neural networks (ANNs) are essential tools in machine learning that have drawn increasing attention in neuroscience.
In this pedagogical Primer, we introduce ANNs and demonstrate how they have been fruitfully deployed to study neuroscientific questions.
With a focus on bringing this mathematical framework closer to neurobiology, we detail how to customize the analysis, structure, and learning of ANNs.
arXiv Detail & Related papers (2020-06-01T15:08:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.