An introduction to reinforcement learning for neuroscience
- URL: http://arxiv.org/abs/2311.07315v3
- Date: Wed, 18 Dec 2024 08:39:02 GMT
- Title: An introduction to reinforcement learning for neuroscience
- Authors: Kristopher T. Jensen
- Abstract summary: Reinforcement learning has a rich history in neuroscience. Deep reinforcement learning has led to new insights in neuroscience. Code that implements the methods discussed and generates the figures is also provided.
- Score: 5.0401589279256065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning (RL) has a rich history in neuroscience, from early work on dopamine as a reward prediction error signal (Schultz et al., 1997) to recent work proposing that the brain could implement a form of 'distributional reinforcement learning' popularized in machine learning (Dabney et al., 2020). There has been a close link between theoretical advances in reinforcement learning and neuroscience experiments throughout this literature, and the theories describing the experimental data have therefore become increasingly complex. Here, we provide an introduction and mathematical background to many of the methods that have been used in systems neuroscience. We start with an overview of the RL problem and classical temporal difference algorithms, followed by a discussion of 'model-free', 'model-based', and intermediate RL algorithms. We then introduce deep reinforcement learning and discuss how this framework has led to new insights in neuroscience. This includes a particular focus on meta-reinforcement learning (Wang et al., 2018) and distributional RL (Dabney et al., 2020). Finally, we discuss potential shortcomings of the RL formalism for neuroscience and highlight open questions in the field. Code that implements the methods discussed and generates the figures is also provided.
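As a concrete illustration of the classical temporal difference methods the paper introduces, below is a minimal tabular TD(0) sketch (not the paper's accompanying code; the chain environment, learning rate, and discount factor are illustrative choices). The TD error delta = r + gamma * V(s') - V(s) is the quantity identified with the dopamine reward prediction error of Schultz et al. (1997).

```python
import numpy as np

# Minimal tabular TD(0) on a hypothetical 5-state chain: the agent
# drifts right and receives a reward of 1 on reaching the final state.
# The TD error `delta` plays the role of the dopamine reward prediction
# error of Schultz et al. (1997).

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)  # value estimates V(s)

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                      # deterministic transition
        r = 1.0 if s_next == n_states - 1 else 0.0
        v_next = 0.0 if s_next == n_states - 1 else V[s_next]
        delta = r + gamma * v_next - V[s]   # reward prediction error
        V[s] += alpha * delta               # TD(0) update
        s = s_next

print(V)  # approaches [gamma**3, gamma**2, gamma, 1.0, 0.0]
```

In the distributional variant highlighted in the abstract (Dabney et al., 2020), a population of value estimates is maintained per state and each unit applies this update with asymmetric scaling of positive and negative delta, so that optimistic and pessimistic estimates jointly encode the distribution of rewards rather than only its mean.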
Related papers
- Integrating Causality with Neurochaos Learning: Proposed Approach and Research Agenda [1.534667887016089]
We investigate how causal and neurochaos learning approaches can be integrated to produce better results.
We propose an approach for this integration to enhance classification, prediction and reinforcement learning.
arXiv Detail & Related papers (2025-01-23T15:45:29Z) - A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) are open to novel perspectives.
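As a rough intuition for points (i) and (ii), and not the Hamiltonian Learning algorithm itself, the sketch below writes learning as the gradient-flow ODE dw/dt = -grad L(w) and integrates it with a hand-rolled forward-Euler step; standard gradient descent is exactly this discretization, and no external ODE solver is involved. The regression task and step size are illustrative assumptions.

```python
import numpy as np

# Learning cast as an ODE: parameter dynamics dw/dt = -grad L(w),
# integrated with a hand-rolled forward-Euler step (no external solver).
# Illustrative linear regression task; not the paper's algorithm.

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(3)
dt = 0.01  # integration step size (plays the role of the learning rate)
for step in range(2000):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w = w + dt * (-grad)                   # one Euler step of the ODE
print(w)  # close to w_true
```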
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Lifelong Reinforcement Learning via Neuromodulation [13.765526492965853]
Evolution has imbued animals and humans with highly effective adaptive learning functions and decision-making strategies.
Central to these theories, and to integrating evidence from neuroscience into learning, is the neuromodulatory system.
arXiv Detail & Related papers (2024-08-15T22:53:35Z)
- A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research aims to bridge this gap by introducing a comprehensive and overarching framework that encompasses and reconciles these existing methodologies.
arXiv Detail & Related papers (2024-03-20T02:21:44Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
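To make the mechanism concrete, here is a rate-based (non-spiking) sketch of the classical result this method builds on: Hebbian feedforward learning combined with anti-Hebbian learning on recurrent lateral connections decorrelates the outputs, so that together the units extract the principal subspace of their inputs. The network size, input covariance, and learning rate are illustrative assumptions, not the paper's spiking implementation.

```python
import numpy as np

# Foldiak-style linear network: Hebbian (Oja) feedforward weights plus
# anti-Hebbian lateral weights extract the principal subspace of the
# inputs. Rate-based sketch with hypothetical parameters.

rng = np.random.default_rng(2)
d, k, eta = 5, 2, 0.005
C = np.diag([5.0, 3.0, 1.0, 0.5, 0.1])  # input covariance (top-2 = axes 0, 1)
A = np.linalg.cholesky(C)

W = 0.1 * rng.normal(size=(k, d))        # feedforward (Hebbian) weights
L = np.zeros((k, k))                     # lateral (anti-Hebbian) weights

for _ in range(30000):
    x = A @ rng.normal(size=d)
    y = np.linalg.solve(np.eye(k) - L, W @ x)  # settled output of y = Wx + Ly
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)  # Oja-style Hebbian
    L -= eta * np.outer(y, y)                             # anti-Hebbian
    np.fill_diagonal(L, 0.0)                              # no self-connections

print(np.round(W, 2))  # rows approximately span the top-2 principal subspace
```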
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Curriculum effects and compositionality emerge with in-context learning in neural networks [15.744573869783972]
We show that networks capable of "in-context learning" (ICL) can reproduce human-like learning and compositional behavior on rule-governed tasks.
Our work shows how emergent ICL can equip neural networks with fundamentally different learning properties than those traditionally attributed to them.
arXiv Detail & Related papers (2024-02-13T18:55:27Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We show theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning that applies to equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z)
- Neuro-Nav: A Library for Neurally-Plausible Reinforcement Learning [2.060642030400714]
We propose Neuro-Nav, an open-source library for neurally plausible reinforcement learning (RL).
Neuro-Nav offers a set of standardized environments and RL algorithms drawn from canonical behavioral and neural studies in rodents and humans.
We demonstrate that the toolkit replicates relevant findings from a number of studies across both cognitive science and RL literatures.
arXiv Detail & Related papers (2022-06-06T16:33:36Z)
- Mixture-of-Variational-Experts for Continual Learning [0.0]
We propose an optimality principle that facilitates a trade-off between learning and forgetting.
We propose a neural network layer for continual learning, called Mixture-of-Variational-Experts (MoVE).
Our experiments on variants of the MNIST and CIFAR10 datasets demonstrate the competitive performance of MoVE layers.
arXiv Detail & Related papers (2021-10-25T06:32:06Z)
- On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
arXiv Detail & Related papers (2021-06-08T21:09:55Z)
- Transfer Learning in Deep Reinforcement Learning: A Survey [64.36174156782333]
Reinforcement learning is a learning paradigm for solving sequential decision-making problems.
Recent years have witnessed remarkable progress in reinforcement learning upon the fast development of deep neural networks.
Transfer learning has arisen to tackle various challenges faced by reinforcement learning.
arXiv Detail & Related papers (2020-09-16T18:38:54Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works in the direction of attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Deep Reinforcement Learning and its Neuroscientific Implications [19.478332877763417]
The emergence of powerful artificial intelligence is defining new research directions in neuroscience.
Deep reinforcement learning (Deep RL) offers a framework for studying the interplay among learning, representation and decision-making.
Deep RL offers a new set of research tools and a wide range of novel hypotheses.
arXiv Detail & Related papers (2020-07-07T19:27:54Z)
- Reinforcement Learning and its Connections with Neuroscience and Psychology [0.0]
We review findings in both neuroscience and psychology that support reinforcement learning as a promising candidate for modeling learning and decision making in the brain.
We then discuss the implications of this observed relationship between RL, neuroscience and psychology and its role in advancing research in both AI and brain science.
arXiv Detail & Related papers (2020-06-25T04:29:15Z)
- Artificial neural networks for neuroscientists: A primer [4.771833920251869]
Artificial neural networks (ANNs) are essential tools in machine learning that have drawn increasing attention in neuroscience.
In this pedagogical Primer, we introduce ANNs and demonstrate how they have been fruitfully deployed to study neuroscientific questions.
With a focus on bringing this mathematical framework closer to neurobiology, we detail how to customize the analysis, structure, and learning of ANNs.
arXiv Detail & Related papers (2020-06-01T15:08:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.