An introduction to reinforcement learning for neuroscience
- URL: http://arxiv.org/abs/2311.07315v2
- Date: Thu, 1 Aug 2024 16:07:02 GMT
- Title: An introduction to reinforcement learning for neuroscience
- Authors: Kristopher T. Jensen
- Abstract summary: Reinforcement learning has a rich history in neuroscience, from early work on dopamine as a reward prediction error signal for temporal difference learning to recent work suggesting that dopamine could implement a form of 'distributional reinforcement learning' popularized in deep learning.
- Score: 5.0401589279256065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning has a rich history in neuroscience, from early work on dopamine as a reward prediction error signal for temporal difference learning (Schultz et al., 1997) to recent work suggesting that dopamine could implement a form of 'distributional reinforcement learning' popularized in deep learning (Dabney et al., 2020). Throughout this literature, there has been a tight link between theoretical advances in reinforcement learning and neuroscientific experiments and findings. As a result, the theories describing our experimental data have become increasingly complex and difficult to navigate. In this review, we cover the basic theory underlying classical work in reinforcement learning and build up to an introductory overview of methods in modern deep reinforcement learning that have found applications in systems neuroscience. We start with an overview of the reinforcement learning problem and classical temporal difference algorithms, followed by a discussion of 'model-free' and 'model-based' reinforcement learning together with methods such as DYNA and successor representations that fall in between these two extremes. Throughout these sections, we highlight the close parallels between such machine learning methods and related work in both experimental and theoretical neuroscience. We then provide an introduction to deep reinforcement learning with examples of how these methods have been used to model different learning phenomena in systems neuroscience, such as meta-reinforcement learning (Wang et al., 2018) and distributional reinforcement learning (Dabney et al., 2020). Code that implements the methods discussed in this work and generates the figures is also provided.
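The review builds from classical temporal difference learning, in which a reward prediction error drives value updates and is the quantity dopamine has been proposed to signal (Schultz et al., 1997). As a minimal sketch of this idea (not the code released with the paper), the snippet below runs tabular TD(0) on a hypothetical five-state track; the task, policy, and hyperparameters are illustrative assumptions.
```python
import numpy as np

# Minimal tabular TD(0) sketch on a hypothetical five-state track.
# The task, policy, and hyperparameters are illustrative assumptions,
# not the code accompanying the paper.
rng = np.random.default_rng(0)
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)            # state-value estimates
reward = np.zeros(n_states)
reward[-1] = 1.0                  # reward delivered only at the final state

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = min(s + int(rng.integers(1, 3)), n_states - 1)
        # Reward prediction error: the dopamine-like teaching signal.
        # Bootstrapping is switched off at the terminal state.
        delta = reward[s_next] + gamma * V[s_next] * (s_next < n_states - 1) - V[s]
        V[s] += alpha * delta     # TD(0) update
        s = s_next

print(np.round(V, 3))             # values rise toward the rewarded state
```
Over repeated episodes the prediction error migrates from the reward itself to earlier predictive states, which is the signature that motivated the dopamine-RPE analogy in the first place.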
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that (i) can be integrated without the need for external software solvers, (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks, and (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Lifelong Reinforcement Learning via Neuromodulation [13.765526492965853]
Evolution has imbued animals and humans with highly effective adaptive learning functions and decision-making strategies.
Central to these capabilities, and to integrating evidence from neuroscience into machine learning, is the neuromodulatory system.
arXiv Detail & Related papers (2024-08-15T22:53:35Z)
- A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research aims to bridge the gap between these approaches by introducing a comprehensive and overarching framework that encompasses and reconciles the existing methodologies.
arXiv Detail & Related papers (2024-03-20T02:21:44Z)
- Curriculum effects and compositionality emerge with in-context learning in neural networks [15.744573869783972]
We show that networks capable of "in-context learning" (ICL) can reproduce human-like learning and compositional behavior on rule-governed tasks.
Our work shows how emergent ICL can equip neural networks with fundamentally different learning properties than those traditionally attributed to them.
arXiv Detail & Related papers (2024-02-13T18:55:27Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We show theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning in equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z)
- Mixture-of-Variational-Experts for Continual Learning [0.0]
We propose an optimality principle that facilitates a trade-off between learning and forgetting.
We propose a neural network layer for continual learning, called Mixture-of-Variational-Experts (MoVE).
Our experiments on variants of the MNIST and CIFAR10 datasets demonstrate the competitive performance of MoVE layers.
arXiv Detail & Related papers (2021-10-25T06:32:06Z)
- On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
arXiv Detail & Related papers (2021-06-08T21:09:55Z)
- Transfer Learning in Deep Reinforcement Learning: A Survey [64.36174156782333]
Reinforcement learning is a learning paradigm for solving sequential decision-making problems.
Recent years have witnessed remarkable progress in reinforcement learning upon the fast development of deep neural networks.
Transfer learning has arisen to tackle various challenges faced by reinforcement learning.
arXiv Detail & Related papers (2020-09-16T18:38:54Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that also extend to the network's interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Reinforcement Learning and its Connections with Neuroscience and Psychology [0.0]
We review findings in both neuroscience and psychology that support reinforcement learning as a promising candidate for modeling learning and decision making in the brain.
We then discuss the implications of this observed relationship between RL, neuroscience, and psychology, and its role in advancing research in both AI and brain science.
arXiv Detail & Related papers (2020-06-25T04:29:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.