An introduction to reinforcement learning for neuroscience
- URL: http://arxiv.org/abs/2311.07315v1
- Date: Mon, 13 Nov 2023 13:10:52 GMT
- Title: An introduction to reinforcement learning for neuroscience
- Authors: Kristopher T. Jensen
- Abstract summary: Reinforcement learning has a rich history in neuroscience, from early work on dopamine as a reward prediction error signal for temporal difference learning
to recent work suggesting that dopamine could implement a form of 'distributional reinforcement learning' popularized in deep learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning has a rich history in neuroscience, from early work on
dopamine as a reward prediction error signal for temporal difference learning
(Schultz et al., 1997) to recent work suggesting that dopamine could implement
a form of 'distributional reinforcement learning' popularized in deep learning
(Dabney et al., 2020). Throughout this literature, there has been a tight link
between theoretical advances in reinforcement learning and neuroscientific
experiments and findings. As a result, the theories describing our experimental
data have become increasingly complex and difficult to navigate. In this
review, we cover the basic theory underlying classical work in reinforcement
learning and build up to an introductory overview of methods used in modern
deep reinforcement learning that have found applications in systems
neuroscience. We start with an overview of the reinforcement learning problem
and classical temporal difference algorithms, followed by a discussion of
'model-free' and 'model-based' reinforcement learning together with methods
such as DYNA and successor representations that fall in between these two
categories. Throughout these sections, we highlight the close parallels between
the machine learning methods and related work in both experimental and
theoretical neuroscience. We then provide an introduction to deep reinforcement
learning with examples of how these methods have been used to model different
learning phenomena in the systems neuroscience literature, such as
meta-reinforcement learning (Wang et al., 2018) and distributional
reinforcement learning (Dabney et al., 2020). Code that implements the methods
discussed in this work and generates the figures is also provided.
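For concreteness, below is a minimal, self-contained sketch of tabular temporal-difference (TD(0)) value learning of the kind covered in the classical part of the review; the toy chain environment and all parameter values are illustrative assumptions and are not taken from the code released with the paper.

```python
import numpy as np

# Tabular TD(0) value learning on a toy three-state chain (0 -> 1 -> 2, terminal).
# Environment, rewards, and hyperparameters are illustrative assumptions only.
n_states = 3
rewards = np.array([0.0, 0.0, 1.0])   # reward received on entering each state
alpha, gamma = 0.1, 0.9               # learning rate and discount factor
V = np.zeros(n_states)                # value estimates

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                          # deterministic transition
        r = rewards[s_next]
        # TD error; this is the quantity identified with the phasic dopamine
        # reward prediction error signal (Schultz et al., 1997).
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta                   # TD(0) update
        s = s_next

print(V)   # approaches [gamma * 1.0, 1.0, 0.0] = [0.9, 1.0, 0.0]
```

In the distributional variant discussed in the review (Dabney et al., 2020), the single scalar V[s] is replaced by a set of value estimates (e.g. quantiles) that are updated asymmetrically by the same kind of TD error, yielding a learned distribution over returns rather than only its mean.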
Related papers
- A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research introduces a comprehensive and overarching framework that encompasses and reconciles these existing methodologies.
arXiv Detail & Related papers (2024-03-20T02:21:44Z) - Loss Dynamics of Temporal Difference Reinforcement Learning [36.772501199987076]
We study learning curves for temporal difference learning of a value function with linear function approximation.
We study how learning dynamics and plateaus depend on feature structure, learning rate, discount factor, and reward function.
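As a rough illustration of that setting, the sketch below runs semi-gradient TD(0) with linear function approximation on a small random-walk chain while recording a learning curve of the value error; the chain, random features, and hyperparameters are assumptions for illustration and do not reproduce the paper's analysis.

```python
import numpy as np

# Semi-gradient TD(0) with linear function approximation on a 5-state random walk.
# Environment, random features, and hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)
n_states, n_features = 5, 3
Phi = rng.normal(size=(n_states, n_features))   # fixed random feature vector phi(s) per state
alpha, gamma = 0.05, 1.0                        # step size; gamma = 1 for the episodic walk
w = np.zeros(n_features)                        # weights of the linear value function

# True state values of the symmetric walk with +1 reward at the right terminal.
V_true = np.arange(1, n_states + 1) / (n_states + 1)   # [1/6, ..., 5/6]

errors = []                                     # learning curve: RMS value error per episode
for episode in range(2000):
    s = n_states // 2                           # start in the middle state
    while True:
        s_next = s + rng.choice([-1, 1])        # unbiased random step
        done = s_next < 0 or s_next >= n_states
        r = 1.0 if s_next >= n_states else 0.0  # reward only at the right terminal
        v_next = 0.0 if done else Phi[s_next] @ w
        delta = r + gamma * v_next - Phi[s] @ w  # TD error
        w += alpha * delta * Phi[s]              # semi-gradient TD(0) update
        if done:
            break
        s = s_next
    errors.append(np.sqrt(np.mean((Phi @ w - V_true) ** 2)))

print(errors[::400])   # the error falls and then plateaus at the approximation floor
```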
arXiv Detail & Related papers (2023-07-10T18:17:50Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We show theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Context sequence theory: a common explanation for multiple types of learning [0.0]
We propose the context sequence theory to give a common explanation for multiple types of learning in mammals.
We hope this can provide new insight into the construction of machine learning models.
arXiv Detail & Related papers (2022-07-17T12:51:52Z) - The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning that applies to equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z) - Mixture-of-Variational-Experts for Continual Learning [0.0]
We propose an optimality principle that facilitates a trade-off between learning and forgetting.
We propose a neural network layer for continual learning, called Mixture-of-Variational-Experts (MoVE).
Our experiments on variants of the MNIST and CIFAR10 datasets demonstrate the competitive performance of MoVE layers.
arXiv Detail & Related papers (2021-10-25T06:32:06Z) - On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
arXiv Detail & Related papers (2021-06-08T21:09:55Z) - Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
arXiv Detail & Related papers (2020-10-27T13:17:18Z) - Transfer Learning in Deep Reinforcement Learning: A Survey [64.36174156782333]
Reinforcement learning is a learning paradigm for solving sequential decision-making problems.
Recent years have witnessed remarkable progress in reinforcement learning, driven by the rapid development of deep neural networks.
Transfer learning has emerged to tackle various challenges faced by reinforcement learning.
arXiv Detail & Related papers (2020-09-16T18:38:54Z) - Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z) - Reinforcement Learning and its Connections with Neuroscience and Psychology [0.0]
We review findings in both neuroscience and psychology that support reinforcement learning as a promising candidate for modeling learning and decision making in the brain.
We then discuss the implications of this observed relationship between RL, neuroscience and psychology and its role in advancing research in both AI and brain science.
arXiv Detail & Related papers (2020-06-25T04:29:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.