Deep Reinforcement Learning and its Neuroscientific Implications
- URL: http://arxiv.org/abs/2007.03750v1
- Date: Tue, 7 Jul 2020 19:27:54 GMT
- Title: Deep Reinforcement Learning and its Neuroscientific Implications
- Authors: Matthew Botvinick, Jane X. Wang, Will Dabney, Kevin J. Miller, Zeb
Kurth-Nelson
- Abstract summary: The emergence of powerful artificial intelligence is defining new research directions in neuroscience.
Deep reinforcement learning (Deep RL) offers a framework for studying the interplay among learning, representation and decision-making.
Deep RL offers a new set of research tools and a wide range of novel hypotheses.
- Score: 19.478332877763417
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence of powerful artificial intelligence is defining new research
directions in neuroscience. To date, this research has focused largely on deep
neural networks trained using supervised learning, in tasks such as image
classification. However, there is another area of recent AI work which has so
far received less attention from neuroscientists, but which may have profound
neuroscientific implications: deep reinforcement learning. Deep RL offers a
comprehensive framework for studying the interplay among learning,
representation and decision-making, offering to the brain sciences a new set of
research tools and a wide range of novel hypotheses. In the present review, we
provide a high-level introduction to deep RL, discuss some of its initial
applications to neuroscience, and survey its wider implications for research on
brain and behavior, concluding with a list of opportunities for next-stage
research.
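To make the framework concrete, here is a minimal deep RL sketch in the temporal-difference / DQN style the review surveys. The toy environment, network size, and hyperparameters are illustrative assumptions, not material from the paper.
```python
# Minimal sketch of a deep RL update (temporal-difference / DQN style).
# The toy environment, network size and hyperparameters are assumptions
# for illustration only; they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_hidden = 4, 2, 16
gamma, lr, epsilon = 0.9, 0.05, 0.1

# A tiny two-layer value network: Q(s, .) = W2 @ tanh(W1 @ onehot(s))
W1 = rng.normal(scale=0.1, size=(n_hidden, n_states))
W2 = rng.normal(scale=0.1, size=(n_actions, n_hidden))

def q_values(state):
    x = np.zeros(n_states)
    x[state] = 1.0                      # one-hot state encoding
    h = np.tanh(W1 @ x)                 # learned internal representation
    return W2 @ h, h, x

def env_step(state, action):
    # Hypothetical ring environment: reward only for action 1 in the last state.
    reward = 1.0 if (state == n_states - 1 and action == 1) else 0.0
    return (state + 1) % n_states, reward

state = 0
for t in range(5000):
    q, h, x = q_values(state)
    action = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(q))
    next_state, reward = env_step(state, action)

    # Temporal-difference error: the quantity deep RL shares with
    # reward-prediction-error accounts of dopamine.
    target = reward + gamma * np.max(q_values(next_state)[0])
    td_error = target - q[action]

    # Semi-gradient update of the chosen action's value.
    grad_W2a = h
    grad_W1 = np.outer(W2[action] * (1.0 - h ** 2), x)
    W2[action] += lr * td_error * grad_W2a
    W1 += lr * td_error * grad_W1
    state = next_state

print(np.round(q_values(n_states - 1)[0], 2))  # action 1 should end up valued highest here
```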
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we directly couple sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the artificial neuron sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
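As an illustration of the principal-subspace claim in the entry above, the sketch below applies a generic Oja-style Hebbian rule (with an implicit decorrelating, anti-Hebbian-like term) to synthetic data; it is an assumed stand-in, not the paper's orthogonal-projection method.
```python
# Illustrative sketch only: a generic Oja-style Hebbian subspace rule on
# synthetic data, showing how a local Hebbian update with an implicit
# decorrelating (anti-Hebbian-like) term extracts the principal subspace of
# its input activity. This is not the paper's orthogonal-projection method.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, lr = 10, 3, 0.002

# Synthetic "neural activity": a low-rank mixture plus a little noise.
mixing = rng.normal(size=(n_in, 3))
X = rng.normal(size=(5000, 3)) @ mixing.T + 0.1 * rng.normal(size=(5000, n_in))
X -= X.mean(axis=0)

W = rng.normal(scale=0.1, size=(n_out, n_in))
for x in X:
    y = W @ x
    # Hebbian term (outer(y, x)) grows with input/output co-activity;
    # the -outer(y, y) @ W term decorrelates the outputs and bounds the weights.
    W += lr * (np.outer(y, x) - np.outer(y, y) @ W)

# The rows of W should (approximately) span the top principal subspace of X.
U = np.linalg.svd(X, full_matrices=False)[2][:n_out]   # top right singular vectors
overlap = np.linalg.norm(U @ W.T) / np.linalg.norm(W)
print(f"subspace overlap (1.0 = perfect): {overlap:.3f}")
```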
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Towards Data-and Knowledge-Driven Artificial Intelligence: A Survey on Neuro-Symbolic Computing [73.0977635031713]
Neural-symbolic computing (NeSy) has been an active research area of Artificial Intelligence (AI) for many years.
NeSy shows promise of reconciling the advantages of reasoning and interpretability of symbolic representation and robust learning in neural networks.
arXiv Detail & Related papers (2022-10-28T04:38:10Z)
- Neuro-Nav: A Library for Neurally-Plausible Reinforcement Learning [2.060642030400714]
We propose Neuro-Nav, an open-source library for neurally plausible reinforcement learning (RL).
Neuro-Nav offers a set of standardized environments and RL algorithms drawn from canonical behavioral and neural studies in rodents and humans.
We demonstrate that the toolkit replicates relevant findings from a number of studies across both cognitive science and RL literatures.
arXiv Detail & Related papers (2022-06-06T16:33:36Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL), in which agents are self-motivated to learn novel knowledge, has become increasingly popular.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
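As a simplified illustration of the curiosity-driven recipe summarized above, the sketch below implements a prediction-error intrinsic reward with a hypothetical tabular forward model; it is not code from the paper.
```python
# Illustrative sketch (assumptions, not from the paper): a prediction-error
# style intrinsic reward, the common recipe behind curiosity-driven learning.
# The agent receives a bonus proportional to how badly a learned forward model
# predicts the next state, so poorly understood transitions attract exploration.
import numpy as np

n_states, n_actions, lr, beta = 8, 2, 0.1, 1.0

# Hypothetical tabular forward model: predicted next-state distribution per (state, action).
forward_model = np.full((n_states, n_actions, n_states), 1.0 / n_states)

def intrinsic_reward(state, action, next_state):
    # Curiosity bonus = squared prediction error between the model's predicted
    # next-state distribution and the observed one-hot outcome.
    observed = np.zeros(n_states)
    observed[next_state] = 1.0
    error = np.sum((forward_model[state, action] - observed) ** 2)
    # Update the model so repeatedly visited transitions become "boring".
    forward_model[state, action] += lr * (observed - forward_model[state, action])
    return beta * error

# Usage inside any RL loop: total_reward = extrinsic_reward + intrinsic_reward(s, a, s2)
s, a, s2 = 0, 1, 3
print(intrinsic_reward(s, a, s2))   # large on the first visit ...
for _ in range(50):
    intrinsic_reward(s, a, s2)
print(intrinsic_reward(s, a, s2))   # ... and shrinks as the transition becomes familiar
```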
- CCN GAC Workshop: Issues with learning in biological recurrent neural networks [11.725061054663872]
This perspective piece came about through the Generative Adversarial Collaboration (GAC) series of workshops organized by the Computational Cognitive Neuroscience (CCN) conference in 2020.
We will give a brief review of the common assumptions about biological learning and the corresponding findings from experimental neuroscience.
We will then outline the key issues discussed in the workshop: synaptic plasticity, neural circuits, theory-experiment divide, and objective functions.
arXiv Detail & Related papers (2021-05-12T00:59:40Z)
- Probing artificial neural networks: insights from neuroscience [6.7832320606111125]
Neuroscience has paved the way in using such models through numerous studies conducted in recent decades.
We argue that specific research goals play a paramount role when designing a probe and encourage future probing studies to be explicit in stating these goals.
arXiv Detail & Related papers (2021-04-16T16:13:23Z)
- Reinforcement Learning and its Connections with Neuroscience and Psychology [0.0]
We review findings in both neuroscience and psychology that evidence reinforcement learning as a promising candidate for modeling learning and decision making in the brain.
We then discuss the implications of this observed relationship between RL, neuroscience and psychology and its role in advancing research in both AI and brain science.
arXiv Detail & Related papers (2020-06-25T04:29:15Z)
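To make the RL-neuroscience link above concrete, here is a minimal tabular TD(0) learner; the TD error delta = r + gamma*V(s') - V(s) is the quantity most often compared with phasic dopamine responses. The chain task and parameters are illustrative assumptions, not material from the paper.
```python
# Minimal tabular TD(0) learner; the TD error delta = r + gamma*V(s') - V(s)
# is the reward-prediction-error signal compared with phasic dopamine activity.
# The chain task and parameters below are illustrative assumptions.
import numpy as np

n_states, gamma, lr = 5, 0.9, 0.1
V = np.zeros(n_states)

def td_update(state, reward, next_state):
    delta = reward + gamma * V[next_state] - V[state]   # reward prediction error
    V[state] += lr * delta
    return delta

# A cue (state 0) reliably followed by reward on the transition into state 4.
for episode in range(300):
    deltas = [td_update(s, 1.0 if s == 3 else 0.0, s + 1) for s in range(4)]

# With learning, value propagates backward from the reward toward the cue,
# and the prediction error at the rewarded transition shrinks toward zero.
print(np.round(V, 2), np.round(deltas, 3))
```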
- On Interpretability of Artificial Neural Networks: A Survey [21.905647127437685]
We systematically review recent studies on understanding the mechanisms of neural networks and describe applications of interpretability, especially in medicine.
We discuss future directions of interpretability research, such as in relation to fuzzy logic and brain science.
arXiv Detail & Related papers (2020-01-08T13:40:42Z)