CCN GAC Workshop: Issues with learning in biological recurrent neural
networks
- URL: http://arxiv.org/abs/2105.05382v1
- Date: Wed, 12 May 2021 00:59:40 GMT
- Title: CCN GAC Workshop: Issues with learning in biological recurrent neural
networks
- Authors: Luke Y. Prince, Ellen Boven, Roy Henha Eyono, Arna Ghosh, Joe
Pemberton, Franz Scherr, Claudia Clopath, Rui Ponte Costa, Wolfgang Maass,
Blake A. Richards, Cristina Savin, Katharina Anna Wilmes
- Abstract summary: This perspective piece came about through the Generative Adversarial Collaboration (GAC) series of workshops organized by the Computational Cognitive Neuroscience (CCN) conference in 2020.
We will give a brief review of the common assumptions about biological learning and the corresponding findings from experimental neuroscience.
We will then outline the key issues discussed in the workshop: synaptic plasticity, neural circuits, theory-experiment divide, and objective functions.
- Score: 11.725061054663872
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This perspective piece came about through the Generative Adversarial
Collaboration (GAC) series of workshops organized by the Computational
Cognitive Neuroscience (CCN) conference in 2020. We brought together a number
of experts from the field of theoretical neuroscience to debate emerging issues
in our understanding of how learning is implemented in biological recurrent
neural networks. Here, we will give a brief review of the common assumptions
about biological learning and the corresponding findings from experimental
neuroscience and contrast them with the efficiency of gradient-based learning
in recurrent neural networks commonly used in artificial intelligence. We will
then outline the key issues discussed in the workshop: synaptic plasticity,
neural circuits, theory-experiment divide, and objective functions. Finally, we
conclude with recommendations for both theoretical and experimental
neuroscientists when designing new studies that could help to bring clarity to
these issues.
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the ANN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Probing Biological and Artificial Neural Networks with Task-dependent
Neural Manifolds [12.037840490243603]
We investigate the internal mechanisms of neural networks through the lens of neural population geometry.
We quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models.
These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry.
arXiv Detail & Related papers (2023-12-21T20:40:51Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Neural Networks from Biological to Artificial and Vice Versa [6.85316573653194]
The key contribution of this paper is the investigation of the impact of a dead neuron on the performance of artificial neural networks (ANNs).
The aim is to assess the potential application of the findings in the biological domain; the expected results may have significant implications for the development of effective treatment strategies for neurological disorders.
arXiv Detail & Related papers (2023-06-05T17:30:07Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Constraints on the design of neuromorphic circuits set by the properties
of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - Deep Reinforcement Learning and its Neuroscientific Implications [19.478332877763417]
The emergence of powerful artificial intelligence is defining new research directions in neuroscience.
Deep reinforcement learning (Deep RL) offers a framework for studying the interplay among learning, representation and decision-making.
Deep RL offers a new set of research tools and a wide range of novel hypotheses.
arXiv Detail & Related papers (2020-07-07T19:27:54Z) - Artificial neural networks for neuroscientists: A primer [4.771833920251869]
Artificial neural networks (ANNs) are essential tools in machine learning that have drawn increasing attention in neuroscience.
In this pedagogical Primer, we introduce ANNs and demonstrate how they have been fruitfully deployed to study neuroscientific questions.
With a focus on bringing this mathematical framework closer to neurobiology, we detail how to customize the analysis, structure, and learning of ANNs.
arXiv Detail & Related papers (2020-06-01T15:08:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.