Meta-Learning in Spiking Neural Networks with Reward-Modulated STDP
- URL: http://arxiv.org/abs/2306.04410v1
- Date: Wed, 7 Jun 2023 13:08:46 GMT
- Title: Meta-Learning in Spiking Neural Networks with Reward-Modulated STDP
- Authors: Arsham Gholamzadeh Khoee, Alireza Javaheri, Saeed Reza Kheradpisheh, and Mohammad Ganjtabesh
- Abstract summary: We propose a bio-plausible meta-learning model inspired by the hippocampus and the prefrontal cortex.
Our new model can easily be applied to spike-based neuromorphic devices and enables fast learning in neuromorphic hardware.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The human brain constantly learns and rapidly adapts to new situations by
integrating acquired knowledge and experiences into memory. Developing this
capability in machine learning models is considered an important goal of AI
research since deep neural networks perform poorly when there is limited data
or when they need to adapt quickly to new, unseen tasks. Meta-learning models
have been proposed to facilitate rapid learning in low-data regimes by
exploiting knowledge absorbed from past tasks. Although some recently
introduced models reach high performance, they are not biologically plausible.
We propose a bio-plausible meta-learning model, inspired by the hippocampus and
the prefrontal cortex, that uses spiking neural networks with a reward-based
learning system. The model includes a memory designed to prevent catastrophic
forgetting, the phenomenon in which meta-learning models forget what they have
learned as soon as a new task begins. Moreover, the model can easily be applied
to spike-based neuromorphic devices, enabling fast learning in neuromorphic
hardware. The final analysis discusses the implications and predictions of the
model for few-shot classification tasks, on which the model has demonstrated
the ability to compete with existing state-of-the-art meta-learning techniques.
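To make the learning rule named in the title concrete, the following is a
minimal Python sketch of reward-modulated STDP with per-synapse eligibility
traces, the general mechanism behind such reward-based learning systems. The
class name, parameter values, and layer sizes are illustrative assumptions,
not the authors' implementation.

import numpy as np

class RSTDPSynapses:
    """Reward-modulated STDP: STDP tags co-active synapses in an eligibility
    trace, and a later scalar reward converts the tags into weight changes."""

    def __init__(self, n_pre, n_post, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 0.5, (n_pre, n_post))  # synaptic weights
        self.elig = np.zeros((n_pre, n_post))            # eligibility traces
        self.x_pre = np.zeros(n_pre)                     # presynaptic spike traces
        self.x_post = np.zeros(n_post)                   # postsynaptic spike traces
        self.tau_pre = self.tau_post = 20.0              # STDP time constants (ms)
        self.tau_e = 500.0                               # eligibility decay (ms)
        self.a_plus, self.a_minus = 0.01, 0.012          # STDP amplitudes
        self.lr, self.dt = 0.1, 1.0                      # learning rate, step (ms)

    def step(self, pre_spikes, post_spikes, reward):
        """Advance one timestep; spike vectors are 0/1, reward is a scalar."""
        dt = self.dt
        # Exponentially decaying traces of recent pre-/postsynaptic spiking.
        self.x_pre = self.x_pre * (1 - dt / self.tau_pre) + pre_spikes
        self.x_post = self.x_post * (1 - dt / self.tau_post) + post_spikes
        # STDP term: potentiate pre-before-post pairings, depress the reverse.
        stdp = (self.a_plus * np.outer(self.x_pre, post_spikes)
                - self.a_minus * np.outer(pre_spikes, self.x_post))
        # The eligibility trace decays slowly and accumulates the STDP tags.
        self.elig = self.elig * (1 - dt / self.tau_e) + stdp
        # Only a nonzero reward turns the tags into actual weight changes.
        self.w = np.clip(self.w + self.lr * reward * self.elig * dt, 0.0, 1.0)

The key property is that spike timing alone never changes a weight; it only
marks recently co-active synapses, so a reward arriving many timesteps later
can still assign credit to the spike pairings that preceded it.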
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Learning-to-learn enables rapid learning with phase-change memory-based in-memory computing [38.34244217803562]
There is a growing demand for low-power, autonomously learning artificial intelligence (AI) systems that can be deployed at the edge and rapidly adapt to the specific situation at the deployment site.
In this work, we pair L2L with in-memory computing neuromorphic hardware to build efficient AI models that can rapidly adapt to new tasks.
We demonstrate the versatility of our approach in two scenarios: a convolutional neural network performing image classification and a biologically-inspired spiking neural network generating motor commands for a real robotic arm.
arXiv Detail & Related papers (2024-04-22T15:03:46Z)
- A Survey on Knowledge Editing of Neural Networks [43.813073385305806]
Even the largest artificial neural networks make mistakes, and once-correct predictions can become invalid as the world progresses in time.
Knowledge editing is emerging as a novel area of research that aims to enable reliable, data-efficient, and fast changes to a pre-trained target model.
We first introduce the problem of editing neural networks, formalize it in a common framework, and differentiate it from better-known branches of research such as continual learning.
arXiv Detail & Related papers (2023-10-30T16:29:47Z)
- Neural Routing in Meta Learning [9.070747377130472]
We aim to improve the performance of current meta-learning algorithms by selectively using only parts of the model, conditioned on the input task.
In this work, we describe an approach that investigates task-dependent dynamic neuron selection in deep convolutional neural networks (CNNs) by leveraging the scaling factor in the batch normalization layer.
We find that the proposed approach, neural routing in meta learning (NRML), outperforms one of the well-known existing meta-learning baselines on few-shot classification tasks.
arXiv Detail & Related papers (2022-10-14T16:31:24Z)
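As a concrete reading of the mechanism described in this entry, below is a
hedged PyTorch sketch that ranks channels by the magnitude of the learned
batch-normalization scale (gamma) and keeps only the top fraction. The module
name and the static top-k rule are illustrative assumptions; the paper
conditions the selection on the input task.

import torch
import torch.nn as nn

class RoutedConvBlock(nn.Module):
    """Conv + BN block that routes through the highest-|gamma| channels."""

    def __init__(self, c_in, c_out, keep_ratio=0.5):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(c_out)
        self.keep_ratio = keep_ratio

    def forward(self, x):
        h = torch.relu(self.bn(self.conv(x)))
        # BN's learned scale acts as a per-channel importance score.
        gamma = self.bn.weight.abs()
        k = max(1, int(self.keep_ratio * gamma.numel()))
        mask = torch.zeros_like(gamma)
        mask[gamma.topk(k).indices] = 1.0
        # Zero out the unselected channels, i.e., route around them.
        return h * mask.view(1, -1, 1, 1)

For example, RoutedConvBlock(3, 64)(torch.randn(8, 3, 32, 32)) computes all 64
channels but propagates only the 32 with the largest |gamma|.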
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of sensory pattern data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
arXiv Detail & Related papers (2021-06-08T21:09:55Z)
- The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)
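To see what this large-learning-rate phase looks like numerically, here is a
toy Python experiment on the two-parameter model f = u*v fit to a target of 1,
a common didactic setup assumed here for illustration rather than the paper's
exact model. Small step sizes converge to the nearby sharp minimum, while
larger ones first inflate the loss and then settle on a flatter one.

def run_gd(lr, steps=500):
    # Start near the minimum manifold u*v = 1 at a sharp point (u**2 + v**2 ~ 16).
    u, v = 4.0, 0.3
    for _ in range(steps):
        err = u * v - 1.0                          # residual of the fit
        u, v = u - lr * err * v, v - lr * err * u  # simultaneous GD update
    return 0.5 * (u * v - 1.0) ** 2, u ** 2 + v ** 2  # final loss, curvature proxy

for lr in (0.05, 0.10, 0.15, 0.20):
    loss, sharpness = run_gd(lr)
    print(f"lr={lr:.2f}  final_loss={loss:.2e}  sharpness~{sharpness:.1f}")

In this toy, the stability threshold of the linearized dynamics sits near
lr = 2/16 = 0.125: the two smaller rates retain the initial sharpness of about
16, while the two larger ones are catapulted to visibly flatter minima.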
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.