Fast On-Device Adaptation for Spiking Neural Networks via Online-Within-Online Meta-Learning
- URL: http://arxiv.org/abs/2103.03901v1
- Date: Sun, 21 Feb 2021 04:28:49 GMT
- Title: Fast On-Device Adaptation for Spiking Neural Networks via Online-Within-Online Meta-Learning
- Authors: Bleema Rosenfeld, Bipin Rajendran, Osvaldo Simeone
- Abstract summary: Spiking Neural Networks (SNNs) have recently gained popularity as machine learning models for on-device edge intelligence.
In this paper, we propose an online-within-online meta-learning rule for SNNs, termed OWOML-SNN, which enables lifelong learning on a stream of tasks.
- Score: 31.78005607111787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) have recently gained popularity as machine
learning models for on-device edge intelligence for applications such as mobile
healthcare management and natural language processing due to their low power
profile. In such highly personalized use cases, it is important for the model
to be able to adapt to the unique features of an individual with only a minimal
amount of training data. Meta-learning has been proposed as a way to train
models that are geared towards quick adaptation to new tasks. The few existing
meta-learning solutions for SNNs operate offline and require some form of
backpropagation that is incompatible with the current neuromorphic
edge devices. In this paper, we propose an online-within-online meta-learning
rule for SNNs, termed OWOML-SNN, which enables lifelong learning on a stream of
tasks and relies on local, backprop-free, nested updates.
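As a rough illustration of the nested update structure (not the paper's exact rule), the sketch below adapts task-specific fast weights in an inner online loop using a local, backprop-free stand-in rule, and nudges the shared initialization toward the adapted weights in an outer online loop, Reptile-style. All names and the delta-rule inner update are assumptions for illustration.

```python
import numpy as np

def local_update(w, x, y, lr):
    # Stand-in for a local, backprop-free SNN learning rule
    # (assumption: a simple delta rule on a single layer).
    pred = np.tanh(x @ w)
    return w + lr * np.outer(x, y - pred)

def owoml_sketch(task_stream, dim_in, dim_out, inner_lr=0.05, outer_lr=0.01):
    phi = np.zeros((dim_in, dim_out))   # shared meta-parameters
    for task in task_stream:            # outer loop: online over tasks
        w = phi.copy()                  # fast, within-task weights
        for x, y in task:               # inner loop: online over samples
            w = local_update(w, x, y, inner_lr)
        phi += outer_lr * (w - phi)     # online meta-update (Reptile-like)
    return phi
```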
Related papers
- Accelerating Training with Neuron Interaction and Nowcasting Networks [34.14695001650589]
Learnable update rules can be costly and unstable to train and use.
We propose NiNo to accelerate training based on weight nowcaster networks (WNNs).
arXiv Detail & Related papers (2024-09-06T17:55:49Z)
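The weight-nowcasting idea in the NiNo entry above can be sketched without the learned nowcaster network: given a short history of weight checkpoints, extrapolate each parameter forward to skip optimizer steps. The linear extrapolation below is a simplifying assumption standing in for the WNN.

```python
import torch

def nowcast_weights(history, horizon=5):
    # Predict weights `horizon` steps ahead by per-parameter linear
    # extrapolation over the k most recent checkpoints; a learned WNN
    # would replace this closed-form rule (simplifying assumption).
    k = len(history)
    steps = torch.arange(k, dtype=torch.float32)
    flat = torch.stack([w.flatten() for w in history])     # (k, n)
    t_mean, w_mean = steps.mean(), flat.mean(dim=0)
    slope = ((steps - t_mean)[:, None] * (flat - w_mean)).sum(0) \
            / ((steps - t_mean) ** 2).sum()
    return (w_mean + slope * (k - 1 + horizon - t_mean)).view_as(history[-1])
```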
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
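snnTorch, named in the entry above, is a real PyTorch-based package. Below is a minimal, hardware-agnostic usage sketch of its leaky integrate-and-fire layer, assuming the IPU-optimized release keeps the standard interface.

```python
import torch
import snntorch as snn

# A single leaky integrate-and-fire (LIF) layer simulated over time.
lif = snn.Leaky(beta=0.9)          # beta: membrane decay per time step
mem = lif.init_leaky()             # initial membrane potential

inputs = torch.rand(100, 32)       # 100 time steps, 32 input currents
spikes = []
for t in range(inputs.size(0)):
    spk, mem = lif(inputs[t], mem) # spike when membrane crosses threshold
    spikes.append(spk)
spikes = torch.stack(spikes)       # (time, neurons) binary spike train
```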
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
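A rough sketch of the forward-in-time idea behind the OTTT entry above (not the paper's exact derivation): keep a leaky eligibility trace of presynaptic activity and combine it with an instantaneous error signal, so no backward unroll through time is needed.

```python
import torch

def ottt_step(w, trace, x_t, err_t, decay=0.9, lr=1e-3):
    # Forward-in-time update: the trace is a leaky running sum of
    # presynaptic activity, so the weight update at time t needs
    # only local quantities plus the instantaneous error err_t.
    trace = decay * trace + x_t                 # eligibility trace, shape (in,)
    w = w - lr * torch.outer(err_t, trace)      # local outer-product update
    return w, trace
```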
- A Spiking Neural Network Structure Implementing Reinforcement Learning [0.0]
In the present paper, I describe an SNN structure which, seemingly, can be used in a wide range of reinforcement learning tasks.
The SNN structure considered in the paper includes spiking neurons described by a generalization of the LIFAT (leaky integrate-and-fire neuron with adaptive threshold) model.
My concept is based on very general assumptions about RL task characteristics and has no visible limitations on its applicability.
arXiv Detail & Related papers (2022-04-09T09:08:10Z)
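The LIFAT model referenced in the entry above has standard dynamics that are easy to sketch: a leaky membrane plus a threshold that jumps at each spike and decays back to rest. The paper uses a generalization of this; parameter values below are illustrative.

```python
def lifat_step(v, theta, i_t, tau_v=20.0, tau_th=100.0,
               theta0=1.0, dtheta=0.2, dt=1.0):
    # One step of a leaky integrate-and-fire neuron with adaptive
    # threshold: each spike raises the threshold, which then decays
    # back toward its resting value theta0.
    v = v + dt * (-v / tau_v + i_t)             # leaky integration
    spike = v >= theta
    if spike:
        v = 0.0                                  # reset membrane
        theta += dtheta                          # raise threshold
    theta += dt * (theta0 - theta) / tau_th      # threshold decay
    return v, theta, spike
```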
- Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It includes a built-in memory retention mechanism that lets the model retain knowledge about objects seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z)
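The recursive least-squares estimator in the tracking entry above follows the textbook update with a forgetting factor; here is a sketch for a linear predictor adapted online from one sample at a time.

```python
import numpy as np

def rls_update(w, P, x, y, lam=0.99):
    # Standard RLS step with forgetting factor lam: update weights w
    # and inverse-covariance estimate P from one sample (x, y).
    # P is typically initialized to a large multiple of the identity.
    Px = P @ x
    k = Px / (lam + x @ Px)                     # gain vector
    err = y - w @ x                             # a-priori error
    w = w + k * err
    P = (P - np.outer(k, Px)) / lam
    return w, P
```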
- In-Hardware Learning of Multilayer Spiking Neural Networks on a Neuromorphic Processor [6.816315761266531]
This work presents a spike-based backpropagation algorithm with biologically plausible local update rules and adapts it to fit the constraints of neuromorphic hardware.
The algorithm is implemented on the Intel Loihi chip, enabling low-power in-hardware supervised online learning of multilayer SNNs for mobile applications.
arXiv Detail & Related papers (2021-05-08T09:22:21Z)
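A hedged sketch of the kind of local, three-factor update that fits neuromorphic on-chip constraints like Loihi's; this is illustrative, not the paper's exact spike-based backpropagation rule.

```python
import numpy as np

def update_trace(trace, pre_spikes, decay=0.9):
    # Leaky presynaptic trace maintained locally at each synapse.
    return decay * trace + pre_spikes

def local_synapse_update(w, pre_trace, post_spikes, error, lr=1e-3):
    # Three-factor rule: presynaptic trace x postsynaptic spiking x
    # a locally delivered error signal; no global backward pass.
    # Shapes: w (out, in), pre_trace (in,), post_spikes/error (out,).
    return w + lr * np.outer(error * post_spikes, pre_trace)
```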
- NL-CNN: A Resources-Constrained Deep Learning Model based on Nonlinear Convolution [0.0]
A novel convolutional neural network model, abbreviated NL-CNN, is proposed, in which nonlinear convolution is emulated by a cascade of convolution + nonlinearity layers.
Performance evaluation on several widely known datasets is provided, highlighting the model's relevant features.
arXiv Detail & Related papers (2021-01-30T13:38:42Z)
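One plausible reading of the NL-CNN abstract above is a block that replaces a single expensive nonlinear convolution with a cascade of cheap conv + nonlinearity stages; a PyTorch sketch follows, with layer sizes and stage count as assumptions.

```python
import torch.nn as nn

class NLConvBlock(nn.Module):
    # Emulates a nonlinear convolution with a cascade of cheap
    # conv + nonlinearity stages (interpretation of the NL-CNN idea).
    def __init__(self, ch_in, ch_out, stages=2):
        super().__init__()
        layers, ch = [], ch_in
        for _ in range(stages):
            layers += [nn.Conv2d(ch, ch_out, 3, padding=1), nn.ReLU()]
            ch = ch_out
        self.cascade = nn.Sequential(*layers)

    def forward(self, x):
        return self.cascade(x)
```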
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
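A sketch of how meta-learning over topologies could look for the OPF entry above, using a generic MAML-style step in PyTorch. Each task is one grid topology with support/query data; this is an illustration, not the paper's training pipeline.

```python
import torch

def maml_step(params, tasks, loss_fn, inner_lr=0.01, outer_lr=1e-3):
    # One MAML-style meta-update. `params` is a list of leaf tensors
    # with requires_grad=True; each task is ((x_s, y_s), (x_q, y_q)),
    # e.g. OPF samples drawn under a single grid topology.
    meta_loss = 0.0
    for support, query in tasks:
        inner_loss = loss_fn(params, *support)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        meta_loss = meta_loss + loss_fn(adapted, *query)
    meta_grads = torch.autograd.grad(meta_loss, params)
    with torch.no_grad():
        for p, g in zip(params, meta_grads):
            p -= outer_lr * g
    return params
```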
- Sparse Meta Networks for Sequential Adaptation and its Application to Adaptive Language Modelling [7.859988850911321]
We introduce Sparse Meta Networks -- a meta-learning approach to learn online sequential adaptation algorithms for deep neural networks.
We augment a deep neural network with a layer-specific fast-weight memory.
We demonstrate strong performance on a variety of sequential adaptation scenarios.
arXiv Detail & Related papers (2020-09-03T17:06:52Z)
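The layer-specific fast-weight memory in the entry above can be sketched as a linear layer whose output mixes slow weights with a Hebbian, online-written fast matrix. The paper's sparse update scheme is not reproduced here; the write rule below is an assumption.

```python
import torch

class FastWeightLayer(torch.nn.Module):
    # Linear layer with slow weights plus a fast-weight memory that
    # is written online from the layer's own activity.
    def __init__(self, d_in, d_out, decay=0.95):
        super().__init__()
        self.slow = torch.nn.Linear(d_in, d_out)
        self.register_buffer("fast", torch.zeros(d_out, d_in))
        self.decay = decay

    def forward(self, x):                # x: (batch, d_in)
        y = self.slow(x) + x @ self.fast.t()
        # Hebbian-style online write to the fast-weight memory.
        self.fast = self.decay * self.fast + torch.outer(
            y.detach().mean(0), x.detach().mean(0))
        return y
```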
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
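Tandem-style ANN-to-SNN frameworks like the one above build on rate coding; here is a minimal sketch of the basic mapping between clipped ReLU activations and spike counts over a time window (the conversion details in the paper go beyond this).

```python
import torch

def relu_to_spike_count(activations, t_window=32):
    # Rate coding: a normalized activation in [0, 1] is read as the
    # fraction of time steps on which the neuron fires.
    rate = activations.clamp(min=0.0, max=1.0)
    return torch.round(rate * t_window)

def rate_to_spike_train(rate, t_window=32):
    # Bernoulli spike train whose per-step firing probability
    # matches the target rate.
    return (torch.rand(t_window, *rate.shape) < rate).float()
```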
- The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding [97.85957811603251]
We present MT-DNN, an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models.
Built upon PyTorch and Transformers, MT-DNN is designed to facilitate rapid customization for a broad spectrum of NLU tasks.
A unique feature of MT-DNN is its built-in support for robust and transferable learning using the adversarial multi-task learning paradigm.
arXiv Detail & Related papers (2020-02-19T03:05:28Z)
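The shared-encoder/task-head pattern underlying multi-task NLU toolkits such as MT-DNN can be sketched as follows; this is illustrative PyTorch only, not MT-DNN's actual API.

```python
import torch.nn as nn

class MultiTaskModel(nn.Module):
    # Shared encoder with one classification head per task: the basic
    # pattern behind multi-task NLU training (illustrative only).
    def __init__(self, encoder, hidden, task_sizes):
        super().__init__()
        self.encoder = encoder           # any module mapping input -> (batch, hidden)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n) for name, n in task_sizes.items()})

    def forward(self, x, task):
        return self.heads[task](self.encoder(x))
```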
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.