Towards Efficient Processing and Learning with Spikes: New Approaches
for Multi-Spike Learning
- URL: http://arxiv.org/abs/2005.00723v1
- Date: Sat, 2 May 2020 06:41:20 GMT
- Title: Towards Efficient Processing and Learning with Spikes: New Approaches
for Multi-Spike Learning
- Authors: Qiang Yu, Shenglan Li, Huajin Tang, Longbiao Wang, Jianwu Dang, Kay
Chen Tan
- Abstract summary: We propose two new multi-spike learning rules which demonstrate better performance than other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
- Score: 59.249322621035056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spikes are the currency of information transmission and processing
in central nervous systems. They are also believed to play an essential role in
the low power consumption of biological systems, whose efficiency attracts
increasing attention in the field of neuromorphic computing. However, efficient
processing and learning with discrete spikes remains a challenging problem. In
this paper, we make contributions towards this direction. A simplified spiking
neuron model is first introduced, with the effects of both synaptic input and
firing output on the membrane potential modeled with an impulse function. An
event-driven scheme is then presented to further improve processing efficiency.
Based on this neuron model, we propose two new multi-spike learning rules which
demonstrate better performance than other baselines on various tasks, including
association, classification, and feature detection. In addition to efficiency,
our learning rules demonstrate high robustness against strong noise of
different types. They also generalize to different spike coding schemes for the
classification task; notably, a single neuron can solve multi-category
classification with our learning rules. In the feature detection task, we
re-examine the ability of unsupervised STDP, present its limitations, and
identify a new phenomenon of losing selectivity. In contrast, our proposed
learning rules reliably solve the task over a wide range of conditions without
specific constraints being applied. Moreover, our rules not only detect
features but also discriminate them. This improved performance would make our
methods a preferable choice for neuromorphic computing.
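The neuron model and event-driven scheme described above can be illustrated with a short sketch. The following is a minimal, hedged reading of the abstract, not the authors' implementation: it assumes a leaky membrane whose potential decays exponentially with time constant tau, where each input spike adds an impulse equal to its synaptic weight, each output spike subtracts the firing threshold, and the state is updated only at spike events.

```python
import math

def event_driven_response(spikes, tau=20.0, theta=1.0):
    """Event-driven simulation of a simplified spiking neuron.

    spikes: time-sorted list of (time_ms, weight) input events.
    Between events the membrane potential only decays exponentially,
    so the state is updated at spike arrivals alone. Each input adds
    an impulse equal to its weight; each output spike subtracts the
    threshold theta (an impulse-style reset).
    Returns the list of output spike times.
    """
    v, t_prev, out = 0.0, 0.0, []
    for t, w in spikes:
        v *= math.exp(-(t - t_prev) / tau)  # decay since the last event
        v += w                              # impulse from the input spike
        if v >= theta:                      # fire, then reset by -theta
            out.append(t)
            v -= theta
        t_prev = t
    return out

# Example: three excitatory inputs drive the neuron past threshold once.
print(event_driven_response([(1.0, 0.6), (3.0, 0.5), (4.0, 0.4)]))  # [3.0]
```

Because nothing is computed between events, the cost scales with the number of spikes rather than with the number of simulation time steps, which is where the processing-efficiency gain of an event-driven scheme comes from.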
Related papers
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to the reasoning limitations of purely neural approaches is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities (a toy sketch of the Hebbian part follows the entry).
Our method achieves continual learning for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
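As a rough illustration of how plain Hebbian learning can extract a principal direction of neural activity, here is a textbook Oja's-rule sketch; this is a stand-in under stated assumptions, not the paper's recurrent lateral-connection method or its anti-Hebbian component.

```python
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean data whose leading principal axis is close to (1, 1) / sqrt(2).
x = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.5], [1.5, 2.0]])

w = rng.normal(size=2)
eta = 1e-3
for sample in x:
    y = w @ sample                   # neural activity on this input
    w += eta * y * (sample - y * w)  # Oja's rule: Hebbian term plus decay

print(w / np.linalg.norm(w))  # approaches the leading covariance eigenvector
```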
- Neural Routing in Meta Learning [9.070747377130472]
We aim to improve the model performance of the current meta learning algorithms by selectively using only parts of the model conditioned on the input tasks.
In this work, we describe an approach that investigates task-dependent dynamic neuron selection in deep convolutional neural networks (CNNs) by leveraging the scaling factor in the batch normalization layer.
We find that the proposed approach, neural routing in meta learning (NRML), outperforms one of the well-known existing meta learning baselines on few-shot classification tasks; a simplified sketch of the scaling-factor idea follows the entry.
arXiv Detail & Related papers (2022-10-14T16:31:24Z)
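A hedged sketch of the scaling-factor idea: rank channels by the magnitude of the batch-normalization scale and keep only the strongest ones for the current task. The keep_ratio and the hard top-k selection here are illustrative assumptions, not NRML's actual routing procedure.

```python
import numpy as np

def route_channels(gamma, keep_ratio=0.5):
    """Keep the channels whose batch-norm scale |gamma| is largest.

    gamma: per-channel scale factors from a batch-normalization layer.
    Channels with small |gamma| contribute little to the layer output,
    so they can be skipped for the current task (illustrative heuristic;
    keep_ratio is an assumption, not a value from the paper).
    """
    k = max(1, int(len(gamma) * keep_ratio))
    keep = np.argsort(-np.abs(gamma))[:k]
    mask = np.zeros(len(gamma), dtype=bool)
    mask[keep] = True
    return mask

gamma = np.array([0.9, 0.05, 0.4, 0.01, 0.7, 0.3])
print(route_channels(gamma))  # [ True False  True False  True False]
```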
- An Adaptive Contrastive Learning Model for Spike Sorting [12.043679000694258]
In neuroscience research, it is important to separate out the activity of individual neurons.
With the development of large-scale silicon technology, artificially interpreting and labeling spikes is becoming increasingly impractical.
We propose a novel modeling framework that learns representations from spikes through contrastive learning.
arXiv Detail & Related papers (2022-05-24T09:18:46Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Identifying Learning Rules From Neural Network Observables [26.96375335939315]
We show that different classes of learning rules can be separated solely on the basis of aggregate statistics of the weights, activations, or instantaneous layer-wise activity changes.
Our results suggest that activation patterns, available from electrophysiological recordings of post-synaptic activities, may provide a good basis on which to identify learning rules.
arXiv Detail & Related papers (2020-10-22T14:36:54Z)
- Advantages of biologically-inspired adaptive neural activation in RNNs during learning [10.357949759642816]
We introduce a novel parametric family of nonlinear activation functions inspired by input-frequency response curves of biological neurons.
We find that activation adaptation provides distinct task-specific solutions and, in some cases, improves both learning speed and performance; a loose sketch of such a parametric activation follows the entry.
arXiv Detail & Related papers (2020-06-22T13:49:52Z)
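As a loose illustration of a biologically-inspired parametric activation for the entry above: the specific tanh-of-rectified form, and the gain, threshold, and saturation parameters, are assumptions for illustration, not the paper's actual family of functions.

```python
import numpy as np

def adaptive_activation(x, gain=1.0, threshold=0.0, saturation=1.0):
    """Parametric activation loosely shaped like a neural f-I curve:
    near-zero below threshold, then a saturating rise. The gain,
    threshold, and saturation would be learned per neuron during
    training (the functional form here is an illustrative assumption).
    """
    return saturation * np.tanh(np.maximum(0.0, gain * (x - threshold)))

x = np.linspace(-2, 2, 5)
print(adaptive_activation(x, gain=2.0, threshold=0.5))
```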
- Synaptic Learning with Augmented Spikes [14.76595318993715]
With a more brain-like processing paradigm, spiking neurons are more promising for improvements in efficiency and computational capability.
We introduce a concept of augmented spikes that carry complementary information with spike coefficients in addition to spike latencies.
A new augmented spiking neuron model and synaptic learning rules are proposed to process and learn patterns of augmented spikes (a minimal sketch follows the entry).
arXiv Detail & Related papers (2020-05-11T01:00:23Z)
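Reusing the event-driven sketch from the abstract above, an augmented spike could be modeled as a (time, synapse, coefficient) triple whose impulse on the membrane is the synaptic weight scaled by the coefficient; this is a hedged reading of the summary, not the paper's exact formulation.

```python
import math

def augmented_response(events, weights, tau=20.0, theta=1.0):
    """Event-driven response of a neuron to augmented spikes.

    events: time-sorted list of (time_ms, synapse_index, coefficient).
    Each augmented spike carries a coefficient alongside its latency, so
    its impulse on the membrane is weight * coefficient instead of the
    plain weight used for ordinary binary spikes.
    """
    v, t_prev, out = 0.0, 0.0, []
    for t, i, c in events:
        v = v * math.exp(-(t - t_prev) / tau) + weights[i] * c
        if v >= theta:  # fire, then reset by -theta as before
            out.append(t)
            v -= theta
        t_prev = t
    return out

# Two augmented inputs; the second pushes the neuron past threshold.
print(augmented_response([(1.0, 0, 1.5), (2.0, 1, 0.8)], weights=[0.5, 0.6]))
```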