A Deep Unsupervised Feature Learning Spiking Neural Network with
Binarized Classification Layers for EMNIST Classification using SpykeFlow
- URL: http://arxiv.org/abs/2002.11843v4
- Date: Wed, 28 Oct 2020 12:54:03 GMT
- Authors: Ruthvik Vaila, John Chiasson, Vishal Saxena
- Abstract summary: The unsupervised learning technique of spike timing dependent plasticity (STDP) with binary activations is used to extract features from spiking input data.
The accuracies obtained for the balanced EMNIST data set compare favorably with other approaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: End user AI is trained on large server farms with data collected from the
users. With the ever-increasing demand for IoT devices, there is a need for deep
learning approaches that can be implemented (at the edge) in an energy
efficient manner. In this work we approach this using spiking neural networks.
The unsupervised learning technique of spike timing dependent plasticity (STDP)
with binary activations is used to extract features from spiking input data.
Gradient descent (backpropagation) is used only on the output layer to perform
the training for classification. The accuracies obtained for the balanced
EMNIST data set compare favorably with other approaches. The effect of
stochastic gradient descent (SGD) approximations on the learning capabilities of
our network is also explored.
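The pipeline the abstract describes (STDP feature extraction on binary spike activations, with gradient descent confined to the output layer) can be illustrated with a minimal simplified STDP update. The function below is an illustrative sketch, not SpykeFlow's actual API; the learning-rate constants and function name are hypothetical, and the multiplicative `w*(1-w)` term is one common way to keep weights bounded in [0, 1].

```python
import numpy as np

def stdp_update(weights, pre_times, post_time, a_plus=0.004, a_minus=0.003):
    """Simplified STDP: potentiate synapses whose presynaptic spike
    preceded (or coincided with) the postsynaptic spike, depress the rest.
    The w*(1-w) factor keeps each weight inside [0, 1]."""
    causal = pre_times <= post_time  # which presynaptic spikes arrived in time
    dw = np.where(causal,
                  a_plus * weights * (1.0 - weights),
                  -a_minus * weights * (1.0 - weights))
    return np.clip(weights + dw, 0.0, 1.0)
```

Because the rule depends only on spike order, not on a global error signal, it needs no labels; classification labels enter only through the backpropagation step at the output layer.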
Related papers
- Assessing Neural Network Representations During Training Using
Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z) - Convolutional Neural Networks for the classification of glitches in
gravitational-wave data streams [52.77024349608834]
We classify transient noise signals (i.e., glitches) and gravitational waves in data from the Advanced LIGO detectors.
We use models with a supervised learning approach, trained from scratch on the Gravity Spy dataset.
We also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels.
arXiv Detail & Related papers (2023-03-24T11:12:37Z) - Backdoor Attack Detection in Computer Vision by Applying Matrix
Factorization on the Weights of Deep Networks [6.44397009982949]
We introduce a novel method for backdoor detection that extracts features from the weights of pre-trained DNNs.
In comparison to other detection techniques, this has a number of benefits, such as not requiring any training data.
Our method outperforms the competing algorithms in terms of efficiency and is more accurate, helping to ensure the safe application of deep learning and AI.
arXiv Detail & Related papers (2022-12-15T20:20:18Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
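The spiking dynamics that packages such as snnTorch simulate can be sketched with a discrete-time leaky integrate-and-fire (LIF) step. This is a generic textbook model for illustration, not snnTorch's implementation; the decay factor `beta`, the threshold, and the soft-reset-by-subtraction choice are all assumptions.

```python
import numpy as np

def lif_step(v, input_current, beta=0.9, threshold=1.0):
    """One discrete-time LIF step: decay the membrane potential,
    integrate the input, then spike and soft-reset at threshold."""
    v = beta * v + input_current
    spike = (v >= threshold).astype(float)
    v = v - spike * threshold  # subtract threshold on spike (soft reset)
    return spike, v
```

The sparse, event-driven nature of such updates (most timesteps produce no spike) is what underlies the energy and latency advantages cited above.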
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Deep Features for CBIR with Scarce Data using Hebbian Learning [17.57322804741561]
We study the performance of biologically inspired Hebbian learning algorithms in the development of feature extractors for Content Based Image Retrieval (CBIR) tasks.
Specifically, we consider a semi-supervised learning strategy in two steps: first, an unsupervised pre-training stage; second, the network is fine-tuned on the image dataset.
arXiv Detail & Related papers (2022-05-18T14:00:54Z) - Learning Bayesian Sparse Networks with Full Experience Replay for
Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z) - Unsupervised Deep Learning by Injecting Low-Rank and Sparse Priors [5.5586788751870175]
We focus on employing sparsity-inducing priors in deep learning to encourage the network to concisely capture the nature of high-dimensional data.
We demonstrate unsupervised learning of U-Net for background subtraction using low-rank and sparse priors.
arXiv Detail & Related papers (2021-06-21T08:41:02Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z) - Improving STDP-based Visual Feature Learning with Whitening [1.9981375888949475]
In this paper, we propose to use whitening as a pre-processing step before learning features with STDP.
Experiments on CIFAR-10 show that whitening allows STDP to learn visual features that are closer to the ones learned with standard neural networks.
We also propose an approximation of whitening as convolution kernels that is computationally cheaper to learn and more suited to be implemented on neuromorphic hardware.
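The whitening pre-processing step that this entry proposes before STDP can be sketched as a standard ZCA transform, which decorrelates the inputs so the learning rule sees contrast and edge structure rather than raw intensity correlations. This is a generic illustration, not the paper's cheaper convolution-kernel approximation; the `eps` regularizer is a hypothetical numerical safeguard.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening: center the data, then rotate-scale-rotate back
    so the output covariance is (approximately) the identity."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)           # eigendecomposition of the covariance
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W
```

Unlike PCA whitening, the ZCA variant keeps the output as close as possible to the original data, which is why it is a common choice when the whitened result should still look image-like.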
arXiv Detail & Related papers (2020-02-24T11:48:22Z) - Biologically-Motivated Deep Learning Method using Hierarchical
Competitive Learning [0.0]
I propose to introduce unsupervised competitive learning which only requires forward propagating signals as a pre-training method for CNNs.
The proposed method could be useful for a variety of poorly labeled data, for example, time series or medical data.
arXiv Detail & Related papers (2020-01-04T20:07:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.