Spiking-GAN: A Spiking Generative Adversarial Network Using
Time-To-First-Spike Coding
- URL: http://arxiv.org/abs/2106.15420v1
- Date: Tue, 29 Jun 2021 13:43:07 GMT
- Title: Spiking-GAN: A Spiking Generative Adversarial Network Using
Time-To-First-Spike Coding
- Authors: Vineet Kotariya, Udayan Ganguly
- Abstract summary: We propose Spiking-GAN, the first spike-based Generative Adversarial Network (GAN).
It employs a temporal coding scheme called time-to-first-spike (TTFS) coding.
Our modified temporal loss function called 'Aggressive TTFS' improves the inference time of the network by over 33% and reduces the number of spikes in the network by more than 11% compared to previous works.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) have shown great potential in solving deep
learning problems in an energy-efficient manner. However, they are still
limited to simple classification tasks. In this paper, we propose Spiking-GAN,
the first spike-based Generative Adversarial Network (GAN). It employs a temporal
coding scheme called time-to-first-spike (TTFS) coding. We train it using
approximate backpropagation in the temporal domain. We use simple
integrate-and-fire (IF) neurons with a very long refractory period, which
ensures at most one spike per neuron. This makes the model
much sparser than a spike rate-based system. Our modified temporal loss
function called 'Aggressive TTFS' improves the inference time of the network by
over 33% and reduces the number of spikes in the network by more than 11%
compared to previous works. Our experiments show that training the network on
the MNIST dataset with this approach generates high-quality samples, thereby
demonstrating the potential of this framework for solving such problems in the
spiking domain.
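The abstract describes the mechanism only in words, so a minimal sketch may help: the code below shows how time-to-first-spike coding and single-spike IF neurons fit together. The time window, threshold, encoder, and layer-simulation helper are illustrative assumptions, not the authors' implementation; the adversarial training loop and the 'Aggressive TTFS' loss are omitted.

```python
# Minimal sketch (not the authors' code): TTFS coding with integrate-and-fire (IF)
# neurons whose refractory period exceeds the simulation window, so each neuron
# fires at most once. All names and parameters below are illustrative assumptions.
import numpy as np

T = 100          # discrete time steps in the coding window (assumed)
V_THRESH = 1.0   # IF firing threshold (assumed)

def ttfs_encode(x, t_steps=T):
    """Map an intensity in [0, 1] to a spike time: brighter input -> earlier spike."""
    x = np.clip(x, 0.0, 1.0)
    return np.round((1.0 - x) * (t_steps - 1)).astype(int)

def if_layer_first_spikes(spike_times_in, weights, t_steps=T, v_thresh=V_THRESH):
    """Simulate one IF layer and return each output neuron's first spike time
    (t_steps means 'never fired'). The long refractory period is modelled by
    recording only the first threshold crossing of each neuron."""
    n_out = weights.shape[1]
    v = np.zeros(n_out)
    first_spike = np.full(n_out, t_steps)
    for t in range(t_steps):
        incoming = (spike_times_in == t).astype(float)  # input spikes arriving now
        v += incoming @ weights                         # integrate (no leak)
        newly_fired = (v >= v_thresh) & (first_spike == t_steps)
        first_spike[newly_fired] = t                    # first (and only) spike
    return first_spike

# Usage: encode a flattened image and propagate it through one random layer.
rng = np.random.default_rng(0)
pixels = rng.random(784)                  # stand-in for a 28x28 MNIST image
in_spikes = ttfs_encode(pixels)
w = rng.normal(0.0, 0.1, size=(784, 128))
out_spikes = if_layer_first_spikes(in_spikes, w)
```

Because each neuron fires at most once, network activity is a set of spike times rather than firing rates, which is what makes this single-spike regime so much sparser than rate coding.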
Related papers
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks
with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z) - Sparse-firing regularization methods for spiking neural networks with
time-to-first spike coding [2.5234156040689237]
We propose two spike timing-based sparse-firing (SSR) regularization methods to further reduce the firing frequency of TTFS-coded neural networks.
The effects of these regularization methods were investigated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets.
arXiv Detail & Related papers (2023-07-24T11:55:49Z) - Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK [86.45209429863858]
We study training one-hidden-layer ReLU networks in the neural tangent kernel (NTK) regime.
We show that the neural networks possess a different limiting kernel, which we call the bias-generalized NTK.
We also study various properties of the neural networks with this new kernel.
arXiv Detail & Related papers (2023-01-01T02:11:39Z) - Desire Backpropagation: A Lightweight Training Algorithm for Multi-Layer
Spiking Neural Networks based on Spike-Timing-Dependent Plasticity [13.384228628766236]
Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks.
We present desire backpropagation, a method to derive the desired spike activity of all neurons, including the hidden ones.
We trained three-layer networks to classify MNIST and Fashion-MNIST images and reached an accuracy of 98.41% and 87.56%, respectively.
arXiv Detail & Related papers (2022-11-10T08:32:13Z) - A temporally and spatially local spike-based backpropagation algorithm
to enable training in hardware [0.0]
Spiking Neural Networks (SNNs) have emerged as a hardware efficient architecture for classification tasks.
There have been several attempts to adopt the powerful backpropagation (BP) technique used in non-spiking artificial neural networks (ANNs).
arXiv Detail & Related papers (2022-07-20T08:57:53Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Supervised Learning in Temporally-Coded Spiking Neural Networks with
Approximate Backpropagation [0.021506382989223777]
We propose a new supervised learning method for temporally-encoded multilayer spiking networks to perform classification.
The method employs a reinforcement signal that mimics backpropagation but is far less computationally intensive.
In simulated MNIST handwritten digit classification, two-layer networks trained with this rule matched the performance of a comparable backpropagation based non-spiking network.
arXiv Detail & Related papers (2020-07-27T03:39:49Z) - Coarse scale representation of spiking neural networks: backpropagation
through spikes and application to neuromorphic hardware [0.0]
We explore recurrent representations of leaky integrate-and-fire neurons operating at a timescale equal to their absolute refractory period.
We find that the recurrent model leads to high classification accuracy using spike trains only four steps long during training.
We also observed good transfer back to continuous implementations of leaky integrate-and-fire neurons.
arXiv Detail & Related papers (2020-07-13T04:02:35Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with nonconvexity renders learning susceptible to novel problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.