Optimized spiking neurons classify images with high accuracy through
temporal coding with two spikes
- URL: http://arxiv.org/abs/2002.00860v4
- Date: Tue, 26 Jan 2021 07:57:22 GMT
- Title: Optimized spiking neurons classify images with high accuracy through
temporal coding with two spikes
- Authors: Christoph Stöckl and Wolfgang Maass
- Abstract summary: Spike-based neuromorphic hardware promises to reduce the energy consumption of image classification and other deep learning applications.
Previous methods for converting trained artificial neural networks to spiking neurons were inefficient because the neurons had to emit too many spikes.
We show that a substantially more efficient conversion arises when one optimizes the spiking neuron model for that purpose.
- Score: 1.7767466724342065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spike-based neuromorphic hardware promises to reduce the energy consumption
of image classification and other deep learning applications, particularly on
mobile phones or other edge devices. However, direct training of deep spiking
neural networks is difficult, and previous methods for converting trained
artificial neural networks to spiking neurons were inefficient because the
neurons had to emit too many spikes. We show that a substantially more
efficient conversion arises when one optimizes the spiking neuron model for
that purpose, so that not only the number of spikes a neuron emits matters for
information transmission, but also their timing. This advances the
accuracy that can be achieved for image classification with spiking neurons,
and the resulting networks need on average just two spikes per neuron for
classifying an image. In addition, our new conversion method improves latency
and throughput of the resulting spiking networks.
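To make the temporal-coding idea concrete, the following minimal Python sketch shows one way a value can be communicated through spike timing: a spike at step t contributes a weight that halves with each step, so earlier spikes carry more information. This is an illustration of the general principle under an assumed binary-expansion weighting, not the authors' exact optimized neuron model.

```python
import numpy as np

def encode_temporal(value, num_steps=8, v_max=1.0):
    """Encode a non-negative activation as spike times.

    A spike at step t contributes v_max * 2**-(t + 1), so earlier
    spikes carry more weight: the timing of spikes, not just their
    count, conveys the value (a binary-expansion-style code,
    assumed here for illustration).
    """
    spikes = np.zeros(num_steps, dtype=int)
    residual = min(max(value, 0.0), v_max)  # clamp like a bounded ReLU
    for t in range(num_steps):
        weight = v_max * 2.0 ** -(t + 1)
        if residual >= weight:  # threshold crossing -> spike
            spikes[t] = 1
            residual -= weight  # reset by subtraction
    return spikes

def decode_temporal(spikes, v_max=1.0):
    """Reconstruct the encoded value from the spike times."""
    weights = v_max * 2.0 ** -(np.arange(len(spikes)) + 1.0)
    return float(np.sum(spikes * weights))

s = encode_temporal(0.7)
print(s, decode_temporal(s))  # [1 0 1 1 0 0 1 1] -> ~0.699
```

Under such a code, a neuron with a small activation emits only a few spikes, which hints at how an average of two spikes per neuron can suffice.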
Related papers
- An experimental comparative study of backpropagation and alternatives for training binary neural networks for image classification [1.0749601922718608]
Binary neural networks promise to reduce the size of deep neural network models.
They may allow the deployment of more powerful models on edge devices.
However, binary neural networks have proven difficult to train with the standard backpropagation-based gradient descent scheme.
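A typical baseline in such comparisons is backpropagation with a straight-through estimator (STE), which binarizes weights in the forward pass while letting gradients pass through in the backward pass. A minimal PyTorch sketch of that baseline, offered as an illustration rather than the paper's exact setup:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)  # forward pass sees binarized (sign) weights

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass the gradient straight through, but zero it where
        # |w| > 1 so large weights stop drifting (standard STE clipping).
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(4, 3, requires_grad=True)
x = torch.randn(2, 4)
y = x @ BinarizeSTE.apply(w)  # forward uses binarized weights
y.sum().backward()            # backward uses the surrogate gradient
print(w.grad)
```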
arXiv Detail & Related papers (2024-08-08T13:39:09Z)
- When Spiking neural networks meet temporal attention image decoding and adaptive spiking neuron [7.478056407323783]
Spiking Neural Networks (SNNs) are capable of encoding and processing temporal information in a biologically plausible way.
We propose a novel method for image decoding based on temporal attention (TAID) and an adaptive Leaky-Integrate-and-Fire neuron model.
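The summary does not specify the adaptation rule, so the sketch below shows a generic adaptive-threshold leaky integrate-and-fire neuron, one common reading of "adaptive LIF": each spike raises the firing threshold, which then decays back, damping rapid repeated firing.

```python
def adaptive_lif(inputs, tau_mem=0.9, tau_adapt=0.95, v_th0=1.0, beta=0.5):
    """Leaky integrate-and-fire with an adaptive threshold.

    Illustrative dynamics only; the paper's exact neuron model may
    differ. Each spike adds beta to the threshold offset `a`, which
    decays back toward zero with time constant tau_adapt.
    """
    v, a, spikes = 0.0, 0.0, []
    for i in inputs:
        v = tau_mem * v + i            # leaky membrane integration
        s = int(v >= v_th0 + a)        # spike if above adaptive threshold
        v -= s * (v_th0 + a)           # soft reset by subtraction
        a = tau_adapt * a + beta * s   # raise threshold after a spike
        spikes.append(s)
    return spikes

print(adaptive_lif([0.6] * 6))  # e.g. [0, 1, 0, 0, 1, 0]
```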
arXiv Detail & Related papers (2024-06-05T08:21:55Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
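As a rough illustration of treating a network as a computational graph of parameters, the sketch below flattens an MLP into nodes (neurons, carrying their bias) and edges (connections, carrying their weight); the paper's actual graph construction and permutation-equivariant encoder are more involved, so take the feature choices here as assumptions.

```python
import numpy as np

def mlp_to_graph(weights, biases):
    """Turn an MLP's parameters into a (nodes, edges) graph.

    Nodes are neurons with their bias as a feature; edges are
    connections with the weight as a feature. Illustrative only.
    """
    layer_sizes = [weights[0].shape[0]] + [w.shape[1] for w in weights]
    starts = np.cumsum([0] + layer_sizes[:-1])
    nodes, edges = [], []
    for l, size in enumerate(layer_sizes):
        for j in range(size):
            bias = 0.0 if l == 0 else float(biases[l - 1][j])
            nodes.append({"id": int(starts[l] + j), "bias": bias})
    for l, w in enumerate(weights):
        for i in range(w.shape[0]):
            for j in range(w.shape[1]):
                edges.append((int(starts[l] + i),
                              int(starts[l + 1] + j),
                              float(w[i, j])))
    return nodes, edges

w1, b1 = np.random.randn(3, 4), np.random.randn(4)
w2, b2 = np.random.randn(4, 2), np.random.randn(2)
nodes, edges = mlp_to_graph([w1, w2], [b1, b2])
print(len(nodes), len(edges))  # 9 nodes, 20 edges
```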
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Smooth Exact Gradient Descent Learning in Spiking Neural Networks [0.0]
We demonstrate exact gradient descent learning based on spiking dynamics that change only continuously.
Our results show how non-disruptive learning is possible despite discrete spikes.
arXiv Detail & Related papers (2023-09-25T20:51:00Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
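The "multi-threshold" in the title suggests neurons that compare the membrane potential against several thresholds, so one timestep can carry a graded value instead of a binary spike, which helps information propagate with higher fidelity. A hypothetical sketch of such an activation (the paper's exact formulation may differ):

```python
import numpy as np

def multi_threshold_spike(v, thresholds=(0.25, 0.5, 1.0)):
    """Graded spike output from several firing thresholds.

    The largest threshold crossed by the membrane potential determines
    the spike value. Illustrative assumption, not necessarily the
    paper's exact neuron.
    """
    out = np.zeros_like(v)
    for th in sorted(thresholds):   # ascending, so larger crossings win
        out = np.where(v >= th, th, out)
    return out

v = np.array([0.1, 0.3, 0.7, 1.4])
print(multi_threshold_spike(v))     # [0.   0.25 0.5  1.  ]
```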
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- Surrogate Gradient Spiking Neural Networks as Encoders for Large Vocabulary Continuous Speech Recognition [91.39701446828144]
We show that spiking neural networks can be trained like standard recurrent neural networks using the surrogate gradient method.
They have shown promising results on speech command recognition tasks.
In contrast to their recurrent non-spiking counterparts, they show robustness to exploding gradient problems without the need to use gates.
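The trick behind surrogate gradient training is to keep the non-differentiable spike in the forward pass but substitute a smooth derivative in the backward pass. A minimal PyTorch sketch with a fast-sigmoid surrogate, one common choice (not necessarily the one used in the paper):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, smooth surrogate derivative backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()  # hard, non-differentiable spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Derivative of a fast sigmoid, used in place of the Dirac
        # delta that the true spike derivative would be.
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

v = torch.randn(5, requires_grad=True)
s = SurrogateSpike.apply(v - 1.0)  # spike where v crosses threshold 1.0
s.sum().backward()
print(v.grad)                      # nonzero despite the step function
```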
arXiv Detail & Related papers (2022-12-01T12:36:26Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Training Deep Spiking Auto-encoders without Bursting or Dying Neurons through Regularization [9.34612743192798]
Spiking neural networks are a promising approach towards next-generation models of the brain in computational neuroscience.
We apply end-to-end learning with membrane potential-based backpropagation to a spiking convolutional auto-encoder.
We show that applying regularization on membrane potential and spiking output successfully avoids both dead and bursting neurons.
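One plausible form of such regularization, sketched below, is an L2 penalty on membrane potentials plus a term pulling each neuron's mean firing rate toward a small target, so neurons neither go silent (dead) nor fire excessively (bursting). The paper's exact regularizers may differ.

```python
import torch

def activity_regularizer(v_mem, spikes, lambda_v=1e-3, lambda_s=1e-3,
                         target_rate=0.02):
    """Penalize extreme membrane potentials and off-target firing rates.

    v_mem, spikes: tensors of shape (time, batch, neurons).
    Illustrative assumption, not the paper's exact formulation.
    """
    reg_v = v_mem.pow(2).mean()                   # keep potentials bounded
    rates = spikes.float().mean(dim=(0, 1))       # per-neuron firing rate
    reg_s = (rates - target_rate).pow(2).mean()   # pull rates to target
    return lambda_v * reg_v + lambda_s * reg_s

v = torch.randn(20, 8, 64)                  # fake membrane traces
s = (torch.rand(20, 8, 64) < 0.05).float()  # fake spike trains
print(activity_regularizer(v, s))           # add this to the task loss
```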
arXiv Detail & Related papers (2021-09-22T21:27:40Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
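One simple way to quantify such diversity, used here purely as an illustration (the paper may define it differently), is one minus the mean pairwise cosine similarity of the hidden neurons' weight vectors: near-duplicate neurons drive the score toward zero.

```python
import numpy as np

def neuron_diversity(W):
    """Diversity of a hidden layer as 1 - mean pairwise cosine similarity.

    W has one row per neuron. Low scores mean many neurons compute
    near-duplicate functions. Illustrative metric only.
    """
    W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    sims = W @ W.T                           # pairwise cosine similarities
    n = W.shape[0]
    off_diag = sims[~np.eye(n, dtype=bool)]  # drop self-similarities
    return 1.0 - float(off_diag.mean())

W = np.random.randn(32, 100)  # 32 hidden neurons, 100 inputs each
print(neuron_diversity(W))    # near 1.0 for random, almost-orthogonal rows
```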
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
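One structural way to enforce sparsity on hidden neurons, shown below as a hypothetical sketch (the paper's actual mechanism may differ), is a top-k activation that keeps only the strongest responses per sample, so each input is represented by a small set of active neurons as in classical sparse coding.

```python
import torch

def topk_sparsify(h, k):
    """Zero all but the k largest-magnitude activations per sample.

    Illustrative structural sparsity constraint; not necessarily the
    mechanism used in the paper.
    """
    _, idx = h.abs().topk(k, dim=-1)                  # strongest responses
    mask = torch.zeros_like(h).scatter_(-1, idx, 1.0)
    return h * mask

h = torch.randn(2, 8)          # hidden activations for 2 samples
print(topk_sparsify(h, k=3))   # only 3 nonzero entries per row
```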
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.