Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with
Spiking Neural Networks
- URL: http://arxiv.org/abs/2308.10187v4
- Date: Fri, 22 Sep 2023 02:19:24 GMT
- Title: Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with
Spiking Neural Networks
- Authors: Mingxuan Liu, Jie Gan, Rui Wen, Tao Li, Yongli Chen, and Hong Chen
- Abstract summary: Spiking neural networks (SNNs) have tremendous potential for energy-efficient neuromorphic chips.
We propose a Spiking-Diffusion model, which is based on the vector quantized discrete diffusion model.
Experimental results on MNIST, FMNIST, KMNIST, Letters, and CIFAR-10 demonstrate that Spiking-Diffusion outperforms existing SNN-based generation models.
- Score: 13.586012318909907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) have tremendous potential for energy-efficient
neuromorphic chips due to their binary and event-driven architecture. SNNs have
been used primarily in classification tasks, but their use in image generation
remains largely unexplored. To fill this gap, we propose a Spiking-Diffusion model, which
is based on the vector quantized discrete diffusion model. First, we develop a
vector quantized variational autoencoder with SNNs (VQ-SVAE) to learn a
discrete latent space for images. In VQ-SVAE, image features are encoded using
both the spike firing rate and postsynaptic potential, and an adaptive spike
generator is designed to restore embedding features in the form of spike
trains. Next, we perform absorbing state diffusion in the discrete latent space
and construct a spiking diffusion image decoder (SDID) with SNNs to denoise the
image. Our work is the first to build a diffusion model entirely from SNN
layers. Experimental results on MNIST, FMNIST, KMNIST, Letters, and CIFAR-10
demonstrate that Spiking-Diffusion outperforms existing SNN-based generation
models. We achieve FIDs of 37.50, 91.98, 59.23, 67.41, and 120.5 on the above
datasets, respectively, reducing FID by 58.60%, 18.75%, 64.51%, 29.75%, and
44.88% relative to the state-of-the-art work. Our code will be available at
https://github.com/Arktis2022/Spiking-Diffusion.
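As a concrete illustration of the encoding step, the following minimal NumPy sketch (ours, not the authors' code) summarizes a binary spike train into the two quantities the abstract names: the spike firing rate (SFR) and the postsynaptic potential (PSP). The exponential PSP kernel and its time constant tau are illustrative assumptions.

    import numpy as np

    def encode_spike_train(spikes, tau=2.0):
        # spikes: (T, D) binary array of T timesteps over D feature channels.
        # Returns the per-channel spike firing rate and the final
        # postsynaptic potential under an assumed exponential-decay kernel.
        sfr = spikes.mean(axis=0)                       # fraction of steps that spiked
        psp = np.zeros(spikes.shape[1])
        for t in range(spikes.shape[0]):
            psp = psp * np.exp(-1.0 / tau) + spikes[t]  # leaky accumulation
        return sfr, psp

    rng = np.random.default_rng(0)
    s = (rng.random((8, 4)) < 0.3).astype(float)        # toy spike train, T=8, D=4
    sfr, psp = encode_spike_train(s)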
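The discrete latent space learned by VQ-SVAE can be pictured as standard vector quantization: each encoder feature vector is replaced by the index of its nearest codebook entry. Below is a sketch under that assumption; the feature shapes and codebook size are made up for illustration.

    import numpy as np

    def quantize(features, codebook):
        # features: (N, D) encoder outputs (e.g. built from SFR/PSP features);
        # codebook: (K, D) learned embeddings. Returns the discrete codes
        # and the corresponding quantized vectors.
        d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        codes = d2.argmin(axis=1)        # (N,) nearest-codebook indices
        return codes, codebook[codes]

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(16, 8))     # toy encoder features
    book = rng.normal(size=(32, 8))      # K=32 codes of dimension D=8
    codes, z_q = quantize(feats, book)

The adaptive spike generator described in the abstract would then restore the quantized embeddings as spike trains for the SNN decoder.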
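Absorbing-state diffusion then corrupts these discrete codes by progressively replacing them with a dedicated mask token, and the reverse model (the paper's SDID) is trained to recover the original codes. The linear masking schedule in this sketch is a common choice for absorbing diffusion, not necessarily the paper's exact schedule.

    import numpy as np

    def absorb(codes, t, T, mask_id, rng):
        # Forward absorbing-state step: by timestep t each code has been
        # replaced by the absorbing [MASK] token with probability t / T.
        keep = rng.random(codes.shape) >= t / T
        return np.where(keep, codes, mask_id)

    rng = np.random.default_rng(0)
    z0 = rng.integers(0, 32, size=(4, 4))   # clean 4x4 grid of latent codes
    zt = absorb(z0, t=5, T=10, mask_id=32)  # roughly half the codes masked
    # A denoiser (here, the SDID) would be trained to predict z0 from zt.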
Related papers
- Integer-Valued Training and Spike-Driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection [15.154553304520164]
Spiking Neural Networks (SNNs) have biological-plausibility and low-power advantages over Artificial Neural Networks (ANNs).
In this work, we focus on bridging the performance gap between ANNs and SNNs on object detection.
We design a SpikeYOLO architecture to solve this problem by simplifying the vanilla YOLO and incorporating meta SNN blocks.
arXiv Detail & Related papers (2024-07-30T10:04:16Z)
- SDiT: Spiking Diffusion Model with Transformer [1.7630597106970465]
Spiking neural networks (SNNs) offer low power consumption and biologically interpretable characteristics.
We utilize a transformer to replace the commonly used U-Net structure in mainstream diffusion models.
It can generate higher-quality images at relatively lower computational cost and with shorter sampling time.
arXiv Detail & Related papers (2024-02-18T13:42:11Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent must learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of the SNN and convert it into a continuous action space (i.e., the deterministic policy) through a fully connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Fully Spiking Denoising Diffusion Implicit Models [61.32076130121347]
Spiking neural networks (SNNs) have garnered considerable attention owing to their ability to run on neuromorphic devices at ultra-high speed.
We propose a novel approach, the fully spiking denoising diffusion implicit model (FSDDIM), to construct a diffusion model within SNNs.
We demonstrate that the proposed method outperforms the state-of-the-art fully spiking generative model.
arXiv Detail & Related papers (2023-12-04T09:07:09Z)
- ESVAE: An Efficient Spiking Variational Autoencoder with Reparameterizable Poisson Spiking Sampling [20.36674120648714]
Variational autoencoders (VAEs) are one of the most popular image generation models.
Current VAE methods implicitly construct the latent space with an elaborate autoregressive network.
We propose an efficient spiking variational autoencoder (ESVAE) that constructs an interpretable latent space distribution.
arXiv Detail & Related papers (2023-10-23T12:01:10Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- Spikingformer: Spike-driven Residual Learning for Transformer-based Spiking Neural Network [19.932683405796126]
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks.
However, SNNs suffer from non-spike computations caused by the structure of their residual connections.
We develop Spikingformer, a pure transformer-based spiking neural network.
arXiv Detail & Related papers (2023-04-24T09:44:24Z)
- Spikformer: When Spiking Neural Network Meets Transformer [102.91330530210037]
We consider two biologically plausible structures, the Spiking Neural Network (SNN) and the self-attention mechanism.
We propose a novel Spiking Self Attention (SSA) mechanism as well as a powerful framework, named Spiking Transformer (Spikformer).
arXiv Detail & Related papers (2022-09-29T14:16:49Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Fully Spiking Variational Autoencoder [66.58310094608002]
Spiking neural networks (SNNs) can be run on neuromorphic devices with ultra-high speed and ultra-low energy consumption.
In this study, we build a variational autoencoder (VAE) with SNNs to enable image generation.
arXiv Detail & Related papers (2021-09-26T06:10:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.