Brain-inspired Multilayer Perceptron with Spiking Neurons
- URL: http://arxiv.org/abs/2203.14679v1
- Date: Mon, 28 Mar 2022 12:21:47 GMT
- Title: Brain-inspired Multilayer Perceptron with Spiking Neurons
- Authors: Wenshuo Li, Hanting Chen, Jianyuan Guo, Ziyang Zhang, Yunhe Wang
- Abstract summary: The Spiking Neural Network (SNN) is the most famous brain-inspired neural network.
We introduce information communication mechanisms from brain-inspired neural networks into MLPs.
With LIF modules, our SNN-MLP models achieve 81.9%, 83.3% and 83.5% top-1 accuracy on the ImageNet dataset.
- Score: 41.600417794312506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the Multilayer Perceptron (MLP) has become a hotspot in the field of
computer vision. Without inductive bias, MLPs perform well on feature
extraction and achieve impressive results. However, due to the simplicity of their
structures, their performance highly depends on the local feature communication
mechanism. To further improve the performance of MLPs, we introduce information
communication mechanisms from brain-inspired neural networks. The Spiking Neural
Network (SNN) is the most famous brain-inspired neural network, and it has achieved
great success in dealing with sparse data. Leaky Integrate-and-Fire (LIF)
neurons in SNNs are used to communicate between different time steps. In this
paper, we incorporate the mechanism of LIF neurons into MLP models to
achieve better accuracy without extra FLOPs. We propose a full-precision LIF
operation to communicate between patches, including horizontal LIF and vertical
LIF in different directions. We also propose group LIF to extract better
local features. With LIF modules, our SNN-MLP models achieve 81.9%, 83.3% and
83.5% top-1 accuracy on the ImageNet dataset with only 4.4G, 8.5G and 15.2G FLOPs,
respectively, which are state-of-the-art results as far as we know.
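The abstract describes a full-precision LIF operation that communicates between patches along a spatial direction, treating successive patches like the time steps of a classical LIF neuron. The sketch below illustrates this idea only in broad strokes, not the paper's actual implementation: the function name, the leak constant `tau`, the threshold `v_th`, and the reset rule are all illustrative assumptions.

```python
import numpy as np

def lif_horizontal(x, tau=0.25, v_th=0.5):
    """Sketch of a full-precision horizontal LIF pass (hypothetical parameters).

    x: feature map of shape (H, W, C); patches along the width axis W are
    traversed like the time steps of an LIF neuron. Instead of emitting
    binary spikes, the full-precision membrane potential is kept as output
    wherever the threshold is crossed.
    """
    H, W, C = x.shape
    v = np.zeros((H, C))                 # membrane potential per row/channel
    out = np.zeros_like(x)
    for t in range(W):                   # "time" = horizontal patch index
        v = (1.0 - tau) * v + x[:, t, :] # leaky integration of the input
        fired = v >= v_th                # threshold check
        out[:, t, :] = np.where(fired, v, 0.0)  # full-precision output
        v = np.where(fired, 0.0, v)             # reset where fired
    return out
```

A vertical LIF would traverse the height axis instead, and a group LIF would apply the same recurrence within local groups of channels; only the iteration axis changes.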
Related papers
- When Spiking neural networks meet temporal attention image decoding and adaptive spiking neuron [7.478056407323783]
Spiking Neural Networks (SNNs) are capable of encoding and processing temporal information in a biologically plausible way.
We propose a novel method for image decoding based on temporal attention (TAID) and an adaptive Leaky-Integrate-and-Fire neuron model.
arXiv Detail & Related papers (2024-06-05T08:21:55Z) - CLIF: Complementary Leaky Integrate-and-Fire Neuron for Spiking Neural Networks [5.587069105667678]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
It remains a challenge to train SNNs due to their undifferentiable spiking mechanism.
We propose the Complementary Leaky Integrate-and-Fire (CLIF) neuron to improve the training of LIF-based SNNs.
arXiv Detail & Related papers (2024-02-07T08:51:57Z) - Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies to control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it into a continuous action space (i.e., a deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z) - Convolutional Neural Networks Exploiting Attributes of Biological Neurons [7.3517426088986815]
Deep neural networks like Convolutional Neural Networks (CNNs) have emerged as front-runners, often surpassing human capabilities.
Here, we integrate the principles of biological neurons in certain layer(s) of CNNs.
We aim to extract image features to use as input to CNNs, hoping to enhance training efficiency and achieve better accuracy.
arXiv Detail & Related papers (2023-11-14T16:58:18Z) - KLIF: An optimized spiking neuron unit for tuning surrogate gradient
slope and membrane potential [0.0]
Spiking neural networks (SNNs) have attracted much attention due to their ability to process temporal information.
It is still challenging to develop efficient and high-performing learning algorithms for SNNs.
We propose a novel k-based Leaky Integrate-and-Fire (KLIF) neuron model to improve the learning ability of SNNs.
arXiv Detail & Related papers (2023-02-18T05:18:18Z) - Event-based Video Reconstruction via Potential-assisted Spiking Neural
Network [48.88510552931186]
Bio-inspired neural networks can potentially lead to greater computational efficiency on event-driven hardware.
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN).
We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
arXiv Detail & Related papers (2022-01-25T02:05:20Z) - RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for
Image Recognition [123.59890802196797]
We propose RepMLP, a multi-layer-perceptron-style neural network building block for image recognition.
We construct convolutional layers inside a RepMLP during training and merge them into the FC layers for inference.
By inserting RepMLP in traditional CNN, we improve ResNets by 1.8% accuracy on ImageNet, 2.9% for face recognition, and 2.3% mIoU on Cityscapes with lower FLOPs.
arXiv Detail & Related papers (2021-05-05T06:17:40Z) - How Neural Networks Extrapolate: From Feedforward to Graph Neural
Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs) have shown some success in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.