Training Robust Spiking Neural Networks with ViewPoint Transform and
SpatioTemporal Stretching
- URL: http://arxiv.org/abs/2303.07609v1
- Date: Tue, 14 Mar 2023 03:09:56 GMT
- Title: Training Robust Spiking Neural Networks with ViewPoint Transform and
SpatioTemporal Stretching
- Authors: Haibo Shen, Juyu Xiao, Yihao Luo, Xiang Cao, Liangqi Zhang, Tianjiang
Wang
- Abstract summary: We propose a novel data augmentation method, ViewPoint Transform and SpatioTemporal Stretching (VPT-STS).
It improves the robustness of spiking neural networks by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints.
Experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is broadly effective on multi-event representations and significantly outperforms pure spatial geometric transformations.
- Score: 4.736525128377909
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic vision sensors (event cameras) simulate biological visual
perception systems and have the advantages of high temporal resolution, less
data redundancy, low power consumption, and large dynamic range. Since both
events and spikes are modeled from neural signals, event cameras are inherently
suitable for spiking neural networks (SNNs), which are considered promising
models for artificial intelligence (AI) and theoretical neuroscience. However,
the unconventional visual signals of these cameras pose a great challenge to
the robustness of spiking neural networks. In this paper, we propose a novel
data augmentation method, ViewPoint Transform and SpatioTemporal Stretching
(VPT-STS). It improves the robustness of SNNs by transforming the rotation
centers and angles in the spatiotemporal domain to generate samples from
different viewpoints. Furthermore, we introduce the spatiotemporal stretching
to avoid potential information loss in viewpoint transformation. Extensive
experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is
broadly effective on multi-event representations and significantly outperforms
pure spatial geometric transformations. Notably, the SNNs model with VPT-STS
achieves a state-of-the-art accuracy of 84.4% on the DVS-CIFAR10 dataset.
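The abstract describes rotating events about varied centers and angles, then stretching to avoid losing events that fall out of frame. A minimal sketch of such an augmentation on raw (x, y, t, p) event arrays might look like the following; the array layout, sampling ranges, and function name are illustrative assumptions, not the authors' exact VPT-STS formulation:

```python
import numpy as np

def viewpoint_transform(events, width=128, height=128,
                        max_angle=np.pi / 12, rng=None):
    """Rotate events (x, y, t, p) about a random center by a random angle.

    A hedged sketch of a viewpoint-style augmentation for event-camera
    data; constants and layout are assumptions for illustration only.
    """
    if rng is None:
        rng = np.random.default_rng()
    cx = rng.uniform(0, width)   # random rotation center
    cy = rng.uniform(0, height)
    theta = rng.uniform(-max_angle, max_angle)
    cos_t, sin_t = np.cos(theta), np.sin(theta)

    x, y = events[:, 0] - cx, events[:, 1] - cy
    out = events.copy()
    out[:, 0] = cos_t * x - sin_t * y + cx
    out[:, 1] = sin_t * x + cos_t * y + cy

    # Stretching step: instead of discarding events rotated out of frame
    # (information loss), rescale coordinates so every event stays on the
    # sensor plane. Timestamps and polarities are left untouched.
    for dim, size in ((0, width), (1, height)):
        lo, hi = out[:, dim].min(), out[:, dim].max()
        if lo < 0 or hi > size - 1:
            out[:, dim] = (out[:, dim] - lo) / max(hi - lo, 1e-9) * (size - 1)
    return out
```

Applied per sample during training, this yields rotated "viewpoints" of the same event stream while the rescaling step keeps all events inside the sensor bounds.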
Related papers
- A frugal Spiking Neural Network for unsupervised classification of continuous multivariate temporal data [0.0]
Spiking Neural Networks (SNNs) are neuromorphic and use more biologically plausible neurons with evolving membrane potentials.
We introduce here a frugal single-layer SNN designed for fully unsupervised identification and classification of multivariate temporal patterns in continuous data.
arXiv Detail & Related papers (2024-08-08T08:15:51Z)
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- A Novel Spike Transformer Network for Depth Estimation from Event Cameras via Cross-modality Knowledge Distillation [3.355813093377501]
Event cameras operate differently from traditional digital cameras, continuously capturing data and generating binary spikes that encode time, location, and light intensity.
This necessitates the development of innovative, spike-aware algorithms tailored for event cameras.
We propose a purely spike-driven spike transformer network for depth estimation from spiking camera data.
arXiv Detail & Related papers (2024-04-26T11:32:53Z)
- A Neuromorphic Approach to Obstacle Avoidance in Robot Manipulation [16.696524554516294]
We develop a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator.
Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN.
Our results motivate incorporating SNN learning, utilizing neuromorphic processors, and further exploring the potential of neuromorphic methods.
arXiv Detail & Related papers (2024-04-08T20:42:10Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs and here, we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Exploiting High Performance Spiking Neural Networks with Efficient Spiking Patterns [4.8416725611508244]
Spiking Neural Networks (SNNs) use discrete spike sequences to transmit information, which significantly mimics the information transmission of the brain.
This paper introduces the dynamic Burst pattern and designs the Leaky Integrate and Fire or Burst (LIFB) neuron that can make a trade-off between short-time performance and dynamic temporal performance.
arXiv Detail & Related papers (2023-01-29T04:22:07Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Comparing SNNs and RNNs on Neuromorphic Vision Datasets: Similarities and Differences [36.82069150045153]
Spiking neural networks (SNNs) and recurrent neural networks (RNNs) are benchmarked on neuromorphic data.
In this work, we make a systematic study to compare SNNs and RNNs on neuromorphic data.
arXiv Detail & Related papers (2020-05-02T10:19:37Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
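Several of the papers listed above (the LIFB neuron, the spike-timing study) build on the leaky integrate-and-fire (LIF) neuron, the basic unit behind most SNNs: the membrane potential leaks toward rest, integrates input current, and emits a binary spike when it crosses threshold. A minimal sketch follows; all constants are illustrative assumptions, not taken from any of the papers:

```python
def lif_neuron(input_current, tau=10.0, v_threshold=1.0, v_reset=0.0, dt=1.0):
    """Minimal leaky integrate-and-fire neuron (discrete time).

    Returns a binary spike train the same length as input_current,
    showing how information is carried in spike timing rather than
    continuous activations.
    """
    v = v_reset
    spikes = []
    for i_t in input_current:
        v += dt * (-(v - v_reset) / tau + i_t)  # leaky integration of input
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset                          # hard reset after a spike
        else:
            spikes.append(0)
    return spikes
```

For example, a constant input of 0.3 drives the potential over threshold every few steps, producing a regular spike pattern whose rate and timing encode the input strength.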
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.