NeuroCLIP: Neuromorphic Data Understanding by CLIP and SNN
- URL: http://arxiv.org/abs/2306.12073v2
- Date: Fri, 29 Dec 2023 02:44:44 GMT
- Title: NeuroCLIP: Neuromorphic Data Understanding by CLIP and SNN
- Authors: Yufei Guo and Yuanpei Chen and Zhe Ma
- Abstract summary: We develop NeuroCLIP, which consists of 2D CLIP and two specially designed modules for neuromorphic data understanding.
Various experiments on neuromorphic datasets including N-MNIST, CIFAR10-DVS, and ES-ImageNet demonstrate the effectiveness of NeuroCLIP.
- Score: 16.104055742259128
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic vision sensors have recently attracted growing interest. However, neuromorphic data consist of asynchronous event spikes, which makes it difficult to build a large benchmark for training a powerful, general neural network model, and thus limits deep-learning-based understanding of "unseen" objects in neuromorphic data. For frame images, by contrast, training data are easy to obtain, and zero-shot and few-shot learning on "unseen" tasks via the large Contrastive Language-Image Pre-training (CLIP) model, pre-trained on large-scale 2D image-text pairs, has shown inspiring performance. We ask whether CLIP can be transferred to neuromorphic data recognition to handle the "unseen" problem. In this paper, we materialize this idea as NeuroCLIP, which consists of 2D CLIP and two specially designed modules for neuromorphic data understanding: first, an event-frame module that converts event spikes into sequential frame images using a simple discrimination strategy; second, an inter-timestep adapter, a lightweight fine-tuned adapter based on a spiking neural network (SNN) that refines the sequential features from CLIP's visual encoder to improve few-shot performance. Experiments on neuromorphic datasets including N-MNIST, CIFAR10-DVS, and ES-ImageNet demonstrate the effectiveness of NeuroCLIP. Our code is open-sourced at https://github.com/yfguo91/NeuroCLIP.git.
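To make the described pipeline concrete, here is a minimal PyTorch sketch of the two modules. All names (events_to_frames, InterTimestepAdapter), the stub encoder, and the fusion weighting are illustrative assumptions rather than the authors' implementation; the real code is in the repository above.

    import torch
    import torch.nn as nn

    def events_to_frames(events: torch.Tensor, T: int, H: int, W: int) -> torch.Tensor:
        """Bin asynchronous events, rows of (x, y, t, polarity), into T two-channel frames.

        Plain accumulation; the paper's event-frame module uses a discrimination
        strategy that this sketch does not reproduce.
        """
        frames = torch.zeros(T, 2, H, W)
        t0, t1 = events[:, 2].min(), events[:, 2].max()
        bins = ((events[:, 2] - t0) / (t1 - t0 + 1e-9) * (T - 1)).long()
        for (x, y, _, p), b in zip(events, bins):      # polarity assumed in {0, 1}
            frames[b, int(p), int(y), int(x)] += 1.0
        return frames

    class InterTimestepAdapter(nn.Module):
        """Toy leaky integrate-and-fire adapter over per-frame CLIP features.

        Forward pass only; a real SNN adapter would need a surrogate gradient
        for the non-differentiable spike during fine-tuning.
        """
        def __init__(self, dim: int, beta: float = 0.9, threshold: float = 1.0):
            super().__init__()
            self.fc = nn.Linear(dim, dim)
            self.beta, self.threshold = beta, threshold

        def forward(self, feats: torch.Tensor) -> torch.Tensor:  # feats: (T, dim)
            mem, out = torch.zeros_like(feats[0]), []
            for x in feats:                              # one CLIP feature per timestep
                mem = self.beta * mem + self.fc(x)       # integrate
                spike = (mem >= self.threshold).float()  # fire
                mem = mem - spike * self.threshold       # soft reset
                out.append(spike)
            return torch.stack(out).mean(0)              # fuse timesteps into one vector

    if __name__ == "__main__":
        n = 1000                                          # fake N-MNIST-like events
        events = torch.stack([
            torch.randint(0, 34, (n,)).float(),           # x
            torch.randint(0, 34, (n,)).float(),           # y
            torch.sort(torch.rand(n)).values,             # t
            torch.randint(0, 2, (n,)).float(),            # polarity
        ], dim=1)
        frames = events_to_frames(events, T=4, H=34, W=34)
        encoder = nn.Sequential(nn.Flatten(), nn.Linear(2 * 34 * 34, 512))  # stand-in for CLIP's image tower
        feats = encoder(frames)                           # (4, 512), one feature per frame
        fused = 0.5 * feats.mean(0) + 0.5 * InterTimestepAdapter(512)(feats)  # weighting is an assumption
        print(fused.shape)                                # zero-shot scores would then be cosine
                                                          # similarities vs. CLIP text embeddings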
Related papers
- Ultra-low-power Image Classification on Neuromorphic Hardware [3.784976081087904]
Spiking neural networks (SNNs) promise ultra-low-power applications by exploiting temporal and spatial sparsity.
Training such SNNs using backpropagation through time for vision tasks that rely mainly on spatial features is computationally costly.
We propose a temporal ANN-to-SNN conversion method, which we call Quartz, that is based on the time to first spike (a generic time-to-first-spike sketch appears after this list).
arXiv Detail & Related papers (2023-09-28T18:53:43Z)
- Co-learning synaptic delays, weights and adaptation in spiking neural networks [0.0]
Spiking neural networks (SNNs) distinguish themselves from artificial neural networks (ANNs) because of their inherent temporal processing and spike-based computations.
We show that data processing with spiking neurons can be enhanced by co-learning the connection weights with two other biologically inspired neuronal features.
arXiv Detail & Related papers (2023-09-12T09:13:26Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic implicit neural representation (INR) framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based subnetworks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes about 2 orders of magnitude faster with up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient AI models when implemented on neuromorphic hardware.
Efficiently training SNNs is a challenge due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Visual Attention Network [90.0753726786985]
We propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention (a minimal LKA sketch appears after this list).
We also introduce a novel neural network based on LKA, namely Visual Attention Network (VAN).
VAN outperforms state-of-the-art vision transformers and convolutional neural networks by a large margin in extensive experiments.
arXiv Detail & Related papers (2022-02-20T06:35:18Z)
- Event-based Video Reconstruction via Potential-assisted Spiking Neural Network [48.88510552931186]
Bio-inspired neural networks can potentially lead to greater computational efficiency on event-driven hardware.
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN).
We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
arXiv Detail & Related papers (2022-01-25T02:05:20Z)
- N-Omniglot: a Large-scale Neuromorphic Dataset for Spatio-Temporal Sparse Few-shot Learning [10.812738608234321]
We provide the first neuromorphic dataset for few-shot learning, N-Omniglot, recorded using a Dynamic Vision Sensor (DVS).
It contains 1623 categories of handwritten characters, with only 20 samples per class.
The dataset provides a powerful challenge and a suitable benchmark for developing SNN algorithms in the few-shot learning domain.
arXiv Detail & Related papers (2021-12-25T12:41:34Z)
- Comparing SNNs and RNNs on Neuromorphic Vision Datasets: Similarities and Differences [36.82069150045153]
We make a systematic study comparing spiking neural networks (SNNs) and recurrent neural networks (RNNs) on neuromorphic vision datasets.
arXiv Detail & Related papers (2020-05-02T10:19:37Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
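As referenced in the Quartz entry above, the following is a minimal PyTorch sketch of generic time-to-first-spike (TTFS) coding, the textbook idea that method builds on: a larger activation fires a single spike earlier. Quartz's actual conversion rules (thresholds, weight handling, latency guarantees) are specific to the paper and not reproduced here.

    import torch

    def ttfs_encode(activations: torch.Tensor, T: int) -> torch.Tensor:
        """Encode activations in [0, 1] as one spike each: larger value -> earlier spike."""
        a = activations.clamp(0.0, 1.0)
        spike_times = ((1.0 - a) * (T - 1)).floor().long()  # a = 1.0 fires at step 0
        spikes = torch.zeros(T, *activations.shape)
        spikes.scatter_(0, spike_times.unsqueeze(0), 1.0)   # exactly one spike per unit
        return spikes                                       # (T, ...) binary spike train

    # The 1.0 unit fires first, the 0.1 unit close to last:
    print(ttfs_encode(torch.tensor([1.0, 0.5, 0.1]), T=8))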
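And the sketch referenced in the Visual Attention Network entry: LKA approximates a large (21x21) kernel convolution with a 5x5 depthwise convolution, a 7x7 depthwise convolution with dilation 3, and a 1x1 pointwise convolution, then uses the result as a multiplicative attention map. The decomposition follows the published VAN design; the surrounding network is omitted.

    import torch
    import torch.nn as nn

    class LKA(nn.Module):
        """Large Kernel Attention: cheap approximation of a 21x21 conv used as a gate."""
        def __init__(self, dim: int):
            super().__init__()
            self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)                # local context
            self.dw_d = nn.Conv2d(dim, dim, 7, padding=9, groups=dim, dilation=3)  # long range
            self.pw = nn.Conv2d(dim, dim, 1)                                       # channel mixing

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            attn = self.pw(self.dw_d(self.dw(x)))  # attention map, same shape as input
            return attn * x                        # multiplicative gating

    print(LKA(64)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])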