Spiking Neural Networks for Frame-based and Event-based Single Object
Localization
- URL: http://arxiv.org/abs/2206.06506v1
- Date: Mon, 13 Jun 2022 22:22:32 GMT
- Title: Spiking Neural Networks for Frame-based and Event-based Single Object
Localization
- Authors: Sami Barchid, José Mennesson, Jason Eshraghian, Chaabane Djéraba,
Mohammed Bennamoun
- Abstract summary: Spiking neural networks have shown much promise as an energy-efficient alternative to artificial neural networks.
We propose a spiking neural network approach for single object localization trained using surrogate gradient descent.
We compare our method with similar artificial neural networks and show that our model has competitive or better accuracy, robustness against various corruptions, and lower energy consumption.
- Score: 26.51843464087218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks have shown much promise as an energy-efficient
alternative to artificial neural networks. However, understanding the impact
of sensor noise and input encodings on network activity and performance
remains difficult with common neuromorphic vision baselines like
classification. Therefore, we propose a spiking neural network approach for
single object localization trained using surrogate gradient descent, for frame-
and event-based sensors. We compare our method with similar artificial neural
networks and show that our model achieves competitive or better accuracy,
robustness against various corruptions, and lower energy consumption.
Moreover, we study the impact of neural coding schemes for static images on
accuracy, robustness, and energy efficiency. Our observations differ
significantly from previous studies on bio-plausible learning rules, which
helps in the design of surrogate-gradient-trained architectures and offers
insight into design priorities for future neuromorphic technologies in terms
of noise characteristics and data encoding methods.
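The two ingredients the abstract names, a neural coding scheme that turns static images into spike trains and a surrogate gradient for the non-differentiable spike function, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the rate-coding scheme, the fast-sigmoid surrogate, the leaky integrate-and-fire dynamics, and the parameter values (`beta`, `threshold`, `slope`) are common defaults assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(image, n_steps):
    """Rate coding: pixel intensity in [0, 1] sets the per-step
    probability of emitting a spike (Bernoulli sampling)."""
    return (rng.random((n_steps,) + image.shape) < image).astype(np.float32)

def lif_forward(spikes_in, weights, beta=0.9, threshold=1.0):
    """Leaky integrate-and-fire layer unrolled over time.
    The membrane potential decays by `beta`, integrates weighted input
    spikes, and emits a spike (with soft reset) above `threshold`."""
    n_steps = spikes_in.shape[0]
    mem = np.zeros(weights.shape[1], dtype=np.float32)
    spikes_out = np.zeros((n_steps, weights.shape[1]), dtype=np.float32)
    for t in range(n_steps):
        mem = beta * mem + spikes_in[t] @ weights
        spikes_out[t] = (mem >= threshold).astype(np.float32)
        mem = mem - threshold * spikes_out[t]  # soft reset after a spike
    return spikes_out

def surrogate_grad(mem, threshold=1.0, slope=25.0):
    """Fast-sigmoid surrogate for the derivative of the Heaviside
    spike function, used in place of its true (zero-a.e.) gradient
    during backpropagation through time."""
    return 1.0 / (slope * np.abs(mem - threshold) + 1.0) ** 2

# Encode a toy 4-pixel "image" into 100 time steps and run one LIF layer.
image = np.array([0.0, 0.25, 0.5, 1.0])
spikes = rate_encode(image, n_steps=100)
out = lif_forward(spikes, rng.normal(0.5, 0.1, size=(4, 2)).astype(np.float32))
```

In a full training loop, `surrogate_grad` would replace the spike function's derivative at every time step of backpropagation through time; the forward pass still uses the hard threshold, so the network remains spiking at inference.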
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Neuromorphic Auditory Perception by Neural Spiketrum [27.871072042280712]
We introduce a neural spike coding model called spiketrum to transform time-varying analog signals into efficient spike patterns.
The model provides a sparse and efficient coding scheme with precisely controllable spike rate that facilitates training of spiking neural networks in various auditory perception tasks.
arXiv Detail & Related papers (2023-09-11T13:06:19Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Spiking Generative Adversarial Network with Attention Scoring Decoding [4.5727987473456055]
Spiking neural networks offer a closer approximation to brain-like processing.
We build a spiking generative adversarial network capable of handling complex images.
arXiv Detail & Related papers (2023-05-17T14:35:45Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics [0.5284812806199193]
We take inspiration from a popular framework in neuroscience: 'predictive coding'.
We show that implementing this strategy into two popular networks, VGG16 and EfficientNetB0, improves their robustness against various corruptions.
arXiv Detail & Related papers (2021-06-04T22:48:13Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.