Deep Spiking Convolutional Neural Network for Single Object Localization
Based On Deep Continuous Local Learning
- URL: http://arxiv.org/abs/2105.05609v1
- Date: Wed, 12 May 2021 12:02:05 GMT
- Title: Deep Spiking Convolutional Neural Network for Single Object Localization
Based On Deep Continuous Local Learning
- Authors: Sami Barchid, José Mennesson, Chaabane Djéraba
- Abstract summary: We propose a deep convolutional spiking neural network for the localization of a single object in a grayscale image.
Results reported on Oxford-IIIT-Pet validate the use of spiking neural networks with a supervised learning approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advent of neuromorphic hardware, spiking neural networks can be a
good energy-efficient alternative to artificial neural networks. However, the
use of spiking neural networks to perform computer vision tasks remains
limited, mainly focusing on simple tasks such as digit recognition. It remains
hard to deal with more complex tasks (e.g. segmentation, object detection) due
to the small number of works on deep spiking neural networks for these tasks.
The objective of this paper is to take a first step towards modern computer
vision with supervised spiking neural networks. We propose a deep convolutional
spiking neural network for the localization of a single object in a grayscale
image. Our network is based on DECOLLE, a spiking model that enables
local surrogate gradient-based learning. The encouraging results reported on
Oxford-IIIT-Pet validate the use of spiking neural networks with a
supervised learning approach for more elaborate vision tasks in the future.
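The model the abstract describes rests on two ingredients: a spiking (leaky integrate-and-fire) nonlinearity trained through a surrogate gradient, and DECOLLE-style local learning, where each layer carries a fixed random readout and is trained only from its own local loss. A minimal sketch of one such layer follows; it is an illustration under assumed hyperparameters (layer sizes, threshold, leak factor, surrogate slope, bounding-box-style targets), not the authors' implementation, and unlike DECOLLE's online rule it backpropagates through time within the layer.

```python
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward; fast-sigmoid surrogate derivative backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2  # slope 10 is assumed


class LocalLIFLayer(nn.Module):
    """LIF layer with a fixed random local readout (DECOLLE-style sketch)."""

    def __init__(self, n_in, n_out, n_readout, beta=0.9):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.beta = beta  # membrane leak per time step (assumed value)
        self.readout = nn.Linear(n_out, n_readout)
        for p in self.readout.parameters():  # readout stays fixed and random
            p.requires_grad_(False)

    def forward(self, x_seq):  # x_seq: (time, batch, n_in) spike tensor
        v = x_seq.new_zeros(x_seq.shape[1], self.fc.out_features)
        spikes, readouts = [], []
        for x_t in x_seq:
            # Detach incoming spikes: gradients never cross layer boundaries.
            v = self.beta * v + self.fc(x_t.detach())
            s = SurrogateSpike.apply(v - 1.0)  # fire when v crosses 1.0
            v = v - s.detach()                 # soft reset by subtraction
            spikes.append(s)
            readouts.append(self.readout(s))   # local per-layer prediction
        return torch.stack(spikes), torch.stack(readouts)


# Each layer is trained from its own local loss; for single-object
# localization this could be a regression onto 4 normalized box
# coordinates (a hypothetical target format, not taken from the paper).
layer = LocalLIFLayer(n_in=128, n_out=64, n_readout=4)
x = (torch.rand(20, 8, 128) < 0.1).float()     # 20 steps of random input spikes
_, preds = layer(x)
target = torch.rand(8, 4)                      # dummy box targets in [0, 1]
loss = ((preds.mean(0) - target) ** 2).mean()  # loss on time-averaged readout
loss.backward()                                # updates only this layer's fc weights
```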
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Expressivity of Spiking Neural Networks [15.181458163440634]
We study the capabilities of spiking neural networks where information is encoded in the firing time of neurons.
In contrast to ReLU networks, we prove that spiking neural networks can realize both continuous and discontinuous functions.
arXiv Detail & Related papers (2023-08-16T08:45:53Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Functional Connectome: Approximating Brain Networks with Artificial Neural Networks [1.952097552284465]
We show that trained deep neural networks are able to capture the computations performed by synthetic biological networks with high accuracy.
We show that trained deep neural networks are able to perform zero-shot generalisation in novel environments.
Our study reveals a novel and promising direction in systems neuroscience, and can be expanded upon with a multitude of downstream applications.
arXiv Detail & Related papers (2022-11-23T13:12:13Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Deep physical neural networks enabled by a backpropagation algorithm for arbitrary physical systems [3.7785805908699803]
We propose a radical alternative for implementing deep neural network models: Physical Neural Networks.
We introduce a hybrid physical-digital algorithm called Physics-Aware Training to efficiently train sequences of controllable physical systems to act as deep neural networks.
arXiv Detail & Related papers (2021-04-27T18:00:02Z)
- Theoretical Analysis of the Advantage of Deepening Neural Networks [0.0]
It is important to know the expressivity of functions computable by deep neural networks.
By two criteria, we show that increasing the number of layers is more effective than increasing the number of units per layer for improving the expressivity of deep neural networks.
arXiv Detail & Related papers (2020-09-24T04:10:50Z)
- Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks [0.9790524827475205]
We show how a novel type of adaptive spiking recurrent neural network (SRNN) is able to achieve state-of-the-art performance.
We calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks.
arXiv Detail & Related papers (2020-05-24T01:04:53Z)
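The last entry above is built on spiking neurons that adapt over multiple timescales. As a hedged illustration of that idea (assumed constants and sizes, not the cited paper's exact model), the sketch below implements an adaptive LIF neuron whose firing threshold rises after every spike and decays back on a slower timescale; training such a cell would again replace the hard threshold with a surrogate gradient as in the sketch above.

```python
import torch
import torch.nn as nn


class AdaptiveLIFCell(nn.Module):
    """Recurrent adaptive LIF cell: fast membrane, slow threshold adaptation."""

    def __init__(self, n_in, n_hidden, tau_v=20.0, tau_a=200.0, dt=1.0):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        self.alpha = torch.exp(torch.tensor(-dt / tau_v))  # fast membrane decay
        self.rho = torch.exp(torch.tensor(-dt / tau_a))    # slow adaptation decay

    def forward(self, x_t, state):
        v, a, s = state  # membrane potential, adaptation, previous spikes
        v = self.alpha * v + self.w_in(x_t) + self.w_rec(s) - s  # integrate + reset
        a = self.rho * a + (1.0 - self.rho) * s                  # spikes raise 'a'
        s = (v > 1.0 + 1.8 * a).float()  # adaptive threshold (1.8 is assumed)
        return s, (v, a, s)


# Forward simulation only; sizes and input statistics are illustrative.
cell = AdaptiveLIFCell(n_in=40, n_hidden=128)
state = tuple(torch.zeros(1, 128) for _ in range(3))
for _ in range(100):
    x_t = (torch.rand(1, 40) < 0.05).float()  # sparse random input spikes
    out, state = cell(x_t, state)
```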
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.