Bio-inspired visual attention for silicon retinas based on spiking
neural networks applied to pattern classification
- URL: http://arxiv.org/abs/2105.14753v1
- Date: Mon, 31 May 2021 07:34:13 GMT
- Title: Bio-inspired visual attention for silicon retinas based on spiking
neural networks applied to pattern classification
- Authors: Amélie Gruel and Jean Martinet
- Abstract summary: Spiking Neural Networks (SNNs) represent an asynchronous type of artificial neural network closer to biology than traditional artificial networks.
We introduce a case study of event videos classification with SNNs, using a biology-grounded low-level computational attention mechanism.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual attention can be defined as the behavioral and cognitive process of
selectively focusing on a discrete aspect of sensory cues while disregarding
other perceivable information. This biological mechanism, more specifically
saliency detection, has long been used in multimedia indexing to drive the
analysis only on relevant parts of images or videos for further processing.
The recent advent of silicon retinas (or event cameras -- sensors that
measure pixel-wise changes in brightness and output asynchronous events
accordingly) raises the question of how to adapt attention and saliency to the
unconventional type of such sensors' output. Silicon retinas aim to reproduce
the behaviour of the biological retina. In that respect, they produce punctual events
in time that can be construed as neural spikes and interpreted as such by a
neural network.
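To make the event-to-spike correspondence concrete, the sketch below is a minimal illustration (not the authors' code): it treats event-camera output as per-pixel spikes and applies a simple density-based attention filter. The (x, y, t, polarity) event format and the saliency criterion are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' method): a density-based attention filter
# over event-camera output. Event format and saliency criterion are assumed.
import numpy as np

def density_saliency(events, sensor_shape=(128, 128), cell=8, keep_ratio=0.25):
    """Keep only events falling in the spatial cells with the highest event counts.

    events: structured array with fields 'x', 'y', 't', 'p'
    cell:   side length (pixels) of the square cells used to measure activity
    """
    h, w = sensor_shape
    grid = np.zeros((h // cell, w // cell), dtype=np.int64)
    np.add.at(grid, (events['y'] // cell, events['x'] // cell), 1)

    # Threshold: keep the top `keep_ratio` fraction of cells by event count.
    n_keep = max(1, int(keep_ratio * grid.size))
    thresh = np.sort(grid, axis=None)[-n_keep]
    salient = grid[events['y'] // cell, events['x'] // cell] >= thresh
    return events[salient]

# Toy usage: uniform noise events plus a dense cluster that attention should keep.
rng = np.random.default_rng(0)
dtype = [('x', 'i4'), ('y', 'i4'), ('t', 'f8'), ('p', 'i1')]
noise = np.zeros(500, dtype=dtype)
noise['x'], noise['y'] = rng.integers(0, 128, 500), rng.integers(0, 128, 500)
blob = np.zeros(500, dtype=dtype)
blob['x'], blob['y'] = rng.integers(40, 56, 500), rng.integers(40, 56, 500)
events = np.concatenate([noise, blob])
print(len(density_saliency(events)), "events kept out of", len(events))
```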
In particular, Spiking Neural Networks (SNNs) represent an asynchronous type
of artificial neural network closer to biology than traditional artificial
networks, mainly because they seek to mimic the dynamics of neural membrane and
action potentials over time. SNNs receive and process information in the form
of spike trains. Therefore, they make for a suitable candidate for the
efficient processing and classification of incoming event patterns measured by
silicon retinas. In this paper, we review the biological background behind the
attentional mechanism, and introduce a case study of event videos
classification with SNNs, using a biology-grounded low-level computational
attention mechanism, with interesting preliminary results.
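As a minimal illustration of the membrane dynamics mentioned above, the following sketch implements a generic leaky integrate-and-fire (LIF) neuron (not necessarily the specific neuron model used in the paper): it integrates an incoming spike train into a membrane potential and emits an output spike whenever a threshold is crossed. All parameter values are illustrative.

```python
# Minimal sketch of leaky integrate-and-fire (LIF) dynamics, the generic
# neuron model underlying most SNNs; parameter values are illustrative only.
import numpy as np

def lif_neuron(input_spikes, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0, w=0.5):
    """Integrate a binary input spike train and return the output spike train."""
    v = v_reset
    out = np.zeros_like(input_spikes)
    for i, s in enumerate(input_spikes):
        # Leaky integration: the potential decays toward rest and jumps
        # by the synaptic weight on each incoming spike.
        v += dt / tau * (v_reset - v) + w * s
        if v >= v_thresh:          # action potential: emit a spike and reset
            out[i] = 1
            v = v_reset
    return out

# Usage: a 100-step Poisson-like input spike train at roughly 100 Hz.
rng = np.random.default_rng(1)
spikes_in = (rng.random(100) < 0.1).astype(float)
spikes_out = lif_neuron(spikes_in)
print(int(spikes_in.sum()), "input spikes ->", int(spikes_out.sum()), "output spikes")
```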
Related papers
- A frugal Spiking Neural Network for unsupervised classification of continuous multivariate temporal data [0.0]
Spiking Neural Networks (SNNs) are neuromorphic and use more biologically plausible neurons with evolving membrane potentials.
We introduce here a frugal single-layer SNN designed for fully unsupervised identification and classification of multivariate temporal patterns in continuous data.
arXiv Detail & Related papers (2024-08-08T08:15:51Z) - Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Efficient visual object representation using a biologically plausible
spike-latency code and winner-take-all inhibition [0.0]
Spiking neural networks (SNNs) have the potential to improve the efficiency and biological plausibility of object recognition systems.
We present a SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli.
We demonstrate that a network of 150 spiking neurons can efficiently represent objects with as little as 40 spikes.
arXiv Detail & Related papers (2022-05-20T17:48:02Z) - Feature visualization for convolutional neural network models trained on
neuroimaging data [0.0]
We show, for the first time, results using feature visualization of convolutional neural networks (CNNs).
We have trained CNNs for different tasks including sex classification and artificial lesion classification based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task.
arXiv Detail & Related papers (2022-03-24T15:24:38Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with
Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchy populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors exhibited by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - The Butterfly Effect in Primary Visual Cortex [5.954654488330137]
We propose a novel neural network, called the continuous-coupled neural network (CCNN).
Numerical results show that the CCNN model exhibits periodic behavior under DC stimulus, and exhibits chaotic behavior under AC stimulus.
Experimental results on image segmentation indicate that the CCNN model outperforms state-of-the-art visual cortex neural network models.
arXiv Detail & Related papers (2021-04-15T06:04:31Z)