Brain inspired neuronal silencing mechanism to enable reliable sequence
identification
- URL: http://arxiv.org/abs/2203.13028v2
- Date: Sun, 2 Oct 2022 07:17:38 GMT
- Title: Brain inspired neuronal silencing mechanism to enable reliable sequence
identification
- Authors: Shiri Hodassman, Yuval Meir, Karin Kisos, Itamar Ben-Noam, Yael
Tugendhaft, Amir Goldental, Roni Vardi and Ido Kanter
- Abstract summary: We present an experimental mechanism for high-precision feedforward sequence identification networks (ID-nets) without feedback loops.
This mechanism temporarily silences neurons following their recent spiking activity.
The presented mechanism opens new horizons for advanced ANN algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time sequence identification is a core use-case of artificial neural
networks (ANNs), ranging from recognizing temporal events to identifying
verification codes. Existing methods apply recurrent neural networks, which
suffer from training difficulties; however, performing this function without
feedback loops remains a challenge. Here, we present an experimental neuronal
long-term plasticity mechanism for high-precision feedforward sequence
identification networks (ID-nets) without feedback loops, wherein input objects
have a given order and timing. This mechanism temporarily silences neurons
following their recent spiking activity. Therefore, transitory objects act on
different dynamically created feedforward sub-networks. ID-nets are
demonstrated to reliably identify 10 handwritten digit sequences, and are
generalized to deep convolutional ANNs with continuous activation nodes trained
on image sequences. Counterintuitively, their classification performance, even
with a limited number of training examples, is high for sequences but low for
individual objects. ID-nets are also implemented for writer-dependent
recognition, and suggested as a cryptographic tool for encrypted
authentication. The presented mechanism opens new horizons for advanced ANN
algorithms.
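The silencing mechanism described in the abstract can be sketched in a few lines. The following toy model is illustrative only, not the authors' implementation; layer sizes, thresholds, and the silencing duration are assumptions:

```python
import random

random.seed(0)

# Toy model of the silencing idea: a feedforward layer of threshold
# neurons in which any neuron that spikes is silenced for the next few
# inputs, so successive objects in a sequence are processed by
# different, dynamically created sub-networks.
N_IN, N_HID = 20, 10
SILENCE_STEPS = 2  # how many subsequent inputs a spiking neuron sits out
W = [[random.gauss(0.0, 1.0) for _ in range(N_IN)] for _ in range(N_HID)]
silence_timer = [0] * N_HID  # remaining silenced steps per neuron

def step(x, threshold=0.0):
    """Present one input object; return the layer's spike pattern."""
    spikes = []
    for i in range(N_HID):
        potential = sum(w * xi for w, xi in zip(W[i], x))
        fired = silence_timer[i] == 0 and potential > threshold
        # Count down existing timers; start the timer of a fresh spike.
        silence_timer[i] = SILENCE_STEPS if fired else max(silence_timer[i] - 1, 0)
        spikes.append(fired)
    return spikes

sequence = [[random.gauss(0.0, 1.0) for _ in range(N_IN)] for _ in range(5)]
patterns = [step(x) for x in sequence]
```

Because a spiking neuron is unavailable for the following SILENCE_STEPS inputs, the identity of the currently active sub-network encodes the recent history of the sequence, which is what lets a purely feedforward network discriminate object order.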
Related papers
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- A Comparison of Temporal Encoders for Neuromorphic Keyword Spotting with Few Neurons [0.11726720776908518]
Two candidate neurocomputational elements for temporal encoding and feature extraction in SNNs are investigated.
Resource-efficient keyword spotting applications may benefit from the use of these encoders, but further work on methods for learning the time constants and weights is required.
arXiv Detail & Related papers (2023-01-24T12:50:54Z)
- Artificial Neuronal Ensembles with Learned Context Dependent Gating [0.0]
We introduce Learned Context Dependent Gating (LXDG), a method to flexibly allocate and recall artificial neuronal ensembles.
Activities in the hidden layers of the network are modulated by gates, which are dynamically produced during training.
We demonstrate the ability of this method to alleviate catastrophic forgetting on continual learning benchmarks.
arXiv Detail & Related papers (2023-01-17T20:52:48Z)
- Surrogate Gradient Spiking Neural Networks as Encoders for Large Vocabulary Continuous Speech Recognition [91.39701446828144]
We show that spiking neural networks can be trained like standard recurrent neural networks using the surrogate gradient method.
They have shown promising results on speech command recognition tasks.
In contrast to their recurrent non-spiking counterparts, they show robustness to exploding gradient problems without the need to use gates.
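The surrogate-gradient trick mentioned in this summary can be illustrated compactly. The sketch below uses a generic leaky integrate-and-fire (LIF) neuron with a "fast sigmoid" surrogate; all names and constants are illustrative assumptions, not the paper's code:

```python
# The spike non-linearity is a Heaviside step whose true derivative is
# zero almost everywhere, so backprop through spikes would stall. During
# the backward pass the derivative is replaced by a smooth surrogate.

def spike(v, threshold=1.0):
    """Forward pass: binary spike from membrane potential v."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, slope=10.0):
    """Backward pass: smooth stand-in for the Heaviside derivative."""
    return 1.0 / (1.0 + slope * abs(v - threshold)) ** 2

def lif_forward_backward(inputs, w, beta=0.9, threshold=1.0):
    """LIF neuron driven by weighted input; returns the spike train and
    the surrogate gradient of the spike count w.r.t. the weight w."""
    v, dv_dw = 0.0, 0.0
    spikes, dspikes_dw = [], 0.0
    for x in inputs:
        dv_dw = beta * dv_dw + x      # chain rule through the leak
        v = beta * v + w * x
        s = spike(v, threshold)
        dspikes_dw += surrogate_grad(v, threshold) * dv_dw
        if s:                         # hard reset after a spike
            v, dv_dw = 0.0, 0.0
        spikes.append(s)
    return spikes, dspikes_dw

spikes, grad = lif_forward_backward([0.5, 0.8, 0.3, 0.9], w=1.2)
```

The forward pass stays binary and event-driven, while the backward pass sees a well-behaved gradient, which is what allows SNNs to be trained like standard recurrent networks.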
arXiv Detail & Related papers (2022-12-01T12:36:26Z)
- Spiking Neural Networks for event-based action recognition: A new task to understand their advantage [1.4348901037145936]
Spiking Neural Networks (SNNs) are characterised by their unique temporal dynamics.
We show how spiking neurons can enable temporal feature extraction in feed-forward neural networks.
We also show how recurrent SNNs can achieve comparable results to LSTM with a smaller number of parameters.
arXiv Detail & Related papers (2022-09-29T16:22:46Z)
- HAN: An Efficient Hierarchical Self-Attention Network for Skeleton-Based Gesture Recognition [73.64451471862613]
We propose an efficient hierarchical self-attention network (HAN) for skeleton-based gesture recognition.
The joint self-attention module captures spatial features of the fingers, while the finger self-attention module aggregates features of the whole hand.
Experiments show that our method achieves competitive results on three gesture recognition datasets with much lower computational complexity.
arXiv Detail & Related papers (2021-06-25T02:15:53Z)
- A Study On the Effects of Pre-processing On Spatio-temporal Action Recognition Using Spiking Neural Networks Trained with STDP [0.0]
It is important to study the behavior of SNNs trained with unsupervised learning methods on video classification tasks.
This paper presents methods of transposing temporal information into a static format, and then transforming the visual information into spikes using latency coding.
We show the effect of the similarity in the shape and speed of certain actions on action recognition with spiking neural networks.
arXiv Detail & Related papers (2021-05-31T07:07:48Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
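The AND-like/OR-like distinction can be illustrated with two threshold neurons. The bias values below are assumptions for a toy example, not the paper's measure:

```python
# With equal positive weights, an OR-like neuron has a threshold low
# enough that any single active input fires it, while an AND-like
# neuron requires most inputs to be active together.

def threshold_neuron(inputs, weights, bias):
    """Binary neuron: fires when the weighted sum plus bias is positive."""
    return 1.0 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0.0

def or_like(inputs):
    return threshold_neuron(inputs, [1.0, 1.0, 1.0], bias=-0.5)

def and_like(inputs):
    return threshold_neuron(inputs, [1.0, 1.0, 1.0], bias=-2.5)

# A single strong input flips the OR-like neuron but not the AND-like
# one, which is why OR-like units are easier targets for perturbations
# concentrated on a single input.
single = (or_like([1, 0, 0]), and_like([1, 0, 0]))
all_on = (or_like([1, 1, 1]), and_like([1, 1, 1]))
```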
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- A Prospective Study on Sequence-Driven Temporal Sampling and Ego-Motion Compensation for Action Recognition in the EPIC-Kitchens Dataset [68.8204255655161]
Action recognition is one of the most challenging research fields in computer vision.
Ego-motion recorded sequences have become particularly relevant.
The proposed method copes with this by estimating the ego-motion, or camera motion.
arXiv Detail & Related papers (2020-08-26T14:44:45Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.