The Spike Gating Flow: A Hierarchical Structure Based Spiking Neural
Network for Online Gesture Recognition
- URL: http://arxiv.org/abs/2206.01910v2
- Date: Tue, 7 Jun 2022 05:33:19 GMT
- Authors: Zihao Zhao, Yanhong Wang, Qiaosha Zou, Tie Xu, Fangbo Tao, Jiansong
Zhang, Xiaoan Wang, C.-J. Richard Shi, Junwen Luo and Yuan Xie
- Abstract summary: We develop a novel brain-inspired Spiking Neural Network (SNN) based system titled Spiking Gating Flow (SGF) for online action learning.
To the best of our knowledge, this is the highest accuracy among the non-backpropagation algorithm based SNNs.
- Score: 12.866549161582412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Action recognition is an exciting research avenue for artificial
intelligence, since it may be a game changer in emerging industrial fields such
as robotic vision and automobiles. However, current deep learning faces major
challenges in such applications because of its huge computational cost and
inefficient learning. Hence, we develop a novel brain-inspired Spiking Neural
Network (SNN) based system titled Spiking Gating Flow (SGF) for online action
learning. The developed system consists of multiple SGF units assembled in a
hierarchical manner. A single SGF unit comprises three layers: a feature
extraction layer, an event-driven layer and a histogram-based training layer.
To demonstrate the developed system's capabilities, we employ a standard
Dynamic Vision Sensor (DVS) gesture classification task as a benchmark. The
results indicate that we achieve 87.5% accuracy, which is comparable with Deep
Learning (DL), but with a smaller training-to-inference data ratio of 1.5:1,
and only a single training epoch is required during the learning process.
Meanwhile, to the best of our knowledge, this is the highest accuracy among
non-backpropagation algorithm based SNNs. Finally, we summarize the few-shot
learning paradigm of the developed network: 1) a hierarchical structure-based
network design that incorporates human prior knowledge; 2) SNNs for
content-based global dynamic feature detection.
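The abstract's one-epoch, backprop-free learning can be pictured as a single additive pass that accumulates per-class spike-count histograms and classifies by nearest histogram. The sketch below is an illustration of that idea only: the class name, layer stand-ins, and distance metric are all assumptions, not the paper's actual implementation.

```python
import numpy as np

class SGFUnitSketch:
    """Hypothetical sketch of one SGF-style unit (names are illustrative)."""

    def __init__(self, n_features, n_classes):
        # One spike-count histogram per class.
        self.hist = np.zeros((n_classes, n_features))

    def extract_features(self, events):
        # Stand-in for the feature-extraction and event-driven layers:
        # count spikes per feature channel.
        return np.bincount(events, minlength=self.hist.shape[1])

    def train(self, events, label):
        # Histogram-based training: a single additive pass, no gradients,
        # so one epoch over the data suffices.
        self.hist[label] += self.extract_features(events)

    def predict(self, events):
        f = self.extract_features(events)
        # Compare the normalized sample histogram against each class histogram.
        norms = self.hist / np.maximum(self.hist.sum(axis=1, keepdims=True), 1)
        fn = f / max(f.sum(), 1)
        return int(np.argmin(((norms - fn) ** 2).sum(axis=1)))
```

In this toy form, "training" is pure accumulation, which is why no backpropagation and no repeated epochs are needed.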
Related papers
- NEAR: A Training-Free Pre-Estimator of Machine Learning Model Performance [0.0]
We propose a zero-cost proxy, Network Expressivity by Activation Rank (NEAR), to identify the optimal neural network without training.
We demonstrate a state-of-the-art correlation between this network score and model accuracy on NAS-Bench-101 and NATS-Bench-SSS/TSS.
arXiv Detail & Related papers (2024-08-16T14:38:14Z) - Bidirectional Progressive Neural Networks with Episodic Return Progress
for Emergent Task Sequencing and Robotic Skill Transfer [1.7205106391379026]
We introduce a novel multi-task reinforcement learning framework named Episodic Return Progress with Bidirectional Progressive Neural Networks (ERP-BPNN)
The proposed ERP-BPNN model learns in a human-like interleaved manner via autonomous task switching based on a novel intrinsic motivation signal.
We show that ERP-BPNN achieves faster cumulative convergence and improves performance in all metrics considered among morphologically different robots compared to the baselines.
arXiv Detail & Related papers (2024-03-06T19:17:49Z) - Graph Neural Networks Provably Benefit from Structural Information: A
Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural
Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z) - An Unsupervised STDP-based Spiking Neural Network Inspired By
Biologically Plausible Learning Rules and Connections [10.188771327458651]
Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly.
We design an adaptive synaptic filter and introduce the adaptive spiking threshold to enrich the representation ability of SNNs.
Our model achieves the current state-of-the-art performance among unsupervised STDP-based SNNs on the MNIST and FashionMNIST datasets.
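For readers unfamiliar with STDP, the standard pair-based rule updates a synapse according to the sign and size of the pre/post spike-time difference. The function below is a textbook sketch with illustrative parameter values; it does not include the adaptive synaptic filter or adaptive threshold proposed in the paper above.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms)."""
    if dt > 0:
        # Pre fires before post: causal pairing, potentiate.
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        # Post fires before pre: anti-causal pairing, depress.
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

The exponential window means pairings close in time change the weight more than distant ones, which is what makes STDP sensitive to precise spike timing.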
arXiv Detail & Related papers (2022-07-06T14:53:32Z) - RoSGAS: Adaptive Social Bot Detection with Reinforced Self-Supervised
GNN Architecture Search [12.567692688720353]
Social bots are automated accounts on social networks that attempt to behave like humans.
In this paper, we propose RoSGAS, a novel Reinforced and Self-supervised GNN Architecture Search framework.
We exploit a heterogeneous information network to represent user connectivity by leveraging account metadata, relationships, behavioral features and content features.
Experiments on 5 Twitter datasets show that RoSGAS outperforms the state-of-the-art approaches in terms of accuracy, training efficiency and stability.
arXiv Detail & Related papers (2022-06-14T11:12:02Z) - An STDP-Based Supervised Learning Algorithm for Spiking Neural Networks [20.309112286222238]
Spiking Neural Networks (SNNs) provide a more biologically plausible model of the brain.
We propose a supervised learning algorithm based on Spike-Timing-Dependent Plasticity (STDP) for a hierarchical SNN consisting of Leaky Integrate-and-Fire neurons.
arXiv Detail & Related papers (2022-03-07T13:40:09Z) - Neural Capacitance: A New Perspective of Neural Network Selection via
Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs, as model training is required for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z) - Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for
Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.