Online Transformers with Spiking Neurons for Fast Prosthetic Hand
Control
- URL: http://arxiv.org/abs/2303.11860v1
- Date: Tue, 21 Mar 2023 13:59:35 GMT
- Title: Online Transformers with Spiking Neurons for Fast Prosthetic Hand
Control
- Authors: Nathan Leroux, Jan Finkbeiner, Emre Neftci
- Abstract summary: In this paper, instead of the self-attention mechanism, we use a sliding window attention mechanism.
We show that this mechanism is more efficient for continuous signals with finite-range dependencies between input and target.
Our results hold great promise for accurate and fast online processing of sEMG signals for smooth prosthetic hand control.
- Score: 1.6114012813668934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformers are state-of-the-art networks for most sequence processing
tasks. However, the self-attention mechanism often used in Transformers
requires large time windows for each computation step and thus makes them less
suitable for online signal processing compared to Recurrent Neural Networks
(RNNs). In this paper, instead of the self-attention mechanism, we use a
sliding window attention mechanism. We show that this mechanism is more
efficient for continuous signals with finite-range dependencies between input
and target, and that we can use it to process sequences element-by-element,
thus making it compatible with online processing. We test our model on a finger
position regression dataset (NinaproDB8) with Surface Electromyographic (sEMG)
signals measured on the forearm skin to estimate muscle activities. Our
approach sets the new state-of-the-art in terms of accuracy on this dataset
while requiring only very short time windows of 3.5 ms at each inference step.
Moreover, we increase the sparsity of the network using Leaky-Integrate and
Fire (LIF) units, a bio-inspired neuron model that activates sparsely in time
solely when crossing a threshold. We thus reduce the number of synaptic
operations by up to a factor of 5.3 without loss of accuracy. Our results hold
great promise for accurate and fast online processing of sEMG signals for
smooth prosthetic hand control and are a step towards the co-integration of
Transformers and Spiking Neural Networks (SNNs) for energy-efficient temporal
signal processing.
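The two mechanisms at the core of the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the window size, softmax scaling, leak factor, and soft-reset scheme are all assumptions made for the sake of the example.

```python
import numpy as np

def sliding_window_attention(q, k, v, window):
    """Causal attention where step t attends only to the last `window`
    positions, so each inference step needs only a short, fixed context."""
    T, d = q.shape
    out = np.zeros_like(v)
    for t in range(T):
        lo = max(0, t - window + 1)            # finite-range context only
        scores = k[lo:t + 1] @ q[t] / np.sqrt(d)
        w = np.exp(scores - scores.max())      # numerically stable softmax
        out[t] = (w / w.sum()) @ v[lo:t + 1]
    return out

def lif_step(v, x, beta=0.9, threshold=1.0):
    """One Leaky Integrate-and-Fire update: leak the membrane potential,
    integrate the input, and emit a spike only on a threshold crossing."""
    v = beta * v + x
    spike = (v >= threshold).astype(x.dtype)
    return v - spike * threshold, spike        # soft reset after spiking
```

Because step t only reads the last `window` key/value pairs, those buffers can be kept at a fixed size during streaming inference, which is what makes element-by-element (online) processing possible; the binary `spike` output is what allows downstream synaptic operations to be skipped whenever no spike occurs.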
Related papers
- ARNN: Attentive Recurrent Neural Network for Multi-channel EEG Signals
to Identify Epileptic Seizures [2.8244056068360095]
We propose an Attentive Recurrent Neural Network (ARNN), which recurrently applies attention layers along a sequence.
The proposed model operates on multi-channel EEG signals rather than single channel signals and leverages parallel computation.
arXiv Detail & Related papers (2024-03-05T19:15:17Z)
- Low-power event-based face detection with asynchronous neuromorphic
hardware [2.0774873363739985]
We present the first instance of an on-chip spiking neural network for event-based face detection deployed on the SynSense Speck neuromorphic chip.
We show how to reduce precision discrepancies between off-chip clock-driven simulation used for training and on-chip event-driven inference.
We achieve an on-chip face detection mAP[0.5] of 0.6 while consuming only 20 mW.
arXiv Detail & Related papers (2023-12-21T19:23:02Z)
- NAC-TCN: Temporal Convolutional Networks with Causal Dilated
Neighborhood Attention for Emotion Understanding [60.74434735079253]
We propose a method known as Neighborhood Attention with Convolutions TCN (NAC-TCN).
We accomplish this by introducing a causal version of Dilated Neighborhood Attention while incorporating it with convolutions.
Our model achieves comparable, better, or state-of-the-art performance over TCNs, TCAN, LSTMs, and GRUs while requiring fewer parameters on standard emotion recognition datasets.
arXiv Detail & Related papers (2023-12-12T18:41:30Z)
- DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous
spiking neural network processor [2.9175555050594975]
We present a brain-inspired platform for prototyping real-time event-based Spiking Neural Networks (SNNs).
The proposed system supports the direct emulation of dynamic, realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike-frequency adaptation, conductance-based dendritic compartments, and spike transmission delays.
The flexibility to emulate different biologically plausible neural networks, together with the chip's ability to monitor both population and single-neuron signals in real time, allows complex models of neural processing to be developed and validated for both basic research and edge-computing applications.
arXiv Detail & Related papers (2023-10-01T03:48:16Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Signal Detection in MIMO Systems with Hardware Imperfections: Message
Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, hindering its practical applications.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z)
- RF-Photonic Deep Learning Processor with Shannon-Limited Data Movement [0.0]
Optical neural networks (ONNs) are promising accelerators with ultra-low latency and energy consumption.
We introduce our multiplicative analog frequency transform ONN (MAFT-ONN) that encodes the data in the frequency domain.
We experimentally demonstrate the first hardware accelerator that computes fully-analog deep learning on raw RF signals.
arXiv Detail & Related papers (2022-07-08T16:37:13Z)
- AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as "evolving" temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around the 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% accuracy penalty compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) Soft Actor-Critic for discrete settings (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- A Spike in Performance: Training Hybrid-Spiking Neural Networks with
Quantized Activation Functions [6.574517227976925]
Spiking Neural Network (SNN) is a promising approach to energy-efficient computing.
We show how to maintain state-of-the-art accuracy when converting a non-spiking network into an SNN.
arXiv Detail & Related papers (2020-02-10T05:24:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.