High-speed Low-consumption sEMG-based Transient-state micro-Gesture
Recognition
- URL: http://arxiv.org/abs/2403.06998v2
- Date: Wed, 13 Mar 2024 01:19:40 GMT
- Title: High-speed Low-consumption sEMG-based Transient-state micro-Gesture
Recognition
- Authors: Youfang Han, Wei Zhao, Xiangjin Chen, Xin Meng
- Abstract summary: The accuracy of the proposed SNN is 83.85% and 93.52% on the two datasets respectively.
The methods can be used for precise, high-speed, and low-power micro-gesture recognition tasks.
- Score: 6.649481653007372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gesture recognition on wearable devices is extensively applied in
human-computer interaction. Electromyography (EMG) has been used in many
gesture recognition systems for its rapid perception of muscle signals.
However, analyzing EMG signals on devices such as smart wristbands usually
requires inference models with low inference latency, low power consumption,
and low memory occupation. Therefore, this paper proposes an
improved spiking neural network (SNN) to achieve these goals. We propose an
adaptive multi-delta coding as a spiking coding method to improve recognition
accuracy. We propose two additive solvers for the SNN, which significantly
reduce inference energy consumption and the number of parameters while
improving robustness to temporal differences. In addition, we propose a linear
action detection method, TAD-LIF, which is suitable for SNNs. TAD-LIF is an
improved LIF neuron that can detect transient-state gestures quickly and
accurately. We collected two datasets of 6 micro-gestures from 20 subjects,
using two custom-designed lightweight consumer-grade sEMG wristbands (with 3
and 8 electrode channels, respectively). Compared to CNN, FCN, and conventional
SNN-based methods, the proposed SNN has higher recognition accuracy. The
accuracy of the proposed SNN is 83.85% and 93.52% on the two datasets
respectively. In addition, the inference latency of the proposed SNN is about
1% of the CNN's, its power consumption is about 0.1% of the CNN's, and its
memory occupation is about 20% of the CNN's. The proposed methods enable
precise, high-speed, and low-power micro-gesture recognition and are suitable
for consumer-level intelligent wearable devices, offering a general route
toward ubiquitous computing.
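The abstract names its key components (adaptive multi-delta coding, additive solvers, TAD-LIF) without specifying them. The sketch below is a minimal illustration, under stated assumptions, of the two ideas recoverable from the text alone: a fixed-threshold delta coder that turns an sEMG channel into +1/-1 spike events, and a leaky integrate-and-fire (LIF) accumulator used as a crude transient-gesture onset detector. The function and parameter names (delta_encode, lif_detect, delta, tau, v_th) are illustrative assumptions, not the authors' adaptive multi-delta coding or TAD-LIF.

    import numpy as np

    def delta_encode(emg, delta=0.05):
        # Emit +1/-1 spikes whenever the signal moves more than `delta`
        # away from the last encoded value (simplified stand-in for the
        # paper's adaptive multi-delta coding).
        spikes = np.zeros_like(emg, dtype=np.int8)
        ref = emg[0]
        for t, x in enumerate(emg):
            if x - ref >= delta:
                spikes[t], ref = 1, x
            elif ref - x >= delta:
                spikes[t], ref = -1, x
        return spikes

    def lif_detect(frame_counts, tau=20.0, v_th=5.0):
        # Leaky integrate-and-fire accumulator over per-frame spike counts;
        # returns the first frame whose membrane potential crosses v_th,
        # i.e. a crude transient-state onset detector (not TAD-LIF itself).
        v = 0.0
        for t, s in enumerate(frame_counts):
            v = v * np.exp(-1.0 / tau) + s   # leak, then integrate input
            if v >= v_th:
                return t
        return None

    # Example: one noisy channel with a burst in the middle.
    rng = np.random.default_rng(0)
    sig = rng.normal(0.0, 0.01, 300)
    sig[120:180] += 0.3 * np.sin(np.linspace(0, 20, 60))
    spikes = delta_encode(sig)
    onset = lif_detect(np.abs(spikes).reshape(30, 10).sum(axis=1))
    print("gesture onset detected at frame:", onset)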
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - A Methodology for Improving Accuracy of Embedded Spiking Neural Networks through Kernel Size Scaling [6.006032394972252]
Spiking Neural Networks (SNNs) can offer ultra-low power/energy consumption for machine learning-based applications.
Currently, most of the SNN architectures need a significantly larger model size to achieve higher accuracy.
We propose a novel methodology that improves the accuracy of SNNs through kernel size scaling.
arXiv Detail & Related papers (2024-04-02T06:42:14Z) - Signal Detection in MIMO Systems with Hardware Imperfections: Message
Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, hindering its practical applications.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z) - Analyzing the Impact of Varied Window Hyper-parameters on Deep CNN for
sEMG based Motion Intent Classification [0.0]
This study investigates the relationship between window length and overlap, which may influence the generation of robust raw EMG 2-dimensional (2D) signals for application in CNNs.
Findings suggest that a combination of 75% overlap in the 2D EMG signals and wider network kernels may provide ideal motor-intent classification for an adequate EMG-CNN based prosthesis control scheme (a windowing sketch illustrating this overlap appears after this list).
arXiv Detail & Related papers (2022-09-13T08:14:49Z) - Low Power Neuromorphic EMG Gesture Classification [3.8761525368152725]
Spiking Neural Networks (SNNs) are promising for low-power, real-time EMG gesture recognition.
We present a low-power, high-accuracy demonstration of EMG-signal based gesture recognition using neuromorphic Recurrent Spiking Neural Networks (RSNNs).
Our network achieves state-of-the-art classification accuracy (90%) while using about 53% fewer neurons than the best previously reported result on the Roshambo EMG dataset.
arXiv Detail & Related papers (2022-06-04T22:09:34Z) - Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern
Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy on such tasks, but their implementation on conventional embedded solutions is still very expensive in terms of both computation and energy.
We propose a new benchmark for computing tactile pattern recognition at the edge through Braille letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking
Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks (a worked decomposition example appears after this list).
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - FATNN: Fast and Accurate Ternary Neural Networks [89.07796377047619]
Ternary Neural Networks (TNNs) have received much attention due to being potentially orders of magnitude faster in inference, as well as more power efficient, than full-precision counterparts.
In this work, we show that, under some mild constraints, the computational complexity of the ternary inner product can be reduced by a factor of 2.
We elaborately design an implementation-dependent ternary quantization algorithm to mitigate the performance gap.
arXiv Detail & Related papers (2020-08-12T04:26:18Z) - FSpiNN: An Optimization Framework for Memory- and Energy-Efficient
Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
arXiv Detail & Related papers (2020-07-17T09:40:26Z)
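The window hyper-parameter entry above reports that 75% overlap in the 2D EMG windows works well with CNNs; as referenced there, here is a minimal sliding-window sketch under assumed settings (1 kHz sampling, 200 ms windows). It is generic segmentation code, not that paper's pipeline.

    import numpy as np

    def segment_emg(emg, fs=1000, window_ms=200, overlap=0.75):
        # Slice a (samples, channels) EMG recording into overlapping 2-D
        # windows of shape (window_len, channels) for a CNN. overlap=0.75
        # means consecutive windows share 75% of their samples.
        win = int(fs * window_ms / 1000)           # samples per window
        step = max(1, int(win * (1.0 - overlap)))  # hop between windows
        starts = range(0, emg.shape[0] - win + 1, step)
        return np.stack([emg[s:s + win] for s in starts])

    # Example: 2 seconds of 8-channel EMG sampled at 1 kHz.
    emg = np.random.randn(2000, 8)
    windows = segment_emg(emg)
    print(windows.shape)  # (37, 200, 8): 200-sample windows, 50-sample hop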
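The {-1, +1} encoding-decomposition entry above states that QNNs can be split into multi-branch binary networks; as referenced there, the worked example below shows one standard way to do this for 2-bit symmetric weights, w = 2*b1 + b2 with b1, b2 in {-1, +1}. It is a generic decomposition, not necessarily the exact scheme of that paper.

    import numpy as np

    # 2-bit symmetric quantization levels.
    levels = np.array([-3, -1, 1, 3])

    def decompose(w):
        # Split a weight w in {-3, -1, 1, 3} into two binary values
        # b1, b2 in {-1, +1} with w = 2*b1 + b2, so one quantized layer
        # becomes a weighted sum of two binary (XNOR-friendly) branches.
        b1 = 1 if w > 0 else -1   # sign carries the "high" bit
        b2 = int(w) - 2 * b1      # remainder is guaranteed to be +/-1
        assert b2 in (-1, 1) and 2 * b1 + b2 == w
        return b1, b2

    for w in levels:
        print(int(w), decompose(w))
    # -3 -> (-1, -1), -1 -> (-1, 1), 1 -> (1, -1), 3 -> (1, 1)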