From Unimodal to Multimodal: improving sEMG-Based Pattern Recognition
via deep generative models
- URL: http://arxiv.org/abs/2308.04091v2
- Date: Mon, 18 Sep 2023 02:44:26 GMT
- Title: From Unimodal to Multimodal: improving sEMG-Based Pattern Recognition
via deep generative models
- Authors: Wentao Wei, Linyan Ren
- Abstract summary: Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy compared to unimodal HGR systems.
This paper proposes a novel generative approach to improve Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals.
- Score: 1.1477981286485912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: Multimodal hand gesture recognition (HGR) systems can achieve
higher recognition accuracy compared to unimodal HGR systems. However,
acquiring multimodal gesture recognition data typically requires users to wear
additional sensors, thereby increasing hardware costs. Methods: This paper
proposes a novel generative approach to improve Surface Electromyography
(sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals.
Specifically, we trained a deep generative model based on the intrinsic
correlation between forearm sEMG signals and forearm IMU signals to generate
virtual forearm IMU signals from the input forearm sEMG signals at first.
Subsequently, the sEMG signals and virtual IMU signals were fed into a
multimodal Convolutional Neural Network (CNN) model for gesture recognition.
Results: We conducted evaluations on six databases, including five publicly
available databases and our collected database comprising 28 subjects
performing 38 gestures, containing both sEMG and IMU data. The results show
that our proposed approach significantly outperforms the sEMG-based unimodal
HGR approach (with increases of 2.15%-13.10%). Moreover, it achieves accuracy
levels closely matching those of multimodal HGR when using virtual Acceleration
(ACC) signals. Conclusion: This demonstrates that incorporating virtual IMU
signals, generated by deep generative models, can significantly improve the
accuracy of sEMG-based HGR. Significance: The proposed approach represents a
successful attempt to bridge the gap between unimodal HGR and multimodal HGR
without additional sensor hardware, which can help to promote further
development of natural and cost-effective myoelectric interfaces in the
biomedical engineering field.
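As a toy illustration of the two-stage pipeline described in the abstract (learn an sEMG-to-IMU mapping, then fuse real sEMG with the generated virtual signals for classification), the sketch below substitutes a linear least-squares regressor for the paper's deep generative model and a nearest-centroid classifier for its multimodal CNN. All data, dimensions, and names are synthetic stand-ins, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for windowed sEMG / ACC feature vectors and gesture labels.
n, d_emg, d_acc, n_classes = 300, 16, 6, 5
X_emg = rng.normal(size=(n, d_emg))
W_true = rng.normal(size=(d_emg, d_acc))
X_acc = X_emg @ W_true + 0.1 * rng.normal(size=(n, d_acc))  # correlated modality
y = rng.integers(0, n_classes, size=n)

# Stage 1: learn an sEMG -> virtual ACC mapping. Least squares stands in for
# the deep generative model used in the paper.
W, *_ = np.linalg.lstsq(X_emg, X_acc, rcond=None)
X_acc_virtual = X_emg @ W

# Stage 2: fuse real sEMG with virtual ACC and classify. A nearest-centroid
# rule stands in for the multimodal CNN.
X_fused = np.concatenate([X_emg, X_acc_virtual], axis=1)
centroids = np.stack([X_fused[y == c].mean(axis=0) for c in range(n_classes)])

def predict(x_fused):
    # Assign the class whose centroid is nearest in the fused feature space.
    return int(np.argmin(np.linalg.norm(centroids - x_fused, axis=1)))

preds = np.array([predict(x) for x in X_fused])
acc = (preds == y).mean()
```

The key point mirrored here is that the second modality is never measured at test time: it is synthesized from the sEMG input alone, so no extra sensor hardware is required.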
Related papers
- emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography [47.160223334501126]
emg2qwerty is a large-scale dataset of non-invasive electromyographic signals recorded at the wrists while touch typing on a QWERTY keyboard.
With 1,135 sessions spanning 108 users and 346 hours of recording, this is the largest such public dataset to date.
We show strong baseline performance on predicting key-presses using sEMG signals alone.
arXiv Detail & Related papers (2024-10-26T05:18:48Z) - FORS-EMG: A Novel sEMG Dataset for Hand Gesture Recognition Across Multiple Forearm Orientations [1.444899524297657]
The surface electromyography (sEMG) signal holds great potential for gesture recognition research and the development of robust prosthetic hands.
The sEMG signal is affected by physiological and dynamic factors such as forearm orientation, forearm displacement, and limb position.
In this paper, we propose a dataset of electrode sEMG signals for evaluating common daily-living hand gestures performed in three forearm orientations.
arXiv Detail & Related papers (2024-09-03T14:23:06Z) - SpGesture: Source-Free Domain-adaptive sEMG-based Gesture Recognition with Jaccard Attentive Spiking Neural Network [18.954398018873682]
Surface electromyography (sEMG) based gesture recognition offers a natural and intuitive interaction modality for wearable devices.
Existing methods often suffer from high computational latency and increased energy consumption.
We propose a novel SpGesture framework based on Spiking Neural Networks.
arXiv Detail & Related papers (2024-05-23T10:15:29Z) - G-MEMP: Gaze-Enhanced Multimodal Ego-Motion Prediction in Driving [71.9040410238973]
We focus on inferring the ego trajectory of a driver's vehicle using their gaze data.
We then develop G-MEMP, a novel multimodal ego-trajectory prediction network that combines GPS and video input with gaze data.
The results show that G-MEMP significantly outperforms state-of-the-art methods in both benchmarks.
arXiv Detail & Related papers (2023-12-13T23:06:30Z) - EMGTFNet: Fuzzy Vision Transformer to decode Upperlimb sEMG signals for
Hand Gestures Recognition [0.1611401281366893]
We propose a Vision Transformer (ViT) based architecture with a Fuzzy Neural Block (FNB) called EMGTFNet to perform Hand Gesture Recognition.
The accuracy of the proposed model is tested using the publicly available NinaPro database consisting of 49 different hand gestures.
arXiv Detail & Related papers (2023-09-23T18:55:26Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial
Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN)
CMMN consists of filtering the signals to adapt their power spectral density (PSD) to a Wasserstein barycenter estimated on the training data.
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains independent from the neural network architecture.
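The spectral-matching step summarized above can be sketched as follows. This toy version assumes the Wasserstein barycenter of (centered Gaussian) spectra reduces to squaring the mean of the square-root PSDs, and it applies a crude per-frame frequency-domain gain; it is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "training" signals with different overall power levels.
n_sig, n_samp, n_fft = 5, 2048, 256
signals = [rng.normal(size=n_samp) * (1.0 + i) for i in range(n_sig)]

def psd(x, n_fft):
    # Crude Welch-style PSD: average periodograms over non-overlapping frames.
    frames = x[: len(x) // n_fft * n_fft].reshape(-1, n_fft)
    return np.mean(np.abs(np.fft.rfft(frames, axis=1)) ** 2, axis=0)

psds = np.stack([psd(x, n_fft) for x in signals])
# Barycenter spectrum: square of the mean square-root PSD (assumed form).
bary = np.mean(np.sqrt(psds), axis=0) ** 2

def cmmn_filter(x, n_fft, bary):
    # Frequency-domain gain mapping the signal's PSD onto the barycenter PSD.
    h = np.sqrt(bary / (psd(x, n_fft) + 1e-12))
    X = np.fft.rfft(x.reshape(-1, n_fft), axis=1)
    return np.fft.irfft(X * h, n=n_fft, axis=1).ravel()

adapted = [cmmn_filter(x, n_fft, bary) for x in signals]
```

After filtering, every signal's estimated PSD coincides with the barycenter, so a downstream model sees spectrally aligned inputs regardless of the recording it came from.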
arXiv Detail & Related papers (2023-05-30T08:24:01Z) - HYDRA-HGR: A Hybrid Transformer-based Architecture for Fusion of
Macroscopic and Microscopic Neural Drive Information [11.443553761853856]
We propose a hybrid model that simultaneously extracts a set of temporal and spatial features at the microscopic level.
The proposed HYDRA-HGR framework achieves average accuracy of 94.86% for the 250 ms window size, which is 5.52% and 8.22% higher than that of the Macro and Micro paths, respectively.
arXiv Detail & Related papers (2022-10-27T02:23:27Z) - Decision Forest Based EMG Signal Classification with Low Volume Dataset
Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience.
We rely on a set of more elementary methods, such as applying random bounds to a signal, and aim to show the power these methods can carry in an online setting.
arXiv Detail & Related papers (2022-06-29T23:22:18Z) - SumGNN: Multi-typed Drug Interaction Prediction via Efficient Knowledge
Graph Summarization [64.56399911605286]
We propose SumGNN: knowledge summarization graph neural network, which is enabled by a subgraph extraction module.
SumGNN outperforms the best baseline by up to 5.54%, and the performance gain is particularly significant in low data relation types.
arXiv Detail & Related papers (2020-10-04T00:14:57Z) - Transfer Learning for sEMG-based Hand Gesture Classification using Deep
Learning in a Master-Slave Architecture [0.0]
The proposed work presents a novel sequential master-slave architecture consisting of deep neural networks (DNNs) for classifying signs from Indian Sign Language using signals recorded from multiple sEMG channels.
Adding synthetic data yields up to a 14% improvement in the conventional DNN and up to a 9% improvement in the master-slave network, with an average accuracy of 93.5%, supporting the suitability of the proposed approach.
arXiv Detail & Related papers (2020-04-27T01:16:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.