A Comparative Study of EMG- and IMU-based Gesture Recognition at the Wrist and Forearm
- URL: http://arxiv.org/abs/2512.07997v1
- Date: Mon, 08 Dec 2025 19:36:10 GMT
- Title: A Comparative Study of EMG- and IMU-based Gesture Recognition at the Wrist and Forearm
- Authors: Soroush Baghernezhad, Elaheh Mohammadreza, Vinicius Prado da Fonseca, Ting Zou, Xianta Jiang
- Abstract summary: IMU signals contain sufficient information to serve as the sole input sensor for static gesture recognition. Tendon-induced micro-movement captured by IMUs is a major contributor to static gesture recognition.
- Score: 3.990794855710089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gestures are an integral part of our daily interactions with the environment. Hand gesture recognition (HGR) is the process of interpreting human intent through various input modalities, such as visual data (images and videos) and bio-signals. Bio-signals are widely used in HGR due to their ability to be captured non-invasively via sensors placed on the arm. Among these, surface electromyography (sEMG), which measures the electrical activity of muscles, is the most extensively studied modality. However, less-explored alternatives such as inertial measurement units (IMUs) can provide complementary information on subtle muscle movements, which makes them valuable for gesture recognition. In this study, we investigate the potential of using IMU signals from different muscle groups to capture user intent. Our results demonstrate that IMU signals contain sufficient information to serve as the sole input sensor for static gesture recognition. Moreover, we compare different muscle groups and assess the quality of pattern recognition achievable from each individually. We further found that tendon-induced micro-movement captured by IMUs is a major contributor to static gesture recognition. We believe that leveraging muscle micro-movement information can enhance the usability of prosthetic arms for amputees. This approach also offers new possibilities for hand gesture recognition in fields such as robotics, teleoperation, sign language interpretation, and beyond.
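To make the idea of static gesture recognition from IMU signals alone concrete, here is a minimal nearest-centroid sketch over windowed per-axis IMU statistics. The feature set, window length, gesture names, and synthetic data are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def imu_window_features(window: np.ndarray) -> np.ndarray:
    """Summarise one IMU window (samples x 6 axes: 3 accel + 3 gyro)
    with simple per-axis mean and standard deviation."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fit_centroids(windows, labels):
    """Nearest-centroid classifier: one mean feature vector per gesture."""
    feats = np.array([imu_window_features(w) for w in windows])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in sorted(set(labels))}

def predict(window, centroids):
    f = imu_window_features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Synthetic demo: two "gestures" distinguished by micro-movement amplitude.
rng = np.random.default_rng(0)
train = [rng.normal(0, 0.1, (200, 6)) for _ in range(10)] + \
        [rng.normal(0, 1.0, (200, 6)) for _ in range(10)]
labels = ["rest"] * 10 + ["fist"] * 10
centroids = fit_centroids(train, labels)
print(predict(rng.normal(0, 1.0, (200, 6)), centroids))  # → fist
```

Real pipelines would add filtering, overlapping windows, and a stronger classifier, but the windowed-feature structure is the same.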
Related papers
- Reconstruction of Surface EMG Signal using IMU data for Upper Limb Actions [0.7359962178534359]
This paper investigates the synthesis of normalized sEMG signals from 6-axis IMU data using a deep learning approach.
A Sliding-Window-Wave-Net model, based on dilated causal convolutions, was trained to map the IMU data to the sEMG signal.
The results show that the model successfully predicts the timing and general shape of muscle activations.
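The dilated causal convolution this summary refers to can be sketched from scratch in NumPy: the output at time t depends only on x[t], x[t-d], x[t-2d], ..., so stacking layers with growing dilations covers a long IMU history causally. The kernel, dilation, and input below are illustrative placeholders, not the authors' trained model.

```python
import numpy as np

def dilated_causal_conv(x: np.ndarray, kernel: np.ndarray, dilation: int) -> np.ndarray:
    """1-D dilated causal convolution (WaveNet-style building block)."""
    k = len(kernel)
    pad = (k - 1) * dilation              # left-pad so no future samples leak in
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(kernel[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

# Kernel [1, -1] with dilation 2 computes the lagged difference x[t] - x[t-2];
# the first two outputs fall back on the zero padding.
x = np.arange(8, dtype=float)
y = dilated_causal_conv(x, np.array([1.0, -1.0]), dilation=2)
print(y)  # → [0. 1. 2. 2. 2. 2. 2. 2.]
```

Doubling the dilation per layer (1, 2, 4, ...) grows the receptive field exponentially with depth, which is why such stacks suit long sensor sequences.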
arXiv Detail & Related papers (2025-11-21T12:26:33Z)
- CAST-Phys: Contactless Affective States Through Physiological Signals Database [74.28082880875368]
The lack of affective multi-modal datasets remains a major bottleneck in developing accurate emotion recognition systems.
We present the Contactless Affective States Through Physiological Signals Database (CAST-Phys), a novel high-quality dataset capable of remote physiological emotion recognition.
Our analysis highlights the crucial role of physiological signals in realistic scenarios where facial expressions alone may not provide sufficient emotional information.
arXiv Detail & Related papers (2025-07-08T15:20:24Z)
- BrainOmni: A Brain Foundation Model for Unified EEG and MEG Signals [46.121056431476156]
This paper proposes BrainOmni, the first brain foundation model that generalises across heterogeneous EEG and MEG recordings.
Existing approaches typically rely on separate, modality- and dataset-specific models, which limits performance and cross-domain scalability.
A total of 1,997 hours of EEG and 656 hours of MEG data are curated and standardised from publicly available sources for pretraining.
arXiv Detail & Related papers (2025-05-18T14:07:14Z)
- emg2pose: A Large and Diverse Benchmark for Surface Electromyographic Hand Pose Estimation [12.566524562446467]
Reliable and always-available hand pose inference could yield new and intuitive control schemes for human-computer interactions.
Wearable wrist-based surface electromyography (sEMG) presents a promising alternative.
emg2pose is the largest publicly available dataset of high-quality hand pose labels and wrist sEMG recordings.
arXiv Detail & Related papers (2024-12-02T23:39:37Z)
- emg2qwerty: A Large Dataset with Baselines for Touch Typing using Surface Electromyography [47.160223334501126]
emg2qwerty is a large-scale dataset of non-invasive electromyographic signals recorded at the wrists while touch typing on a QWERTY keyboard.
With 1,135 sessions spanning 108 users and 346 hours of recording, this is the largest such public dataset to date.
We show strong baseline performance on predicting key-presses using sEMG signals alone.
arXiv Detail & Related papers (2024-10-26T05:18:48Z)
- Masked Video and Body-worn IMU Autoencoder for Egocentric Action Recognition [24.217068565936117]
We present a novel method for action recognition that integrates motion data from body-worn IMUs with egocentric video.
To model the complex relation of multiple IMU devices placed across the body, we exploit the collaborative dynamics in multiple IMU devices.
Experiments show our method can achieve state-of-the-art performance on multiple public datasets.
arXiv Detail & Related papers (2024-07-09T07:53:16Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Continuous Decoding of Daily-Life Hand Movements from Forearm Muscle Activity for Enhanced Myoelectric Control of Hand Prostheses [78.120734120667]
We introduce a novel method, based on a long short-term memory (LSTM) network, to continuously map forearm EMG activity onto hand kinematics.
Ours is the first reported work on the prediction of hand kinematics that uses this challenging dataset.
Our results suggest that the presented method is suitable for the generation of control signals for the independent and proportional actuation of the multiple DOFs of state-of-the-art hand prostheses.
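The continuous-decoding idea above, an LSTM run over a stream of EMG features, can be sketched with a single from-scratch cell in NumPy. The channel count, hidden size, and random weights are placeholders; a trained decoder would add a linear readout from the hidden state to hand kinematics.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step over an EMG feature vector x, updating the
    hidden state h and cell state c. All four gates are computed at once."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)           # input, forget, output, candidate
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_hid = 8, 4                        # e.g. 8 EMG channels, 4 hidden units
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(0, 1, (100, n_in)):   # 100 time steps of EMG features
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # → (4,)
```

Running the cell step-by-step like this is what makes the mapping continuous: every incoming EMG sample updates the state, so kinematics can be read out at every time step rather than per gesture.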
arXiv Detail & Related papers (2021-04-29T00:11:32Z)
- Multimodal Fusion of EMG and Vision for Human Grasp Intent Inference in Prosthetic Hand Control [11.400385533782204]
We present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, eye-gaze, and EMG from the forearm.
We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it.
arXiv Detail & Related papers (2021-04-08T17:01:19Z)
- Heterogeneous Hand Guise Classification Based on Surface Electromyographic Signals Using Multichannel Convolutional Neural Network [0.0]
Recent developments in the field of Machine Learning allow us to use EMG signals to teach machines the complex properties of human movements.
Modern machines are capable of detecting numerous human activities and distinguishing among them solely based on the EMG signals produced by those activities.
In this study, a novel classification method is described that employs a multichannel Convolutional Neural Network (CNN) to interpret surface EMG signals through the properties they exhibit in the power domain.
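A power-domain view of multichannel sEMG can be illustrated with a simple per-channel band-power feature computed from the FFT periodogram. The band edges and normalisation below are assumptions for illustration, not the paper's exact representation.

```python
import numpy as np

def channel_band_power(emg: np.ndarray, fs: float, band=(20.0, 150.0)) -> np.ndarray:
    """Per-channel signal power inside a frequency band.

    emg: array of shape (n_samples, n_channels); fs: sampling rate in Hz.
    Returns one in-band power value per channel."""
    n = emg.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(emg, axis=0)) ** 2 / n     # simple periodogram
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum(axis=0)

# Demo: channel 0 carries a 50 Hz tone inside the band, channel 1 a 300 Hz
# tone outside it, so only channel 0 shows large in-band power.
fs, n = 1000.0, 1000
t = np.arange(n) / fs
emg = np.stack([np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 300 * t)], axis=1)
power = channel_band_power(emg, fs)
print(power[0] > 10 * power[1])  # → True
```

Feeding such per-channel power features (or full spectrograms) into a multichannel CNN is one common way to let the network exploit frequency-domain structure.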
arXiv Detail & Related papers (2021-01-17T17:02:04Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- Video-based Remote Physiological Measurement via Cross-verified Feature Disentangling [121.50704279659253]
We propose a cross-verified feature disentangling strategy to disentangle the physiological features with non-physiological representations.
We then use the distilled physiological features for robust multi-task physiological measurements.
The disentangled features are finally used for the joint prediction of multiple physiological signals like average HR values and rPPG signals.
arXiv Detail & Related papers (2020-07-16T09:39:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.