EMG subspace alignment and visualization for cross-subject hand gesture
classification
- URL: http://arxiv.org/abs/2401.05386v1
- Date: Mon, 18 Dec 2023 14:32:29 GMT
- Title: EMG subspace alignment and visualization for cross-subject hand gesture
classification
- Authors: Martin Colot, Cédric Simar, Mathieu Petieau, Ana Maria Cebolla
Alvarez, Guy Cheron and Gianluca Bontempi
- Abstract summary: The paper discusses and analyses the challenge of cross-subject generalization thanks to an original dataset containing the EMG signals of 14 human subjects during hand gestures.
The experimental results show that, though an accurate generalization based on pooling multiple subjects is hardly achievable, it is possible to improve the cross-subject estimation by identifying a robust low-dimensional subspace for multiple subjects and aligning it to a target subject.
- Score: 0.125828876338076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electromyogram (EMG)-based hand gesture recognition systems are a promising
technology for human/machine interfaces. However, one of their main limitations
is the long calibration time that is typically required to handle new users.
The paper discusses and analyses the challenge of cross-subject generalization
thanks to an original dataset containing the EMG signals of 14 human subjects
during hand gestures. The experimental results show that, though an accurate
generalization based on pooling multiple subjects is hardly achievable, it is
possible to improve the cross-subject estimation by identifying a robust
low-dimensional subspace for multiple subjects and aligning it to a target
subject. A visualization of the subspace enables us to provide insights for the
improvement of cross-subject generalization with EMG signals.
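To make the alignment step concrete, here is a minimal sketch of a generic PCA-based subspace alignment procedure for cross-subject EMG features. It is a sketch under stated assumptions, not the paper's exact pipeline: the feature dimension, subspace size d, classifier, and synthetic data are illustrative.

```python
# Sketch: PCA-based subspace alignment from pooled source subjects to a target subject.
# Feature dimension (32), subspace size d, and the random data are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

X_src = rng.normal(size=(600, 32))        # pooled EMG feature windows from several subjects
y_src = rng.integers(0, 5, size=600)      # gesture labels for the pooled subjects
X_tgt = rng.normal(size=(200, 32))        # unlabeled feature windows from the new subject

d = 10                                    # subspace dimensionality (assumed)
B_src = PCA(n_components=d).fit(X_src).components_.T   # (32, d) pooled-subject basis
B_tgt = PCA(n_components=d).fit(X_tgt).components_.T   # (32, d) target-subject basis

M = B_src.T @ B_tgt                       # (d, d) alignment of the source basis to the target basis

Z_src = X_src @ B_src @ M                 # source features expressed in the aligned subspace
Z_tgt = X_tgt @ B_tgt                     # target features in their own subspace

clf = LogisticRegression(max_iter=1000).fit(Z_src, y_src)
pred_tgt = clf.predict(Z_tgt)             # gesture predictions for the new subject
```

The single d-by-d matrix M maps the pooled-subject basis onto the target subject's basis, so a classifier trained on the aligned source features can be applied to the new subject without a long labeled calibration session.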
Related papers
- FORS-EMG: A Novel sEMG Dataset for Hand Gesture Recognition Across Multiple Forearm Orientations [1.444899524297657]
Surface electromyography (sEMG) signals hold great potential in the research fields of gesture recognition and the development of robust prosthetic hands.
The sEMG signal is affected by physiological and dynamic factors such as forearm orientation, forearm displacement, and limb position.
In this paper, we propose a dataset of sEMG signals for evaluating common daily-living hand gestures performed in three forearm orientations.
arXiv Detail & Related papers (2024-09-03T14:23:06Z) - PhysMLE: Generalizable and Priors-Inclusive Multi-task Remote Physiological Measurement [24.424510759648072]
This paper presents an end-to-end Mixture of Low-rank Experts for multi-task remote Physiological measurement (PhysMLE).
PhysMLE is based on multiple low-rank experts with a novel router mechanism, enabling the model to adeptly handle both specifications and correlations within tasks.
For fair and comprehensive evaluation, the paper proposes a large-scale multi-task generalization benchmark named the Multi-Source Synsemantic Domain Generalization protocol.
arXiv Detail & Related papers (2024-05-10T02:36:54Z) - GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where the discrepancy between authentic and manipulated images is increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z) - Improving Vision Anomaly Detection with the Guidance of Language
Modality [64.53005837237754]
This paper tackles the challenges of the vision modality from a multimodal point of view.
We propose Cross-modal Guidance (CMG) to tackle the redundant information issue and sparse space issue.
To learn a more compact latent space for the vision anomaly detector, CMLE learns a correlation structure matrix from the language modality.
arXiv Detail & Related papers (2023-10-04T13:44:56Z) - A Deep Learning Sequential Decoder for Transient High-Density
Electromyography in Hand Gesture Recognition Using Subject-Embedded Transfer
Learning [11.170031300110315]
Hand gesture recognition (HGR) has gained significant attention due to the increasing use of AI-powered human-computer interfaces.
These interfaces have a range of applications, including the control of extended reality, agile prosthetics, and exoskeletons.
arXiv Detail & Related papers (2023-09-23T05:32:33Z) - A Multi-label Classification Approach to Increase Expressivity of
EMG-based Gesture Recognition [4.701158597171363]
The aim of this study is to efficiently increase the expressivity of surface electromyography-based (sEMG) gesture recognition systems.
We use a problem-transformation approach in which actions are split into two biomechanically independent components (a small illustrative sketch follows this list of related papers).
arXiv Detail & Related papers (2023-09-13T20:21:41Z) - The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose estimation by a clear margin.
arXiv Detail & Related papers (2022-03-10T23:35:00Z) - Subject Independent Emotion Recognition using EEG Signals Employing
Attention Driven Neural Networks [2.76240219662896]
A novel deep learning framework capable of subject-independent emotion recognition is presented.
A convolutional neural network (CNN) with an attention framework is used to perform the task.
The proposed approach has been validated using publicly available datasets.
arXiv Detail & Related papers (2021-06-07T09:41:15Z) - TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z) - Detecting Human-Object Interaction via Fabricated Compositional Learning [106.37536031160282]
Human-Object Interaction (HOI) detection is a fundamental task for high-level scene understanding.
Humans have an extremely powerful compositional perception ability to recognize rare or unseen HOI samples.
We propose Fabricated Compositional Learning (FCL) to address the problem of open long-tailed HOI detection.
arXiv Detail & Related papers (2021-03-15T08:52:56Z) - Novel Human-Object Interaction Detection via Adversarial Domain
Generalization [103.55143362926388]
We study the problem of novel human-object interaction (HOI) detection, aiming at improving the generalization ability of the model to unseen scenarios.
The challenge mainly stems from the large compositional space of objects and predicates, which leads to the lack of sufficient training data for all the object-predicate combinations.
We propose a unified framework of adversarial domain generalization to learn object-invariant features for predicate prediction.
arXiv Detail & Related papers (2020-05-22T22:02:56Z)
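As referenced in the multi-label EMG entry above, here is a minimal sketch of the problem-transformation idea: each composite gesture is predicted as a pair of independently classified components. The component names ("wrist", "grasp"), the classifier choice, and the synthetic data are assumptions for illustration, not the paper's implementation.

```python
# Sketch: problem transformation for multi-label sEMG gesture recognition.
# Component names and data are hypothetical; only the decomposition idea is illustrated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

X = rng.normal(size=(400, 32))            # sEMG feature windows
y_wrist = rng.integers(0, 3, size=400)    # component 1: e.g. wrist state (assumed)
y_grasp = rng.integers(0, 4, size=400)    # component 2: e.g. grasp type (assumed)

# One classifier per biomechanically independent component.
clf_wrist = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_wrist)
clf_grasp = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_grasp)

X_new = rng.normal(size=(5, 32))
# A composite gesture is the pair of component predictions, so 3 x 4 = 12
# expressible gestures come from only 3 + 4 trained output states.
composite = list(zip(clf_wrist.predict(X_new), clf_grasp.predict(X_new)))
print(composite)
```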
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.