HYDRA-HGR: A Hybrid Transformer-based Architecture for Fusion of
Macroscopic and Microscopic Neural Drive Information
- URL: http://arxiv.org/abs/2211.02619v1
- Date: Thu, 27 Oct 2022 02:23:27 GMT
- Title: HYDRA-HGR: A Hybrid Transformer-based Architecture for Fusion of
Macroscopic and Microscopic Neural Drive Information
- Authors: Mansooreh Montazerin, Elahe Rahimian, Farnoosh Naderkhani, S. Farokh
Atashzar, Hamid Alinejad-Rokny, Arash Mohammadi
- Abstract summary: We propose a hybrid model that simultaneously extracts a set of temporal and spatial features at the macroscopic and microscopic levels.
The proposed HYDRA-HGR framework achieves average accuracy of 94.86% for the 250 ms window size, which is 5.52% and 8.22% higher than that of the Macro and Micro paths, respectively.
- Score: 11.443553761853856
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Development of advanced surface Electromyogram (sEMG)-based Human-Machine
Interface (HMI) systems is of paramount importance to pave the way towards the
emergence of futuristic Cyber-Physical-Human (CPH) worlds. In this context, the
main focus of recent literature has been on the development of different Deep Neural
Network (DNN)-based architectures that perform Hand Gesture Recognition (HGR)
at a macroscopic level (i.e., directly from sEMG signals). At the same time,
advancements in acquisition of High-Density sEMG signals (HD-sEMG) have
resulted in a significant surge of interest in sEMG decomposition techniques to
extract microscopic neural drive information. However, due to complexities of
sEMG decomposition and added computational overhead, HGR at microscopic level
is less explored than its aforementioned DNN-based counterparts. In this
regard, we propose the HYDRA-HGR framework, which is a hybrid model that
simultaneously extracts a set of temporal and spatial features through its two
independent Vision Transformer (ViT)-based parallel architectures (the so-called
Macro and Micro paths). The Macro path is trained directly on the
pre-processed HD-sEMG signals, while the Micro path is fed with the peak-to-peak
(p-to-p) values of the extracted Motor Unit Action Potentials (MUAPs) of each source.
Extracted features at macroscopic and microscopic levels are then coupled via a
Fully Connected (FC) fusion layer. We evaluate the proposed hybrid HYDRA-HGR
framework on a recently released HD-sEMG dataset, and show that it
significantly outperforms its stand-alone counterparts. The proposed HYDRA-HGR
framework achieves average accuracy of 94.86% for the 250 ms window size, which
is 5.52% and 8.22% higher than that of the Macro and Micro paths, respectively.
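The two-path fusion described in the abstract can be sketched in a few lines. Note the heavy simplification: the paper's ViT-based Macro and Micro paths are replaced here by random linear projections, and all dimensions (channel count, window length, number of motor-unit sources, gesture classes, embedding sizes) are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical dimensions (illustration only): 64 HD-sEMG channels,
# 50 samples per window, 12 decomposed motor-unit sources, 10 gestures.
n_channels, n_samples, n_sources, n_classes = 64, 50, 12, 10
d_macro, d_micro = 128, 32  # assumed per-path embedding sizes

# Stand-in feature extractors: the paper uses two ViT-based paths;
# random linear projections take their place in this sketch.
W_macro = rng.standard_normal((n_channels * n_samples, d_macro)) * 0.01
W_micro = rng.standard_normal((n_sources, d_micro)) * 0.01
W_fuse = rng.standard_normal((d_macro + d_micro, n_classes)) * 0.01

def hydra_hgr_forward(hd_semg_window, muap_p2p):
    """Two parallel paths coupled by a fully connected fusion layer."""
    macro_feat = hd_semg_window.reshape(-1) @ W_macro   # Macro path: raw signals
    micro_feat = muap_p2p @ W_micro                     # Micro path: MUAP p-to-p values
    fused = np.concatenate([macro_feat, micro_feat])    # FC fusion input
    return softmax(fused @ W_fuse)                      # gesture class probabilities

window = rng.standard_normal((n_channels, n_samples))   # pre-processed HD-sEMG window
p2p = rng.standard_normal(n_sources)                    # extracted MUAP p-to-p values
probs = hydra_hgr_forward(window, p2p)
```

The design point the sketch captures is that fusion happens at the feature level: each path produces its own embedding, and only the concatenated vector passes through the shared FC classifier.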
Related papers
- CryoFM: A Flow-based Foundation Model for Cryo-EM Densities [50.291974465864364]
We present CryoFM, a foundation model designed as a generative model, learning the distribution of high-quality density maps.
Built on flow matching, CryoFM is trained to accurately capture the prior distribution of biomolecular density maps.
arXiv Detail & Related papers (2024-10-11T08:53:58Z) - Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network combining convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - Geodesic Optimization for Predictive Shift Adaptation on EEG data [53.58711912565724]
Domain adaptation methods struggle when distribution shifts occur simultaneously in $X$ and $y$.
This paper proposes a novel method termed Geodesic Optimization for Predictive Shift Adaptation (GOPSA) to address test-time multi-source DA.
GOPSA has the potential to combine the advantages of mixed-effects modeling with machine learning for biomedical applications of EEG.
arXiv Detail & Related papers (2024-07-04T12:15:42Z) - Genetic InfoMax: Exploring Mutual Information Maximization in
High-Dimensional Imaging Genetics Studies [50.11449968854487]
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework Genetic InfoMax (GIM) to address the specific challenges of GWAS.
arXiv Detail & Related papers (2023-09-26T03:59:21Z) - From Unimodal to Multimodal: improving sEMG-Based Pattern Recognition
via deep generative models [1.1477981286485912]
Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy compared to unimodal HGR systems.
This paper proposes a novel generative approach to improve Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals.
arXiv Detail & Related papers (2023-08-08T07:15:23Z) - AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context
Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information, to the extent that it can achieve the same performance with as little as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z) - Transformer-based Hand Gesture Recognition via High-Density EMG Signals:
From Instantaneous Recognition to Fusion of Motor Unit Spike Trains [11.443553761853856]
The paper proposes a compact deep learning framework referred to as the CT-HGR, which employs a vision transformer network to conduct hand gesture recognition.
CT-HGR can be trained from scratch without any need for transfer learning and can simultaneously extract both temporal and spatial features of HD-sEMG data.
The framework achieves accuracy of 89.13% for instantaneous recognition based on a single frame of HD-sEMG image.
arXiv Detail & Related papers (2022-11-29T23:32:08Z) - Light-weighted CNN-Attention based architecture for Hand Gesture
Recognition via ElectroMyography [19.51045409936039]
We propose a light-weighted hybrid architecture (HDCAM) based on Convolutional Neural Network (CNN) and attention mechanism.
The proposed HDCAM model with 58,441 parameters reached new state-of-the-art (SOTA) performance with 82.91% and 81.28% accuracy on window sizes of 300 ms and 200 ms, respectively, for classifying 17 hand gestures.
arXiv Detail & Related papers (2022-10-27T02:12:07Z) - ConTraNet: A single end-to-end hybrid network for EEG-based and
EMG-based human machine interfaces [0.0]
We introduce a single hybrid model called ConTraNet, which is based on CNN and Transformer architectures.
ConTraNet robustly learns distinct features from different HMI paradigms and generalizes well compared to current state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-21T18:55:50Z) - Graph Neural Networks for Microbial Genome Recovery [64.91162205624848]
We propose to use Graph Neural Networks (GNNs) to leverage the assembly graph when learning contig representations for metagenomic binning.
Our method, VaeG-Bin, combines variational autoencoders for learning latent representations of the individual contigs, with GNNs for refining these representations by taking into account the neighborhood structure of the contigs in the assembly graph.
arXiv Detail & Related papers (2022-04-26T12:49:51Z) - TraHGR: Transformer for Hand Gesture Recognition via ElectroMyography [19.51045409936039]
We propose a hybrid framework based on the Transformer for Hand Gesture Recognition (TraHGR)
TraHGR consists of two parallel paths followed by a linear layer that acts as a fusion center to integrate the advantage of each module.
We have conducted an extensive set of experiments to test and validate the proposed TraHGR architecture.
arXiv Detail & Related papers (2022-03-28T15:43:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.