EEG-EMG FAConformer: Frequency Aware Conv-Transformer for the fusion of EEG and EMG
- URL: http://arxiv.org/abs/2409.18973v1
- Date: Thu, 12 Sep 2024 14:08:56 GMT
- Title: EEG-EMG FAConformer: Frequency Aware Conv-Transformer for the fusion of EEG and EMG
- Authors: ZhengXiao He, Minghong Cai, Letian Li, Siyuan Tian, Ren-Jie Dai
- Abstract summary: Motor pattern recognition paradigms are the main forms of Brain-Computer Interfaces aimed at motor function rehabilitation.
Electromyography (EMG) signals are the most direct physiological signals that can assess the execution of movements.
We introduce a multimodal motion pattern recognition algorithm for EEG and EMG signals: EEG-EMG FAConformer.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motor pattern recognition paradigms are the main forms of Brain-Computer Interfaces (BCI) aimed at motor function rehabilitation and are the most easily promoted applications. In recent years, many researchers have suggested encouraging patients to perform real motor execution simultaneously in MI-based BCI rehabilitation training systems. Electromyography (EMG) signals are the most direct physiological signals that can assess the execution of movements. Multimodal signal fusion is of practical significance for decoding motor patterns. We therefore introduce a multimodal motion pattern recognition algorithm for EEG and EMG signals: EEG-EMG FAConformer, a method with several attention modules correlated with temporal and frequency information for motor pattern recognition. In particular, we devise a frequency band attention module to encode EEG information accurately and efficiently. Moreover, we develop modules such as the Multi-Scale Fusion Module, the Independent Channel-Specific Convolution Module (ICSCM), and the Fuse Module, which effectively eliminate irrelevant information in EEG and EMG signals and fully exploit hidden dynamics. Extensive experiments show that EEG-EMG FAConformer surpasses existing methods on the Jeong2020 dataset, demonstrating outstanding performance, high robustness, and impressive stability.
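The abstract highlights a frequency band attention module for EEG encoding. As a rough illustration of the general idea (a hypothetical sketch based only on the abstract, not the authors' code), one can split an EEG epoch into canonical frequency bands, score each band, and re-weight the bands with softmax attention before recombining:

```python
import numpy as np

# Hypothetical sketch of frequency band attention (not the paper's implementation):
# decompose a signal into canonical EEG bands via FFT masking, score each band
# by its energy, and recombine the bands with softmax attention weights.

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_decompose(x, fs):
    """Split a 1-D signal into band-limited components via FFT masking."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    comps = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        comps.append(np.fft.irfft(spectrum * mask, n=len(x)))
    return np.stack(comps)                       # shape: (n_bands, n_samples)

def band_attention(x, fs):
    comps = band_decompose(x, fs)
    energy = (comps ** 2).mean(axis=1)           # per-band energy score
    w = np.exp(energy - energy.max())
    w /= w.sum()                                 # softmax attention weights
    return (w[:, None] * comps).sum(axis=0), w   # re-weighted signal

fs = 250                                         # typical EEG sampling rate
t = np.arange(fs * 2) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))  # alpha-dominated
y, w = band_attention(x, fs)
```

In a learned version the attention weights would come from trainable parameters rather than raw band energy; the energy score here only stands in for that learned scoring function.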
Related papers
- EEGMamba: Bidirectional State Space Model with Mixture of Experts for EEG Multi-task Classification [1.4004287903552533]
We introduce EEGMamba, the first universal EEG classification network to truly implement multi-task learning for EEG applications.
EEGMamba seamlessly integrates the Spatio-Temporal-Adaptive (ST-adaptive) module, bidirectional Mamba, and Mixture of Experts (MoE) into a unified framework.
We evaluate our model on eight publicly available EEG datasets, and the experimental results demonstrate its superior performance in four types of tasks.
arXiv Detail & Related papers (2024-07-20T11:15:47Z) - SCDM: Unified Representation Learning for EEG-to-fNIRS Cross-Modal Generation in MI-BCIs [6.682531937245544]
Hybrid motor brain-computer interfaces (MI-BCIs) integrate both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) signals.
Simultaneously recording EEG and fNIRS signals is highly challenging due to the difficulty of co-locating both types of sensors on the same scalp.
This study proposes the spatial-temporal controlled diffusion imagery model (SCDM) as a framework for cross-modal generation from EEG to fNIRS.
arXiv Detail & Related papers (2024-07-01T13:37:23Z) - MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild [81.32127423981426]
Multimodal emotion recognition based on audio and video data is important for real-world applications.
Recent methods have focused on exploiting advances of self-supervised learning (SSL) for pre-training of strong multimodal encoders.
We propose a different perspective on the problem and investigate the advancement of multimodal DFER performance by adapting SSL-pre-trained disjoint unimodal encoders.
arXiv Detail & Related papers (2024-04-13T13:39:26Z) - vEEGNet: learning latent representations to reconstruct EEG raw data via variational autoencoders [3.031375888004876]
We propose vEEGNet, a DL architecture with two modules, i.e., an unsupervised module based on variational autoencoders to extract a latent representation of the data, and a supervised module based on a feed-forward neural network to classify different movements.
We show state-of-the-art classification performance, and the ability to reconstruct both low-frequency and middle-range components of the raw EEG.
arXiv Detail & Related papers (2023-11-16T19:24:40Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - From Unimodal to Multimodal: improving sEMG-Based Pattern Recognition via deep generative models [1.1477981286485912]
Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy compared to unimodal HGR systems.
This paper proposes a novel generative approach to improve Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals.
arXiv Detail & Related papers (2023-08-08T07:15:23Z) - Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN)
CMMN filters the signals in order to adapt their power spectral density (PSD) to a Wasserstein barycenter estimated on training data.
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains independent from the neural network architecture.
arXiv Detail & Related papers (2023-05-30T08:24:01Z) - Multi-Scale Control Signal-Aware Transformer for Motion Synthesis without Phase [72.01862340497314]
We propose a task-agnostic deep learning method, namely Multi-scale Control Signal-aware Transformer (MCS-T)
MCS-T is able to successfully generate motions comparable to those generated by the methods using auxiliary information.
arXiv Detail & Related papers (2023-03-03T02:56:44Z) - Conditional Generative Models for Simulation of EMG During Naturalistic Movements [45.698312905115955]
We present a conditional generative neural network trained adversarially to generate motor unit activation potential waveforms.
We demonstrate the ability of such a model to predictively interpolate between a much smaller number of numerical model outputs with high accuracy.
arXiv Detail & Related papers (2022-11-03T14:49:02Z) - Massive MIMO As an Extreme Learning Machine [83.12538841141892]
A massive multiple-input multiple-output (MIMO) system with low-resolution analog-to-digital converters (ADCs) forms a natural extreme learning machine (ELM).
By adding random biases to the received signals and optimizing the ELM output weights, the system can effectively tackle hardware impairments.
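The ELM recipe this blurb refers to is simple: a fixed random hidden layer (random weights and biases) feeds a nonlinearity, and only the output weights are trained, in closed form by least squares. A generic toy sketch (illustrative only, not the paper's MIMO receiver):

```python
import numpy as np

# Generic extreme learning machine sketch (not the MIMO receiver from the paper):
# a fixed random hidden layer with random biases, and output weights solved in
# closed form by least squares -- the only trained parameters.

rng = np.random.default_rng(1)

def elm_fit(X, Y, n_hidden=64):
    W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random weights
    b = rng.standard_normal(n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                      # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.standard_normal((200, 4))
Y = X[:, :1] ** 2 + X[:, 1:2]                         # toy nonlinear target
W, b, beta = elm_fit(X, Y)
err = np.mean((elm_predict(X, W, b, beta) - Y) ** 2)
```

In the MIMO setting described above, the quantized received signals play the role of the random hidden-layer outputs, so only the linear readout needs to be learned.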
arXiv Detail & Related papers (2020-07-01T04:15:20Z) - Motor Imagery Classification of Single-Arm Tasks Using Convolutional Neural Network based on Feature Refining [5.620334754517149]
Motor imagery (MI) is commonly used for recovery or rehabilitation of motor functions due to its signal origin.
In this study, we proposed a band-power feature refining convolutional neural network (BFR-CNN) to achieve high classification accuracy.
arXiv Detail & Related papers (2020-02-04T04:36:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.