Priming Cross-Session Motor Imagery Classification with A Universal Deep
Domain Adaptation Framework
- URL: http://arxiv.org/abs/2202.09559v2
- Date: Wed, 26 Jul 2023 01:36:38 GMT
- Title: Priming Cross-Session Motor Imagery Classification with A Universal Deep
Domain Adaptation Framework
- Authors: Zhengqing Miao, Xin Zhang, Carlo Menon, Yelong Zheng, Meirong Zhao,
Dong Ming
- Abstract summary: Motor imagery (MI) is a common brain computer interface (BCI) paradigm.
We propose a Siamese deep domain adaptation (SDDA) framework for cross-session MI classification based on mathematical models in domain adaptation theory.
The proposed framework can be easily applied to most existing artificial neural networks without altering the network structure.
- Score: 3.6824205556465834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motor imagery (MI) is a common brain computer interface (BCI)
paradigm. Because EEG is non-stationary and has a low signal-to-noise ratio,
classifying motor imagery tasks of the same participant across different EEG
recording sessions is generally challenging, as the EEG data distribution may
vary tremendously between acquisition sessions. Although it is intuitive to
treat cross-session MI classification as a domain adaptation problem, the
rationale and a feasible approach have not been elucidated. In this paper, we
propose a Siamese deep domain adaptation (SDDA) framework for cross-session MI
classification based on mathematical models in domain adaptation theory. The
proposed framework can be easily applied to most existing artificial neural
networks without altering the network structure, which gives our method great
flexibility and transferability. In the proposed framework, domain invariants
were first constructed jointly with channel normalization and Euclidean
alignment. Embedding features from the source and target domains were then
mapped into a Reproducing Kernel Hilbert Space (RKHS) and aligned accordingly.
A cosine-based center loss was also integrated into the framework to improve
the generalizability of SDDA. The proposed framework was validated with two
classic and popular convolutional neural networks from the BCI research field
(EEGNet and ConvNet) on two public MI-EEG datasets (BCI Competition IV IIA and
IIB). Compared to the vanilla EEGNet and ConvNet, the proposed SDDA framework
boosted MI classification accuracy by 15.2% and 10.2%, respectively, on the
IIA dataset, and by 5.5% and 4.2% on the IIB dataset. The final MI
classification accuracy reached 82.01% on the IIA dataset and 87.52% on IIB,
outperforming the state-of-the-art methods in the literature.
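The three components the abstract names (Euclidean alignment of trials, RKHS-based alignment of embeddings, and a cosine-based center loss) can be sketched with standard formulations. The following is a minimal NumPy sketch, not the paper's implementation: the function names are illustrative, the RKHS alignment is shown as a biased RBF-kernel MMD estimator (a common choice for aligning distributions in an RKHS, not necessarily the paper's exact loss), and arrays follow the usual (trials, channels, samples) EEG convention.

```python
import numpy as np

def euclidean_alignment(trials):
    """Whiten EEG trials so their mean spatial covariance becomes the identity.

    trials: array of shape (n_trials, n_channels, n_samples).
    """
    covs = np.stack([t @ t.T / t.shape[1] for t in trials])
    ref = covs.mean(axis=0)
    # Inverse matrix square root of the reference covariance via eigendecomposition.
    vals, vecs = np.linalg.eigh(ref)
    ref_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return np.stack([ref_inv_sqrt @ t for t in trials])

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between embedding batches X, Y of shape (n, d)."""
    def kernel(A, B):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq_dists)
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()

def cosine_center_loss(features, labels, centers):
    """Mean (1 - cosine similarity) between each embedding and its class center."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return float((1.0 - (f * c[labels]).sum(axis=1)).mean())
```

After alignment, the mean spatial covariance of the trials is (numerically) the identity, which is the session invariance Euclidean alignment targets. In training, the MMD term would be computed on the Siamese network's embeddings of source-session and target-session batches, and the center loss on labeled source embeddings.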
Related papers
- Enhancing Motor Imagery Decoding in Brain Computer Interfaces using
Riemann Tangent Space Mapping and Cross Frequency Coupling [5.860347939369221]
Motor Imagery (MI) serves as a crucial experimental paradigm within the realm of Brain Computer Interfaces (BCIs)
This paper introduces a novel approach to enhance the representation quality and decoding capability pertaining to MI features.
A lightweight convolutional neural network is employed for further feature extraction and classification, operating under the joint supervision of cross-entropy and center loss.
arXiv Detail & Related papers (2023-10-29T23:37:47Z)
- Bidirectional Domain Mixup for Domain Adaptive Semantic Segmentation [73.3083304858763]
This paper systematically studies the impact of mixup on the domain adaptive semantic segmentation task.
Specifically, we achieve domain mixup in two steps: cut and paste.
We provide extensive ablation experiments to empirically verify our main components of the framework.
arXiv Detail & Related papers (2023-03-17T05:22:44Z)
- IDA: Informed Domain Adaptive Semantic Segmentation [51.12107564372869]
We propose a Domain Informed Adaptation (IDA) model, a self-training framework that mixes the data based on class-level segmentation performance.
In our IDA model, the class-level performance is tracked by an expected confidence score (ECS) and we then use a dynamic schedule to determine the mixing ratio for data in different domains.
Our proposed method is able to outperform the state-of-the-art UDA-SS method by a margin of 1.1 mIoU in the adaptation of GTA-V to Cityscapes and of 0.9 mIoU in the adaptation of SYNTHIA to City
arXiv Detail & Related papers (2023-03-05T18:16:34Z)
- Self-semantic contour adaptation for cross modality brain tumor segmentation [13.260109561599904]
We propose exploiting low-level edge information to facilitate the adaptation as a precursor task.
The precise contour then provides spatial information to guide the semantic adaptation.
We evaluate our framework on the BraTS2018 database for cross-modality segmentation of brain tumors.
arXiv Detail & Related papers (2022-01-13T15:16:55Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of semi-supervised learning (SSL) and domain adaptation (DA).
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- CNN-based Approaches For Cross-Subject Classification in Motor Imagery: From The State-of-The-Art to DynamicNet [0.2936007114555107]
Motor imagery (MI)-based brain-computer interface (BCI) systems are being increasingly employed to provide alternative means of communication and control.
Accurately classifying MI from brain signals is essential to obtain reliable BCI systems.
Deep learning approaches have started to emerge as valid alternatives to standard machine learning techniques.
arXiv Detail & Related papers (2021-05-17T14:57:13Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Common Spatial Generative Adversarial Networks based EEG Data Augmentation for Cross-Subject Brain-Computer Interface [4.8276709243429]
Cross-subject application of EEG-based brain-computer interface (BCI) has always been limited by large individual difference and complex characteristics that are difficult to perceive.
We propose a cross-subject EEG classification framework with a generative adversarial network (GAN)-based method named common spatial GAN (CS-GAN).
Our framework provides a promising way to deal with the cross-subject problem and promote the practical application of BCI.
arXiv Detail & Related papers (2021-02-08T10:37:03Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification, as it takes the raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- CM-NAS: Cross-Modality Neural Architecture Search for Visible-Infrared Person Re-Identification [102.89434996930387]
VI-ReID aims to match cross-modality pedestrian images, breaking through the limitation of single-modality person ReID in dark environments.
Existing works manually design various two-stream architectures to separately learn modality-specific and modality-sharable representations.
We propose a novel method, named Cross-Modality Neural Architecture Search (CM-NAS).
arXiv Detail & Related papers (2021-01-21T07:07:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.