Dual-TSST: A Dual-Branch Temporal-Spectral-Spatial Transformer Model for EEG Decoding
- URL: http://arxiv.org/abs/2409.03251v1
- Date: Thu, 5 Sep 2024 05:08:43 GMT
- Title: Dual-TSST: A Dual-Branch Temporal-Spectral-Spatial Transformer Model for EEG Decoding
- Authors: Hongqi Li, Haodong Zhang, Yitong Chen
- Abstract summary: We propose a novel decoding network built around a dual-branch temporal-spectral-spatial transformer (Dual-TSST).
The proposed Dual-TSST performs strongly across tasks, achieving average classification accuracies of 80.67% on BCI IV 2a, 88.64% on BCI IV 2b, and 96.65% on SEED.
This study provides a new approach to high-performance EEG decoding and holds great potential for future CNN-Transformer based applications.
- Score: 2.0721229324537833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The decoding of electroencephalography (EEG) signals provides convenient access to user intentions and plays an important role in human-machine interaction. To extract sufficiently rich characteristics from multichannel EEG, this study proposes a novel decoding network built around a dual-branch temporal-spectral-spatial transformer (Dual-TSST). Specifically, using convolutional neural networks (CNNs) on separate branches, the network first extracts the temporal-spatial features of the raw EEG and the temporal-spectral-spatial features of time-frequency data obtained by wavelet transformation. These features are then integrated by a feature fusion block, passed to a transformer that captures the global long-range dependencies of the non-stationary EEG, and classified via global average pooling and multi-layer perceptron blocks. To evaluate the efficacy of the approach, experiments are conducted on three publicly available datasets, BCI IV 2a, BCI IV 2b, and SEED, with head-to-head comparisons against more than ten state-of-the-art methods. Dual-TSST performs strongly across these tasks, achieving average classification accuracies of 80.67% on BCI IV 2a, 88.64% on BCI IV 2b, and 96.65% on SEED. Extensive ablation experiments against a comparative baseline model further show that each module of the proposed method contributes to the decoding performance. This study provides a new approach to high-performance EEG decoding and holds great potential for future CNN-Transformer based applications.
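The pipeline described in the abstract (two CNN branches, feature fusion, a transformer encoder, then global average pooling and an MLP head) can be illustrated with a short PyTorch sketch. All layer sizes, kernel shapes, the concatenation-based fusion, and the `DualTSST` name below are assumptions for illustration, not the authors' released implementation:

```python
# Minimal sketch of a dual-branch CNN + transformer EEG classifier.
# Shapes, kernel sizes, and the fusion strategy are illustrative guesses.
import torch
import torch.nn as nn

class DualTSST(nn.Module):
    def __init__(self, n_channels=22, n_scales=16, d_model=64, n_classes=4):
        super().__init__()
        # Branch 1: temporal then spatial convolutions on raw EEG (B, 1, C, T).
        self.raw_branch = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12)),  # temporal
            nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1)),     # spatial
            nn.BatchNorm2d(d_model), nn.ELU(), nn.AvgPool2d((1, 8)),
        )
        # Branch 2: same pattern on the wavelet time-frequency tensor
        # (B, n_scales, C, T), treating wavelet scales as input channels.
        self.tf_branch = nn.Sequential(
            nn.Conv2d(n_scales, d_model, kernel_size=(1, 25), padding=(0, 12)),
            nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(d_model), nn.ELU(), nn.AvgPool2d((1, 8)),
        )
        # Fusion + transformer over the concatenated token sequence.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, n_classes))

    def forward(self, x_raw, x_tf):
        # x_raw: (B, 1, C, T); x_tf: (B, n_scales, C, T)
        t1 = self.raw_branch(x_raw).squeeze(2).transpose(1, 2)  # (B, T1, d)
        t2 = self.tf_branch(x_tf).squeeze(2).transpose(1, 2)    # (B, T2, d)
        tokens = torch.cat([t1, t2], dim=1)   # feature fusion by concatenation
        z = self.transformer(tokens)          # global long-range dependencies
        return self.head(z.mean(dim=1))       # global average pooling + MLP

model = DualTSST()
logits = model(torch.randn(8, 1, 22, 1000), torch.randn(8, 16, 22, 1000))
print(logits.shape)  # torch.Size([8, 4])
```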
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
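A minimal sketch of the multi-level-spike idea, i.e. letting each spike carry a few payload bits instead of a single binary event. The `multilevel_spikes` helper, its threshold, and the residual-quantization scheme are hypothetical, not the paper's encoder:

```python
# Sketch: attaching a k-bit payload to each spike instead of a binary event.
# The quantization scheme and threshold are illustrative assumptions.
import torch

def multilevel_spikes(membrane: torch.Tensor, threshold: float = 1.0, bits: int = 2):
    """Map membrane potentials to spikes whose amplitude carries `bits` of payload.

    A plain SNN would emit 1 wherever membrane >= threshold; here the
    supra-threshold residual is quantized to 2**bits levels, so each spike
    conveys extra information at the cost of a few bits per event.
    """
    levels = 2 ** bits
    fired = (membrane >= threshold).float()
    # Quantize how far above threshold the neuron is (clamped to one unit).
    residual = ((membrane - threshold).clamp(0.0, 1.0) * (levels - 1)).round()
    return fired * (1.0 + residual)  # 0 = no spike, 1..levels = graded spike

v = torch.tensor([0.3, 1.0, 1.4, 2.5])
print(multilevel_spikes(v))  # tensor([0., 1., 2., 4.])
```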
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network via the combination of convolution neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - A Temporal-Spectral Fusion Transformer with Subject-Specific Adapter for Enhancing RSVP-BCI Decoding [15.000487099591776]
RSVP-based Brain-Computer Interface (BCI) is an efficient technology for target retrieval using electroencephalography (EEG) signals.
Traditional decoding methods rely on a substantial amount of training data from new test subjects.
We propose a subject-specific adapter to rapidly transfer the knowledge of the model trained on data from existing subjects to decode data from new subjects.
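A minimal sketch of such subject-specific adaptation: a frozen pretrained encoder plus a small residual adapter trained only on the new subject's data. The `SubjectAdapter` bottleneck design and all sizes are assumptions, not the paper's architecture:

```python
# Sketch: freeze a pretrained EEG encoder, train only a small adapter + head.
import torch
import torch.nn as nn

class SubjectAdapter(nn.Module):
    """Bottleneck adapter with a residual connection, so it starts near identity."""
    def __init__(self, d_model=64, d_bottleneck=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_bottleneck), nn.GELU(),
            nn.Linear(d_bottleneck, d_model))

    def forward(self, x):
        return x + self.net(x)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(22 * 128, 64))  # stand-in pretrained model
adapter = SubjectAdapter()
classifier = nn.Linear(64, 2)

for p in encoder.parameters():        # existing-subject knowledge stays frozen
    p.requires_grad = False
optim = torch.optim.Adam(
    list(adapter.parameters()) + list(classifier.parameters()), lr=1e-3)

x, y = torch.randn(8, 22, 128), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(classifier(adapter(encoder(x))), y)
loss.backward()                       # gradients flow only into adapter + head
optim.step()
```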
arXiv Detail & Related papers (2024-01-12T03:18:51Z) - 3D-CLMI: A Motor Imagery EEG Classification Model via Fusion of 3D-CNN and LSTM with Attention [0.174048653626208]
This paper proposes a model that combines a three-dimensional convolutional neural network (CNN) with a long short-term memory (LSTM) network to classify motor imagery (MI) signals.
Experimental results show that the model achieves a classification accuracy of 92.7% and an F1-score of 0.91 on the public BCI Competition IV dataset 2a.
The model greatly improves the classification accuracy of users' motor imagery intentions, giving brain-computer interfaces better application prospects in emerging fields such as autonomous vehicles and medical rehabilitation.
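A rough sketch of combining a 3D CNN with an attention-weighted LSTM, as the summary describes. The `CNN_LSTM_Attn` module, its input layout, and the late-fusion head are illustrative guesses rather than the 3D-CLMI design:

```python
# Sketch: 3D-CNN branch + attention-weighted LSTM branch, fused late.
import torch
import torch.nn as nn

class CNN_LSTM_Attn(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # 3D conv over a (depth, channels, time) volume built from the EEG.
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, 1, 1)))
        self.lstm = nn.LSTM(input_size=22, hidden_size=32, batch_first=True)
        self.attn = nn.Linear(32, 1)           # scores each LSTM time step
        self.head = nn.Linear(8 + 32, n_classes)

    def forward(self, vol, seq):
        # vol: (B, 1, D, C, T) 3-D representation; seq: (B, T, C) raw time series
        f_cnn = self.cnn(vol).flatten(1)                 # (B, 8)
        h, _ = self.lstm(seq)                            # (B, T, 32)
        w = torch.softmax(self.attn(h), dim=1)           # attention over time
        f_lstm = (w * h).sum(dim=1)                      # (B, 32)
        return self.head(torch.cat([f_cnn, f_lstm], 1))

m = CNN_LSTM_Attn()
out = m(torch.randn(4, 1, 5, 22, 250), torch.randn(4, 250, 22))
print(out.shape)  # torch.Size([4, 4])
```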
arXiv Detail & Related papers (2023-12-20T03:38:24Z) - DTP-Net: Learning to Reconstruct EEG signals in Time-Frequency Domain by Multi-scale Feature Reuse [7.646218090238708]
We present a fully convolutional neural architecture, called DTP-Net, which consists of a Densely Connected Temporal Pyramid (DTP) sandwiched between a pair of learnable time-frequency transformations.
EEG signals are easily corrupted by various artifacts, making artifact removal crucial for improving signal quality in scenarios such as disease diagnosis and brain-computer interfaces (BCIs).
Extensive experiments conducted on two public semi-simulated datasets demonstrate the effective artifact removal performance of DTP-Net.
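A compact sketch of the DTP-Net idea: learnable analysis/synthesis transforms (here plain strided convolutions) sandwiching densely connected dilated convolutions, so later layers reuse every earlier feature map. Filter counts, dilations, and the `DTPNetSketch` name are assumptions:

```python
# Sketch: learnable time-frequency transforms around a dense temporal pyramid.
import torch
import torch.nn as nn

class DTPNetSketch(nn.Module):
    def __init__(self, n_filters=32, kernel=16, stride=8):
        super().__init__()
        # Learnable analysis/synthesis transforms standing in for STFT/ISTFT.
        self.analysis = nn.Conv1d(1, n_filters, kernel, stride=stride)
        self.synthesis = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride)
        # Densely connected temporal pyramid: dilated convs where each layer
        # sees the concatenation of all previous feature maps (feature reuse).
        dilations = [1, 2, 4, 8]
        self.pyramid = nn.ModuleList([
            nn.Conv1d(n_filters * (i + 1), n_filters, 3, padding=d, dilation=d)
            for i, d in enumerate(dilations)])

    def forward(self, x):                    # x: (B, 1, T) artifact-laden EEG
        feats = [self.analysis(x)]
        for conv in self.pyramid:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return self.synthesis(feats[-1])     # reconstructed clean signal

net = DTPNetSketch()
clean_hat = net(torch.randn(2, 1, 512))
print(clean_hat.shape)  # torch.Size([2, 1, 512])
```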
arXiv Detail & Related papers (2023-11-27T11:09:39Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
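A loose sketch of the two ingredients named in the summary: a channel graph constructed dynamically from the features themselves, and a self-distillation loss in which a shallow head mimics a deeper one. Everything here (layer design, the 64-channel input, the loss weighting) is a guess, not the DGSD model:

```python
# Sketch: dynamic graph construction over EEG channels + self-distillation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphLayer(nn.Module):
    """Graph conv whose adjacency is built on the fly from feature similarity."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x):            # x: (B, n_nodes, d_in), nodes = EEG channels
        adj = torch.softmax(x @ x.transpose(1, 2), dim=-1)  # dynamic adjacency
        return torch.relu(self.lin(adj @ x))                # propagate + transform

class DGSDSketch(nn.Module):
    def __init__(self, d=64, n_classes=2):
        super().__init__()
        self.g1, self.g2 = DynamicGraphLayer(d, d), DynamicGraphLayer(d, d)
        self.head1, self.head2 = nn.Linear(d, n_classes), nn.Linear(d, n_classes)

    def forward(self, x):
        h1 = self.g1(x)
        h2 = self.g2(h1)
        return self.head1(h1.mean(1)), self.head2(h2.mean(1))

model = DGSDSketch()
x, y = torch.randn(8, 64, 64), torch.randint(0, 2, (8,))   # (B, channels, features)
shallow, deep = model(x)
# Self-distillation: the shallow head learns to mimic the deeper (teacher) head.
loss = F.cross_entropy(deep, y) + F.kl_div(
    F.log_softmax(shallow, dim=-1), F.softmax(deep.detach(), dim=-1),
    reduction="batchmean")
loss.backward()
```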
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - A Hybrid Brain-Computer Interface Using Motor Imagery and SSVEP Based on Convolutional Neural Network [0.9176056742068814]
We propose a two-stream convolutional neural network (TSCNN)-based hybrid brain-computer interface.
It combines steady-state visual evoked potential (SSVEP) and motor imagery (MI) paradigms.
TSCNN automatically learns to extract EEG features in the two paradigms in the training process.
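A minimal two-stream CNN sketch in the spirit of TSCNN: two parallel convolutional streams over the same EEG input, fused before classification. The kernel lengths and the idea of dedicating one stream to oscillatory SSVEP and one to transient MI features are illustrative assumptions:

```python
# Sketch: two parallel CNN streams over the same EEG input, fused late.
import torch
import torch.nn as nn

class TSCNN(nn.Module):
    def __init__(self, n_channels=8, n_classes=4):
        super().__init__()
        def stream(k):                         # k = temporal kernel length
            return nn.Sequential(
                nn.Conv2d(1, 16, (1, k), padding=(0, k // 2)),   # temporal conv
                nn.Conv2d(16, 16, (n_channels, 1)),              # spatial conv
                nn.BatchNorm2d(16), nn.ELU(),
                nn.AdaptiveAvgPool2d((1, 8)), nn.Flatten())
        self.ssvep = stream(k=63)   # longer kernel: steady-state oscillations
        self.mi = stream(k=15)      # shorter kernel: transient MI patterns
        self.head = nn.Linear(16 * 8 * 2, n_classes)

    def forward(self, x):                       # x: (B, 1, C, T), shared input
        return self.head(torch.cat([self.ssvep(x), self.mi(x)], dim=1))

out = TSCNN()(torch.randn(4, 1, 8, 500))
print(out.shape)  # torch.Size([4, 4])
```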
arXiv Detail & Related papers (2022-12-10T12:34:36Z) - Multiple Time Series Fusion Based on LSTM: An Application to CAP A Phase Classification Using EEG [56.155331323304]
This work carries out deep-learning-based feature-level fusion of electroencephalogram (EEG) channels.
The channel selection, fusion, and classification procedures were optimized by two optimization algorithms; a sketch of the fusion idea follows.
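Below, a small sketch of per-channel LSTMs whose hidden states are fused at the feature level; the hard-coded channel subset stands in for the optimization-based selection the summary mentions, and all names and sizes are hypothetical:

```python
# Sketch: one LSTM per selected EEG channel, fused at the feature level.
import torch
import torch.nn as nn

class ChannelFusionLSTM(nn.Module):
    def __init__(self, selected_channels=(0, 2, 5), hidden=16, n_classes=2):
        super().__init__()
        self.selected = list(selected_channels)   # stand-in for optimized selection
        self.lstms = nn.ModuleList(
            nn.LSTM(1, hidden, batch_first=True) for _ in self.selected)
        self.head = nn.Linear(hidden * len(self.selected), n_classes)

    def forward(self, x):                         # x: (B, C, T)
        feats = []
        for lstm, c in zip(self.lstms, self.selected):
            _, (h, _) = lstm(x[:, c, :].unsqueeze(-1))   # final hidden state
            feats.append(h[-1])                          # (B, hidden)
        return self.head(torch.cat(feats, dim=1))        # feature-level fusion

out = ChannelFusionLSTM()(torch.randn(4, 6, 100))
print(out.shape)  # torch.Size([4, 2])
```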
arXiv Detail & Related papers (2021-12-18T14:17:49Z) - EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network is an end-to-end classification, as it takes the raw EEG signals as the input and does not require complex EEG signal-preprocessing.
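Inception-Time, the stated backbone, stacks blocks of parallel multi-scale temporal convolutions behind a 1x1 bottleneck. A sketch of one such block follows; the branch widths and kernel lengths are assumptions, not EEG-Inception's exact settings:

```python
# Sketch of one Inception-Time-style block for raw EEG feature maps.
import torch
import torch.nn as nn

class InceptionTimeBlock(nn.Module):
    def __init__(self, c_in=64, c_branch=32):
        super().__init__()
        self.bottleneck = nn.Conv1d(c_in, c_branch, 1, bias=False)
        self.branches = nn.ModuleList(
            nn.Conv1d(c_branch, c_branch, k, padding=k // 2, bias=False)
            for k in (9, 19, 39))                  # three temporal scales
        self.pool_branch = nn.Sequential(          # max-pool branch, as in Inception
            nn.MaxPool1d(3, stride=1, padding=1), nn.Conv1d(c_in, c_branch, 1))
        self.bn = nn.BatchNorm1d(c_branch * 4)

    def forward(self, x):                          # x: (B, c_in, T)
        z = self.bottleneck(x)
        outs = [b(z) for b in self.branches] + [self.pool_branch(x)]
        return torch.relu(self.bn(torch.cat(outs, dim=1)))

y = InceptionTimeBlock()(torch.randn(2, 64, 250))
print(y.shape)  # torch.Size([2, 128, 250])
```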
arXiv Detail & Related papers (2021-01-24T19:03:10Z) - A Computationally Efficient Multiclass Time-Frequency Common Spatial Pattern Analysis on EEG Motor Imagery [164.93739293097605]
Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) motor imagery (MI) classification.
This study modifies the conventional CSP algorithm to improve the multi-class MI classification accuracy and ensure the computation process is efficient.
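For reference, classic two-class CSP reduces to a generalized eigendecomposition of the class covariance matrices; the paper's multi-class, time-frequency variant builds on this. A minimal NumPy/SciPy sketch of the standard algorithm (the multi-class extension and time-frequency filtering are omitted for brevity):

```python
# Sketch of classic two-class CSP via a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: (n_trials, n_channels, n_samples) band-pass filtered EEG."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # normalized covariances
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Maximize variance for class A relative to the composite covariance A+B.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # Keep filters from both ends: most discriminative for each class.
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, idx].T                          # (2*n_pairs, n_channels)

rng = np.random.default_rng(0)
W = csp_filters(rng.standard_normal((20, 8, 256)), rng.standard_normal((20, 8, 256)))
features = np.log(np.var(W @ rng.standard_normal((8, 256)), axis=1))  # log-variance features
print(W.shape, features.shape)  # (4, 8) (4,)
```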
arXiv Detail & Related papers (2020-08-25T18:23:50Z)