Convolutional Neural Networks with A Topographic Representation Module
for EEG-Based Brain-Computer Interfaces
- URL: http://arxiv.org/abs/2208.10708v1
- Date: Tue, 23 Aug 2022 03:20:51 GMT
- Title: Convolutional Neural Networks with A Topographic Representation Module
for EEG-Based Brain-Computer Interfaces
- Authors: Xinbin Liang, Yaru Liu, Yang Yu, Kaixuan Liu, Yadong Liu and Zongtan
Zhou
- Abstract summary: Convolutional Neural Networks (CNNs) have shown great potential in the field of Brain-Computer Interface (BCI).
We propose an EEG Topographic Representation Module (TRM).
TRM consists of (1) a mapping block from the raw EEG signal to a 3-D topographic map and (2) a convolution block from the topographic map to an output of the same size as the input.
- Score: 4.269859225062717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: Convolutional Neural Networks (CNNs) have shown great potential in
the field of Brain-Computer Interface (BCI) due to their ability to directly
process the raw Electroencephalogram (EEG) without artificial feature
extraction. The raw EEG signal is usually represented as a 2-Dimensional (2-D)
matrix composed of channels and time points, which ignores the spatial
topological information of EEG. Our goal is to make the CNN with the raw EEG
signal as input have the ability to learn the EEG spatial topological features,
and improve its classification performance while essentially maintaining its
original structure. Methods: We propose an EEG Topographic Representation
Module (TRM). This module consists of (1) a mapping block from the raw EEG
signal to a 3-D topographic map and (2) a convolution block from the
topographic map to an output of the same size as the input. We embedded the TRM
into 3 widely used CNNs and tested them on 2 different types of publicly available
datasets. Results: The results show that the classification accuracies of the 3
CNNs are improved on both datasets after using TRM. The average classification
accuracies of DeepConvNet, EEGNet and ShallowConvNet with TRM are improved by
4.70\%, 1.29\% and 0.91\% on Emergency Braking During Simulated Driving Dataset
(EBDSDD), and 2.83\%, 2.17\% and 2.00\% on High Gamma Dataset (HGD),
respectively. Significance: By using TRM to mine the spatial topological
features of EEG, we improve the classification performance of 3 CNNs on 2
datasets. In addition, since the output of TRM has the same size as the input,
any CNN with the raw EEG signal as input can use this module without changing
the original structure.
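The mapping block described above rearranges the raw 2-D EEG matrix (channels x time points) into a 3-D topographic map that preserves electrode positions on the scalp. A minimal sketch of such a mapping, assuming a hypothetical 3x3 scalp grid and electrode layout (the paper's actual grid size and interpolation scheme may differ):

```python
# Hypothetical sketch of the TRM mapping step: place each channel's time
# series at its scalp grid position, producing a (height x width x time)
# topographic representation. Grid and coordinates are illustrative only.

ELECTRODE_GRID = {  # electrode name -> (row, col) on an assumed 3x3 grid
    "F3": (0, 0), "Fz": (0, 1), "F4": (0, 2),
    "C3": (1, 0), "Cz": (1, 1), "C4": (1, 2),
    "P3": (2, 0), "Pz": (2, 1), "P4": (2, 2),
}

def to_topographic_map(raw, channel_names, height=3, width=3):
    """raw: list of per-channel time series (channels x time points).
    Returns a height x width x time nested list; unused cells stay zero."""
    n_time = len(raw[0])
    topo = [[[0.0] * n_time for _ in range(width)] for _ in range(height)]
    for series, name in zip(raw, channel_names):
        r, c = ELECTRODE_GRID[name]
        topo[r][c] = list(series)
    return topo
```

The convolution block would then operate on this volume and return an output of the same channels x time shape, which is what lets the module be prepended to an existing CNN without structural changes.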
Related papers
- 3D-CLMI: A Motor Imagery EEG Classification Model via Fusion of 3D-CNN and LSTM with Attention [0.174048653626208]
This paper proposed a model that combined a three-dimensional convolutional neural network (CNN) with a long short-term memory (LSTM) network to classify motor imagery (MI) signals.
Experimental results showed that this model achieved a classification accuracy of 92.7% and an F1-score of 0.91 on the public BCI Competition IV dataset 2a.
The model greatly improved the classification accuracy of users' motor imagery intentions, giving brain-computer interfaces better application prospects in emerging fields such as autonomous vehicles and medical rehabilitation.
arXiv Detail & Related papers (2023-12-20T03:38:24Z)
- A Dynamic Domain Adaptation Deep Learning Network for EEG-based Motor Imagery Classification [1.7465786776629872]
We propose a Dynamic Domain Adaptation Based Deep Learning Network (DADL-Net).
First, the EEG data is mapped to a three-dimensional geometric space, and its temporal-spatial features are learned through a 3D convolution module.
Accuracy rates of 70.42% and 73.91% were achieved on the OpenBMI and BCIC IV 2a datasets, respectively.
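The 3D convolution step mentioned above can be illustrated with a plain-Python valid convolution over a (depth x height x width) volume; this is an illustrative sketch of the operation only, not the DADL-Net architecture:

```python
# Illustrative 3-D "valid" convolution: slide a kernel over a volume and
# sum elementwise products at each position. Real networks would use an
# optimized library implementation with learned kernels.

def conv3d_valid(volume, kernel):
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for d in range(D - kd + 1):
        plane = []
        for i in range(H - kh + 1):
            row = []
            for j in range(W - kw + 1):
                s = 0.0
                for a in range(kd):
                    for b in range(kh):
                        for c in range(kw):
                            s += volume[d + a][i + b][j + c] * kernel[a][b][c]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out
```

Applied to an EEG volume laid out in geometric space, each output value aggregates a local spatial-temporal neighborhood, which is how such modules capture temporal-spatial features jointly.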
arXiv Detail & Related papers (2023-09-21T01:34:00Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- MFGNet: Dynamic Modality-Aware Filter Generation for RGB-T Tracking [72.65494220685525]
We propose a new dynamic modality-aware filter generation module (named MFGNet) to boost the message communication between visible and thermal data.
We generate dynamic modality-aware filters with two independent networks. The visible and thermal filters will be used to conduct a dynamic convolutional operation on their corresponding input feature maps respectively.
To address issues caused by heavy occlusion, fast motion, and out-of-view, we propose to conduct a joint local and global search by exploiting a new direction-aware target-driven attention mechanism.
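The dynamic filter generation described above can be sketched in miniature: a toy "generator" derives a per-input 1-D kernel from the input's own statistics, and that kernel is then convolved with the input. This is a hedged illustration of the general idea; MFGNet's actual generator networks and 2-D feature-map convolutions are more elaborate:

```python
# Toy sketch of dynamic (input-conditioned) convolution. The generator
# below, which builds a kernel from the signal mean, is invented for
# illustration and is not the MFGNet generator.

def generate_filter(signal, size=3):
    mean = sum(signal) / len(signal)
    return [mean / size] * size  # kernel depends on the input itself

def dynamic_conv1d(signal, size=3):
    k = generate_filter(signal, size)
    return [sum(signal[i + t] * k[t] for t in range(size))
            for i in range(len(signal) - size + 1)]
```

The key property is that the filter weights are a function of the input rather than fixed parameters, so each modality (e.g. visible vs. thermal) gets its own adapted convolution at inference time.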
arXiv Detail & Related papers (2021-07-22T03:10:51Z)
- Transformer-based Spatial-Temporal Feature Learning for EEG Decoding [4.8276709243429]
We propose a novel EEG decoding method that mainly relies on the attention mechanism.
We reach state-of-the-art performance in EEG multi-classification with fewer parameters.
It has good potential to promote the practicality of brain-computer interfaces (BCIs).
arXiv Detail & Related papers (2021-06-11T00:48:18Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification, as it takes the raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via the emotional video classification.
The level of concentration of the brain connectivity related to the emotional property of the target video is correlated with classification performance.
arXiv Detail & Related papers (2021-01-18T13:28:08Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- SE-ECGNet: A Multi-scale Deep Residual Network with Squeeze-and-Excitation Module for ECG Signal Classification [6.124438924401066]
We develop a multi-scale deep residual network for the ECG signal classification task.
We are the first to propose to treat the multi-lead signal as a 2-dimensional matrix.
Our proposed model achieves 99.2% F1-score in the MIT-BIH dataset and 89.4% F1-score in Alibaba dataset.
arXiv Detail & Related papers (2020-12-10T08:37:44Z)
- Convolutional Neural Networks for Automatic Detection of Artifacts from Independent Components Represented in Scalp Topographies of EEG Signals [9.088303226909279]
Artifacts, due to eye movements and blinks, muscular/cardiac activity and generic electrical disturbances, have to be recognized and eliminated.
ICA is effective at splitting the signal into independent components (ICs), whose re-projections onto 2D scalp topographies (images) allow artifacts to be recognized and separated, including by UBS.
We present a completely automatic and effective framework for EEG artifact recognition from IC topoplots, based on 2D Convolutional Neural Networks (CNNs).
Experiments have shown an overall accuracy above 98%, taking 1.4 s on a standard PC to classify 32 topoplots.
arXiv Detail & Related papers (2020-09-08T12:40:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.