Spatiotemporal Pooling on Appropriate Topological Maps Represented as
Two-Dimensional Images for EEG Classification
- URL: http://arxiv.org/abs/2403.04353v1
- Date: Thu, 7 Mar 2024 09:35:49 GMT
- Title: Spatiotemporal Pooling on Appropriate Topological Maps Represented as
Two-Dimensional Images for EEG Classification
- Authors: Takuto Fukushima and Ryusuke Miyamoto
- Abstract summary: Motor imagery classification based on electroencephalography (EEG) signals is one of the most important brain-computer interface applications.
This study proposes a novel EEG-based motor imagery classification method with three key features.
Experimental results using the PhysioNet EEG Motor Movement/Imagery dataset showed that the proposed method achieved a best classification accuracy of 88.57%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motor imagery classification based on electroencephalography (EEG) signals is
one of the most important brain-computer interface applications, although it
needs further improvement. Several methods have attempted to obtain useful
information from EEG signals by using recent deep learning techniques such as
transformers. To improve the classification accuracy, this study proposes a
novel EEG-based motor imagery classification method with three key features:
generation of a topological map represented as a two-dimensional image from EEG
signals with coordinate transformation based on t-SNE, use of the InternImage
to extract spatial features, and use of spatiotemporal pooling inspired by
PoolFormer to exploit spatiotemporal information concealed in a sequence of EEG
images. Experimental results using the PhysioNet EEG Motor Movement/Imagery
dataset showed that the proposed method achieved best classification
accuracies of 88.57%, 80.65%, and 70.17% on two-, three-, and four-class motor
imagery tasks in cross-individual validation.
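The three-step pipeline described in the abstract can be illustrated with a minimal sketch: project electrode coordinates to 2D with t-SNE, rasterize each time sample of channel values onto that 2D map as an image, and pool a sequence of such images along the temporal axis. The electrode positions, grid size, and simple average pooling below are illustrative assumptions, not the paper's actual configuration (the paper uses InternImage for spatial features and a PoolFormer-inspired mixer).

```python
# Sketch of the abstract's pipeline under illustrative assumptions:
# (1) t-SNE projection of electrode coordinates, (2) rasterization of EEG
# samples into 2D images, (3) temporal pooling over the image sequence.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_channels, grid = 64, 32

# Placeholder 3D electrode positions; real use would take montage coordinates.
pos3d = rng.normal(size=(n_channels, 3))

# t-SNE maps the 3D electrode layout to 2D while preserving neighborhoods.
xy = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(pos3d)

# Normalize projected coordinates to pixel indices of a grid x grid image.
xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-9)
px = np.clip((xy * (grid - 1)).astype(int), 0, grid - 1)

def to_image(sample):
    """Rasterize one time sample (n_channels,) onto the 2D topological map."""
    img = np.zeros((grid, grid))
    img[px[:, 1], px[:, 0]] = sample
    return img

# A sequence of T time samples becomes a stack of T images ...
T = 8
signals = rng.normal(size=(T, n_channels))
stack = np.stack([to_image(s) for s in signals])  # shape (T, grid, grid)

# ... and the spatiotemporal pooling is approximated here by a plain
# average over the temporal axis (a crude stand-in for the PoolFormer mixer).
pooled = stack.mean(axis=0)  # shape (grid, grid)
print(stack.shape, pooled.shape)
```

In practice the pooled images would feed a spatial backbone; this sketch only shows how the topological-map representation and temporal pooling fit together.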
Related papers
- ShapeMamba-EM: Fine-Tuning Foundation Model with Local Shape Descriptors and Mamba Blocks for 3D EM Image Segmentation [49.42525661521625]
This paper presents ShapeMamba-EM, a specialized fine-tuning method for 3D EM segmentation.
It is tested over a wide range of EM images, covering five segmentation tasks and 10 datasets.
arXiv Detail & Related papers (2024-08-26T08:59:22Z) - Mind's Eye: Image Recognition by EEG via Multimodal Similarity-Keeping Contrastive Learning [2.087148326341881]
This paper introduces a MUltimodal Similarity-keeping contrastivE learning framework for zero-shot EEG-based image classification.
We develop a series of multivariate time-series encoders tailored for EEG signals and assess the efficacy of regularized contrastive EEG-Image pretraining.
Our method achieves state-of-the-art performance, with a top-1 accuracy of 19.3% and a top-5 accuracy of 48.8% in 200-way zero-shot image classification.
arXiv Detail & Related papers (2024-06-05T16:42:23Z) - Learning Robust Deep Visual Representations from EEG Brain Recordings [13.768240137063428]
This study proposes a two-stage method where the first step is to obtain EEG-derived features for robust learning of deep representations.
We demonstrate the generalizability of our feature extraction pipeline across three different datasets using deep-learning architectures.
We propose a novel framework to transform unseen images into the EEG space and reconstruct them with approximation.
arXiv Detail & Related papers (2023-10-25T10:26:07Z) - DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial
Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z) - A Unified Transformer-based Network for multimodal Emotion Recognition [4.07926531936425]
We present a transformer-based method to classify emotions in an arousal-valence space by combining a 2D representation of an ECG signal with face information.
Our model produces comparable results to the state-of-the-art techniques.
arXiv Detail & Related papers (2023-08-27T17:30:56Z) - Decoding Natural Images from EEG for Object Recognition [8.411976038504589]
This paper presents a self-supervised framework to demonstrate the feasibility of learning image representations from EEG signals.
We achieve a top-1 accuracy of 15.6% and a top-5 accuracy of 42.8% in challenging 200-way zero-shot tasks.
These findings yield valuable insights for neural decoding and brain-computer interfaces in real-world scenarios.
arXiv Detail & Related papers (2023-08-25T08:05:37Z) - OADAT: Experimental and Synthetic Clinical Optoacoustic Data for
Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-up and associated processing methods are available to facilitate advances in broader applications of OA in clinical settings.
arXiv Detail & Related papers (2022-06-17T08:11:26Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - A Compact and Interpretable Convolutional Neural Network for
Cross-Subject Driver Drowsiness Detection from Single-Channel EEG [4.963467827017178]
We propose a compact and interpretable Convolutional Neural Network (CNN) to discover shared EEG features across different subjects for driver drowsiness detection.
Results show that the proposed model can achieve an average accuracy of 73.22% on 11 subjects for 2-class cross-subject EEG signal classification.
arXiv Detail & Related papers (2021-05-30T14:36:34Z) - EEG-Inception: An Accurate and Robust End-to-End Neural Network for
EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network is an end-to-end classification, as it takes the raw EEG signals as the input and does not require complex EEG signal-preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.