Comparison of Attention-based Deep Learning Models for EEG
Classification
- URL: http://arxiv.org/abs/2012.01074v1
- Date: Wed, 2 Dec 2020 10:43:41 GMT
- Title: Comparison of Attention-based Deep Learning Models for EEG
Classification
- Authors: Giulia Cisotto, Alessio Zanga, Joanna Chlebus, Italo Zoppis, Sara
Manzoni, and Urszula Markowska-Kaczmar
- Abstract summary: We evaluate the impact on Electroencephalography (EEG) classification of different kinds of attention mechanisms in Deep Learning (DL) models.
We used these models to classify normal and abnormal (i.e., artifactual or pathological) EEG patterns.
- Score: 0.2770822269241974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: To evaluate the impact on Electroencephalography (EEG)
classification of different kinds of attention mechanisms in Deep Learning (DL)
models. Methods: We compared three attention-enhanced DL models: the brand-new
InstaGATs, an LSTM with attention, and a CNN with attention. We used these
models to classify normal and abnormal (i.e., artifactual or pathological) EEG
patterns. Results: We achieved the state of the art in all classification
problems, regardless of the large variability of the datasets and the simple
architecture of the attention-enhanced models. We also showed that, depending
on how the attention mechanism is applied and where the attention layer is
located in the model, we can alternatively leverage the information contained
in the time, frequency or space domain of the dataset. Conclusions: With this
work, we shed light on the role of different attention mechanisms in the
classification of normal and abnormal EEG patterns. Moreover, we discussed how
they can exploit the intrinsic relationships in the temporal, frequency and
spatial domains of our brain activity. Significance: Attention represents a
promising strategy to evaluate the quality and relevance of EEG information in
different real-world scenarios. Moreover, it can make it easier to parallelize
the computation and, thus, to speed up the analysis of big electrophysiological
(e.g., EEG) datasets.
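The listing does not include code, but the mechanism the paper compares is easy to illustrate. Below is a minimal sketch, assuming a PyTorch-style LSTM encoder with a simple additive attention layer pooled over the time steps of a windowed, multi-channel EEG signal; the class name, layer sizes and channel count are illustrative assumptions, not the authors' InstaGATs, LSTM or CNN implementations.

```python
# Minimal sketch (not the authors' code): an LSTM encoder with a simple
# additive attention layer pooled over time, for binary normal/abnormal
# EEG classification. Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class AttentionLSTMClassifier(nn.Module):
    def __init__(self, n_channels=19, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)            # one attention score per time step
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, channels), e.g. a few seconds of windowed EEG
        h, _ = self.lstm(x)                          # (batch, time, hidden)
        alpha = torch.softmax(self.score(h), dim=1)  # attention weights over time
        context = (alpha * h).sum(dim=1)             # attention-weighted pooling
        return self.classifier(context), alpha       # class logits and attention weights

model = AttentionLSTMClassifier()
logits, alpha = model(torch.randn(8, 256, 19))       # 8 windows, 256 samples, 19 channels
```

Placing the softmax over the channel axis instead of the time axis would weight the spatial domain rather than the temporal one, which is the kind of placement choice the abstract refers to when discussing where the attention layer is located.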
Related papers
- Feature Estimation of Global Language Processing in EEG Using Attention Maps [5.173821279121835]
This study introduces a novel approach to EEG feature estimation that utilizes the weights of deep learning models to explore the association between EEG features and language processing.
We demonstrate that attention maps generated from Vision Transformers and EEGNet effectively identify features that align with findings from prior studies.
The application of Mel-Spectrogram with ViTs enhances the resolution of temporal and frequency-related EEG characteristics.
arXiv Detail & Related papers (2024-09-27T22:52:31Z)
- FoME: A Foundation Model for EEG using Adaptive Temporal-Lateral Attention Scaling [19.85701025524892]
FoME (Foundation Model for EEG) is a novel approach using adaptive temporal-lateral attention scaling.
FoME is pre-trained on a diverse 1.7TB dataset of scalp and intracranial EEG recordings, comprising 745M parameters trained for 1,096k steps.
arXiv Detail & Related papers (2024-09-19T04:22:40Z)
- Physics Inspired Hybrid Attention for SAR Target Recognition [61.01086031364307]
We propose a physics-inspired hybrid attention (PIHA) mechanism and the once-for-all (OFA) evaluation protocol to address these issues.
PIHA leverages the high-level semantics of physical information to activate and guide feature groups that are aware of the local semantics of the target.
Our method outperforms other state-of-the-art approaches in 12 test scenarios with the same ASC parameters.
arXiv Detail & Related papers (2023-09-27T14:39:41Z)
- Locally temporal-spatial pattern learning with graph attention mechanism for EEG-based emotion recognition [4.331986787747648]
Emotion recognition techniques enable computers to classify human affective states into discrete categories.
Emotions may fluctuate instead of maintaining a stable state, even within a short time interval.
It is also difficult to make full use of the EEG spatial distribution due to its 3-D topological structure.
arXiv Detail & Related papers (2022-08-19T12:15:10Z)
- GeoECG: Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction [20.8603653664403]
We propose a physiologically-inspired data augmentation method to improve performance and increase the robustness of heart disease detection based on ECG signals.
We obtain augmented samples by perturbing the data distribution towards other classes along the geodesic in Wasserstein space (a minimal one-dimensional sketch of this geodesic interpolation appears after this list).
Learning from 12-lead ECG signals, our model is able to distinguish five categories of cardiac conditions.
arXiv Detail & Related papers (2022-08-02T03:14:13Z)
- A Deep Learning Approach for the Segmentation of Electroencephalography Data in Eye Tracking Applications [56.458448869572294]
We introduce DETRtime, a novel framework for time-series segmentation of EEG data.
Our end-to-end deep learning-based framework brings advances in Computer Vision to the forefront.
Our model generalizes well in the task of EEG sleep stage segmentation.
arXiv Detail & Related papers (2022-06-17T10:17:24Z)
- Interpretable Convolutional Neural Networks for Subject-Independent Motor Imagery Classification [22.488536453952964]
We propose an explainable deep learning model for brain computer interface (BCI) study.
Specifically, we aim to classify EEG signals obtained from the motor-imagery (MI) task.
We visualized the heatmap indicating the output of layer-wise relevance propagation (LRP) as a topography to verify neuro-physiological factors.
arXiv Detail & Related papers (2021-12-14T07:35:52Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- How Knowledge Graph and Attention Help? A Quantitative Analysis into Bag-level Relation Extraction [66.09605613944201]
We quantitatively evaluate the effect of attention and Knowledge Graphs on bag-level relation extraction (RE).
We find that (1) higher attention accuracy may lead to worse performance, as it may harm the model's ability to extract entity mention features; (2) the performance of attention is largely influenced by various noise distribution patterns; and (3) KG-enhanced attention indeed improves RE performance, though not through enhanced attention but by incorporating entity priors.
arXiv Detail & Related papers (2021-07-26T09:38:28Z)
- A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns the emotional discriminative information by adaptively highlighting the transferable EEG brain-region data and samples.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically-relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
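As a side note on the GeoECG entry above: that paper perturbs ECG signals along geodesics in Wasserstein space with a physiologically motivated setup, which is not reproduced here. The snippet below is only a minimal NumPy sketch of the underlying idea of displacement interpolation, restricted to one-dimensional empirical distributions with equally many samples, where the optimal coupling simply matches sorted values. The function name and the Gaussian samples are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the general idea only (not the GeoECG method): for two 1-D
# empirical distributions with the same number of samples, the Wasserstein-2
# geodesic is displacement interpolation between sorted samples.
import numpy as np

def wasserstein_geodesic_1d(x, y, t):
    """Point at time t in [0, 1] on the W2 geodesic between 1-D samples x and y."""
    xs, ys = np.sort(x), np.sort(y)   # in 1-D, the optimal coupling matches sorted order
    return (1.0 - t) * xs + t * ys    # displacement (McCann) interpolation

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)    # samples standing in for "class A"
y = rng.normal(3.0, 0.5, size=500)    # samples standing in for "class B"
augmented = wasserstein_geodesic_1d(x, y, t=0.2)  # small perturbation towards class B
```

Small values of t keep the augmented samples close to the original class while nudging them towards the other, which is the spirit of the geodesic perturbation described in that entry.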
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.