Subject-Independent Drowsiness Recognition from Single-Channel EEG with
an Interpretable CNN-LSTM model
- URL: http://arxiv.org/abs/2112.10894v1
- Date: Sun, 21 Nov 2021 10:37:35 GMT
- Title: Subject-Independent Drowsiness Recognition from Single-Channel EEG with
an Interpretable CNN-LSTM model
- Authors: Jian Cui, Zirui Lan, Tianhu Zheng, Yisi Liu, Olga Sourina, Lipo Wang,
Wolfgang Müller-Wittig
- Abstract summary: We propose a novel Convolutional Neural Network (CNN)-Long Short-Term Memory (LSTM) model for subject-independent drowsiness recognition from single-channel EEG signals.
Results show that the model achieves an average accuracy of 72.97% on 11 subjects for leave-one-out subject-independent drowsiness recognition on a public dataset.
- Score: 0.8250892979520543
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: For EEG-based drowsiness recognition, it is desirable to use
subject-independent recognition since conducting calibration on each subject is
time-consuming. In this paper, we propose a novel Convolutional Neural Network
(CNN)-Long Short-Term Memory (LSTM) model for subject-independent drowsiness
recognition from single-channel EEG signals. Different from existing deep
learning models that are mostly treated as black-box classifiers, the proposed
model can explain its decisions for each input sample by revealing which parts
of the sample contain important features identified by the model for
classification. This is achieved by a visualization technique that takes
advantage of the hidden states output by the LSTM layer. Results show that the
model achieves an average accuracy of 72.97% on 11 subjects for leave-one-out
subject-independent drowsiness recognition on a public dataset, which is higher
than conventional baseline methods (55.42%-69.27%) and state-of-the-art deep
learning methods. Visualization results show that the model has discovered
meaningful patterns of EEG signals related to different mental states across
different subjects.
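The architecture described in the abstract is a CNN front end followed by an LSTM whose hidden states drive both the classification and the interpretation. Below is a minimal PyTorch sketch of that idea; the layer sizes, input length, and the per-timestep scoring used for visualization are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch (assumed hyperparameters, not the paper's exact architecture):
# a 1D-CNN over a single EEG channel feeds an LSTM, and the classifier head is
# also applied to every intermediate hidden state to trace how class evidence
# accumulates across the input window.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=2, conv_channels=32, hidden_size=64):
        super().__init__()
        # Convolutional feature extractor over the raw single-channel signal.
        self.conv = nn.Sequential(
            nn.Conv1d(1, conv_channels, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_channels, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, 1, n_samples) raw single-channel EEG
        feats = self.conv(x)              # (batch, C, T)
        feats = feats.transpose(1, 2)     # (batch, T, C)
        hidden, _ = self.lstm(feats)      # hidden state at every time step
        logits = self.fc(hidden[:, -1])   # classify from the final hidden state
        # One plausible visualization signal: score each hidden state with the
        # classifier head and inspect which parts of the sample push the
        # decision toward "drowsy" or "alert".
        step_scores = self.fc(hidden)     # (batch, T, n_classes)
        return logits, step_scores

model = CNNLSTM()
eeg = torch.randn(4, 1, 384)              # e.g. 3 s of 128 Hz EEG (assumed length)
logits, step_scores = model(eeg)
print(logits.shape, step_scores.shape)    # torch.Size([4, 2]) torch.Size([4, 96, 2])
```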
Related papers
- Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection [66.16595174895802]
Existing AI-generated image (AIGI) detection methods often suffer from limited generalization performance.
In this paper, we identify a crucial yet previously overlooked asymmetry phenomenon in AIGI detection.
arXiv Detail & Related papers (2024-11-23T19:10:32Z) - The Importance of Downstream Networks in Digital Pathology Foundation Models [1.689369173057502]
We evaluate seven feature extractor models across three different datasets with 162 different aggregation model configurations.
We find that the performance of many current feature extractor models is notably similar.
arXiv Detail & Related papers (2023-11-29T16:54:25Z) - ContraFeat: Contrasting Deep Features for Semantic Discovery [102.4163768995288]
StyleGAN has shown strong potential for disentangled semantic control.
Existing semantic discovery methods on StyleGAN rely on manual selection of modified latent layers to obtain satisfactory manipulation results.
We propose a model that automates this process and achieves state-of-the-art semantic discovery performance.
arXiv Detail & Related papers (2022-12-14T15:22:13Z) - Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model (a plain linear CCA example appears after this list).
arXiv Detail & Related papers (2022-03-23T12:52:49Z) - Interpretable Convolutional Neural Networks for Subject-Independent
Motor Imagery Classification [22.488536453952964]
We propose an explainable deep learning model for brain computer interface (BCI) study.
Specifically, we aim to classify EEG signals obtained from the motor-imagery (MI) task.
We visualize heatmaps of the layer-wise relevance propagation (LRP) output as scalp topographies to verify neuro-physiological factors.
arXiv Detail & Related papers (2021-12-14T07:35:52Z) - DriPP: Driven Point Processes to Model Stimuli Induced Patterns in M/EEG
Signals [62.997667081978825]
We develop a novel statistical point process model called driven temporal point processes (DriPP).
We derive a fast and principled expectation-maximization (EM) algorithm to estimate the parameters of this model.
Results on standard MEG datasets demonstrate that our methodology reveals event-related neural responses.
arXiv Detail & Related papers (2021-12-08T13:07:21Z) - Subject Independent Emotion Recognition using EEG Signals Employing
Attention Driven Neural Networks [2.76240219662896]
A novel deep learning framework for subject-independent emotion recognition is presented.
It uses a convolutional neural network (CNN) with an attention mechanism to perform the task.
The proposed approach has been validated using publicly available datasets.
arXiv Detail & Related papers (2021-06-07T09:41:15Z) - EEG-based Cross-Subject Driver Drowsiness Recognition with an
Interpretable Convolutional Neural Network [0.0]
We develop a novel convolutional neural network combined with an interpretation technique that allows sample-wise analysis of important features for classification.
Results show that the model achieves an average accuracy of 78.35% on 11 subjects for leave-one-out cross-subject recognition.
arXiv Detail & Related papers (2021-05-30T14:47:20Z) - TELESTO: A Graph Neural Network Model for Anomaly Classification in
Cloud Services [77.454688257702]
Machine learning (ML) and artificial intelligence (AI) are applied to IT system operation and maintenance.
One direction aims at recognizing recurring anomaly types to enable automated remediation.
We propose a method that is invariant to dimensionality changes of given data.
arXiv Detail & Related papers (2021-02-25T14:24:49Z) - View-Invariant Gait Recognition with Attentive Recurrent Learning of
Partial Representations [27.33579145744285]
We propose a network that first learns to extract gait convolutional energy maps (GCEM) from frame-level convolutional features.
It then adopts a bidirectional neural network to learn from split bins of the GCEM, thus exploiting the relations between learned partial recurrent representations.
Our proposed model has been extensively tested on two large-scale CASIA-B and OU-M gait datasets.
arXiv Detail & Related papers (2020-10-18T20:20:43Z) - Uncovering the structure of clinical EEG signals with self-supervised
learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically-relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
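As background for the Dynamically-Scaled Deep Canonical Correlation Analysis entry above, the sketch below runs plain linear CCA on two synthetic views with scikit-learn; it only illustrates the "maximally correlated linear projections" idea and is not the input-dependent scaling method from that paper.

```python
# Classical linear CCA on two synthetic views that share a 2-D latent signal.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 2))                          # latent signal common to both views
view_a = shared @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(500, 6))
view_b = shared @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(500, 8))

cca = CCA(n_components=2)
za, zb = cca.fit_transform(view_a, view_b)                  # maximally correlated linear projections
for k in range(2):
    corr = np.corrcoef(za[:, k], zb[:, k])[0, 1]
    print(f"canonical correlation {k}: {corr:.3f}")
```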
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.