Interpretable Convolutional Neural Networks for Subject-Independent
Motor Imagery Classification
- URL: http://arxiv.org/abs/2112.07208v1
- Date: Tue, 14 Dec 2021 07:35:52 GMT
- Title: Interpretable Convolutional Neural Networks for Subject-Independent
Motor Imagery Classification
- Authors: Ji-Seon Bang, Seong-Whan Lee
- Abstract summary: We propose an explainable deep learning model for brain-computer interface (BCI) research.
Specifically, we aim to classify EEG signals obtained from a motor-imagery (MI) task.
We visualize the output of layer-wise relevance propagation (LRP) as topographic heatmaps to verify that the classification relies on neuro-physiological factors.
- Score: 22.488536453952964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning frameworks have become increasingly popular in brain-computer interface (BCI) research thanks to their outstanding performance. However, as classification models they are treated as black boxes: they provide no information about what led them to a particular decision. In other words, we cannot tell whether the high performance stems from neuro-physiological factors or simply from noise. Because of this, it is difficult to ensure a level of reliability that matches the high performance. In this study, we propose an explainable deep learning model for BCI. Specifically, we aim to classify EEG signals obtained from a motor-imagery (MI) task. In addition, we apply layer-wise relevance propagation (LRP) to the model to interpret why it produces a particular classification output. We visualize the LRP output as topographic heatmaps to verify that the model relies on neuro-physiological factors. Furthermore, we classify EEG in a subject-independent manner, learning robust and generalized EEG features by avoiding subject dependency. This approach also avoids the expense of building training data for each subject. With our proposed model, we obtained generalized heatmap patterns across all subjects. As a result, we conclude that our proposed model provides neuro-physiologically reliable interpretations.
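
As a rough illustration of how LRP traces a classification decision back to the input, the sketch below applies the standard epsilon rule to a toy fully connected network standing in for an EEG classifier. The architecture, weights, and the 64-dimensional input are illustrative placeholders, not the authors' CNN; in practice the per-feature relevance would be aggregated per EEG channel before being rendered as a topographic heatmap.

```python
# Minimal sketch of epsilon-rule layer-wise relevance propagation (LRP) for a
# toy two-layer network. Shapes, weights, and the input are placeholders, not
# the architecture proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 64 EEG-derived features -> 32 hidden units -> 2 MI classes.
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 2)), np.zeros(2)

def relu(x):
    return np.maximum(x, 0.0)

def stabilize(z, eps=1e-6):
    # Add a small term with the sign of z so the divisions below stay finite.
    return z + eps * np.where(z >= 0, 1.0, -1.0)

def lrp_epsilon(x):
    # Forward pass, keeping pre-activations and activations of every layer.
    z1 = x @ W1 + b1
    a1 = relu(z1)
    z2 = a1 @ W2 + b2                      # class logits

    # Initialise relevance with the logit of the predicted class only.
    R2 = np.zeros_like(z2)
    c = int(np.argmax(z2))
    R2[c] = z2[c]

    # Epsilon rule: R_j = a_j * sum_k( w_jk * R_k / (z_k + eps*sign(z_k)) ).
    R1 = a1 * (W2 @ (R2 / stabilize(z2)))  # relevance of hidden units
    R0 = x  * (W1 @ (R1 / stabilize(z1)))  # relevance of input features
    return R0

x = rng.normal(size=64)                    # stand-in for one flattened EEG trial
relevance = lrp_epsilon(x)
print(relevance.shape)                     # (64,); grouped per channel for a topography
```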
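
The subject-independent evaluation described in the abstract is typically realized as a leave-one-subject-out loop: for each subject, the model is trained on all other subjects and tested on the held-out one, so no subject-specific calibration data are used. A minimal sketch of such a loop follows; `data_by_subject`, `train_fn`, and `eval_fn` are hypothetical placeholders rather than the paper's actual pipeline.

```python
# Minimal sketch of a leave-one-subject-out (subject-independent) evaluation loop.
# `data_by_subject`, `train_fn`, and `eval_fn` are hypothetical placeholders.
import numpy as np

def leave_one_subject_out(data_by_subject, train_fn, eval_fn):
    """data_by_subject maps subject id -> (X, y) arrays of trials and labels."""
    scores = {}
    for held_out in data_by_subject:
        # Train on every subject except the one held out.
        X_train = np.concatenate([X for s, (X, _) in data_by_subject.items() if s != held_out])
        y_train = np.concatenate([y for s, (_, y) in data_by_subject.items() if s != held_out])
        model = train_fn(X_train, y_train)

        # Test on the unseen subject only; no subject-specific calibration data are used.
        X_test, y_test = data_by_subject[held_out]
        scores[held_out] = eval_fn(model, X_test, y_test)
    return scores
```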
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery has been proposed as a way to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Feature visualization for convolutional neural network models trained on neuroimaging data [0.0]
We present, for the first time, results from applying feature visualization to convolutional neural networks (CNNs) trained on neuroimaging data.
We have trained CNNs for different tasks including sex classification and artificial lesion classification based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task.
arXiv Detail & Related papers (2022-03-24T15:24:38Z)
- Subject-Independent Drowsiness Recognition from Single-Channel EEG with an Interpretable CNN-LSTM model [0.8250892979520543]
We propose a novel Convolutional Neural Network (CNN)-Long Short-Term Memory (LSTM) model for subject-independent drowsiness recognition from single-channel EEG signals.
Results show that the model achieves an average accuracy of 72.97% on 11 subjects for leave-one-out subject-independent drowsiness recognition on a public dataset.
arXiv Detail & Related papers (2021-11-21T10:37:35Z)
- Improving a neural network model by explanation-guided training for glioma classification based on MRI data [0.0]
Interpretability methods have become a popular way to gain insight into the decision-making process of deep learning models.
We propose a method for explanation-guided training that uses a Layer-wise relevance propagation (LRP) technique.
We experimentally verified our method on a convolutional neural network (CNN) model for low-grade and high-grade glioma classification problems.
arXiv Detail & Related papers (2021-07-05T13:27:28Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict the CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
- A Compact and Interpretable Convolutional Neural Network for Cross-Subject Driver Drowsiness Detection from Single-Channel EEG [4.963467827017178]
We propose a compact and interpretable Convolutional Neural Network (CNN) to discover shared EEG features across different subjects for driver drowsiness detection.
Results show that the proposed model can achieve an average accuracy of 73.22% on 11 subjects for 2-class cross-subject EEG signal classification.
arXiv Detail & Related papers (2021-05-30T14:36:34Z)
- Interpretable Factorization for Neural Network ECG Models [10.223907995092835]
We show how to factor a Deep Neural Network into a hierarchical equation consisting of black box variables.
We demonstrate that this choice yields interpretable component models identified with visual composite sketches of ECG samples.
arXiv Detail & Related papers (2020-06-26T19:32:05Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise (a worked form of this generative model is sketched after this list).
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt the post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model.
Experimental results show the feasibility of using the LRP method for interpreting the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
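
For the MultiView ICA entry above, the shared-sources-plus-noise model mentioned in its summary is commonly written as below. The notation follows the usual MultiView ICA formulation and is an illustrative sketch added here, not text taken from this page.

```latex
% Common form of the MultiView ICA generative model (illustrative notation):
% x_i -- observed data of subject (view) i
% A_i -- subject-specific mixing matrix
% s   -- independent sources shared across all subjects
% n_i -- subject-specific noise on the sources
x_i = A_i \left( s + n_i \right), \qquad i = 1, \dots, m
```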
This list is automatically generated from the titles and abstracts of the papers on this site.