Towards Best Practice of Interpreting Deep Learning Models for EEG-based Brain Computer Interfaces
- URL: http://arxiv.org/abs/2202.06948v3
- Date: Tue, 18 Apr 2023 03:29:58 GMT
- Title: Towards Best Practice of Interpreting Deep Learning Models for EEG-based Brain Computer Interfaces
- Authors: Jian Cui, Liqiang Yuan, Zhaoxiang Wang, Ruilin Li, Tianzi Jiang
- Abstract summary: We evaluate different deep interpretation techniques on EEG datasets.
The results reveal the importance of selecting a proper interpretation technique as the initial step.
We propose a set of procedures that allow the interpretation results to be presented in an understandable and trusted way.
- Score: 1.5670669686642233
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As deep learning has achieved state-of-the-art performance on many tasks of
EEG-based BCI, many recent efforts have tried to understand what the models have
learned. This is commonly done by generating a heatmap that indicates the extent
to which each pixel of the input contributes to the final classification of a
trained model. Despite their wide use, it is not yet understood to what extent
the obtained interpretation results can be trusted and how accurately they
reflect the model's decisions. To fill this research gap, we conduct a study to
quantitatively evaluate different deep interpretation techniques on EEG
datasets. The results reveal the importance of selecting a proper
interpretation technique as the initial step. In addition, we find that the
quality of the interpretation results is inconsistent across individual
samples, even when a method with good overall performance is used. Many
factors, including model structure and dataset type, can affect the quality of
the interpretation results. Based on these observations, we propose a set of
procedures that allow the interpretation results to be presented in an
understandable and trusted way. We illustrate the usefulness of our method for
EEG-based BCI with instances selected from different scenarios.
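As a concrete illustration of the heatmap idea in the abstract, the sketch below computes a plain input-gradient saliency map for a trained EEG classifier and scores it with a simple deletion check. This is a minimal sketch, assuming a PyTorch model that takes one trial of shape (1, channels, time_points); the function names, the zero baseline, and the 10% deletion fraction are illustrative choices, not the paper's exact evaluation pipeline.

```python
# Minimal sketch: input-gradient saliency for an EEG classifier plus a
# simple deletion-based faithfulness check. The model, shapes, and names
# are illustrative assumptions, not the paper's exact pipeline.
import torch

def gradient_saliency(model, x, target_class):
    """Heatmap |d(logit)/d(input)| for one EEG trial of shape (1, C, T)."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logit = model(x)[0, target_class]  # score of the class of interest
    logit.backward()
    return x.grad.abs().squeeze(0)     # (C, T): per-input contribution map

def deletion_score(model, x, heatmap, target_class, fraction=0.1):
    """Probability drop after zeroing the top `fraction` salient inputs.

    A larger drop suggests the heatmap highlights inputs the model
    actually relies on; one common way to compare interpretation methods.
    """
    k = max(1, int(fraction * heatmap.numel()))
    top_idx = heatmap.flatten().topk(k).indices
    x_pert = x.clone().flatten()
    x_pert[top_idx] = 0.0              # batch dim is 1, so indices line up
    x_pert = x_pert.view_as(x)
    with torch.no_grad():
        p = torch.softmax(model(x), dim=1)[0, target_class]
        p_pert = torch.softmax(model(x_pert), dim=1)[0, target_class]
    return (p - p_pert).item()
```

For a trial x classified as class c, deletion_score(model, x, gradient_saliency(model, x, c), c) yields a per-sample quality number, which is one way to surface the sample-level inconsistency the abstract reports.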
Related papers
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function (see the sketch below).
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
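As referenced above, a minimal sketch of the judge-as-reward loop this entry describes, assuming hypothetical generate and judge_score wrappers around two language models; the prompt and the best-of-n strategy are illustrative, not the paper's optimization procedure.

```python
# Minimal sketch: best-of-n instruction optimization with one LM as
# generator and another as judge. `generate` and `judge_score` are
# hypothetical wrappers around two LM endpoints.
from typing import Callable, List

def optimize_instruction(
    generate: Callable[[str], str],
    judge_score: Callable[[str], float],
    task: str,
    n_candidates: int = 8,
) -> str:
    """Generate n candidate worksheets and keep the judge's favorite."""
    candidates: List[str] = [
        generate(f"Write instructional material for: {task}")
        for _ in range(n_candidates)
    ]
    # The judge LM returns a scalar rating that acts as the reward.
    return max(candidates, key=judge_score)
```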
- Revisiting Demonstration Selection Strategies in In-Context Learning [66.11652803887284]
Large language models (LLMs) have shown an impressive ability to perform a wide range of tasks using in-context learning (ICL).
In this work, we first revisit the factors contributing to the variance in ICL performance from both the data and the model side, and find that the choice of demonstrations is both data- and model-dependent.
We propose a data- and model-dependent demonstration selection method, TopK + ConE, based on the assumption that the performance of a demonstration positively correlates with its contribution to the model's understanding of the test samples (see the sketch below).
arXiv Detail & Related papers (2024-01-22T16:25:27Z)
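As referenced above, a minimal sketch of a two-stage TopK-then-rerank selection, assuming hypothetical embed and lm_loss helpers. The reranking step reads "contribution to the model's understanding" as the reduction in the model's loss on the test input when a demonstration is prepended; the actual ConE criterion may differ.

```python
# Minimal sketch: two-stage demonstration selection for in-context
# learning. `embed` and `lm_loss` are hypothetical helpers; the
# reranking criterion is an illustrative reading of "contribution to
# the model's understanding of the test sample".
import numpy as np

def select_demonstrations(embed, lm_loss, pool, test_input, k=16, m=4):
    """TopK retrieval by cosine similarity, then rerank by loss reduction.

    pool: list of candidate demonstration strings.
    """
    q = embed(test_input)
    sims = [float(np.dot(embed(d), q)
                  / (np.linalg.norm(embed(d)) * np.linalg.norm(q) + 1e-9))
            for d in pool]
    topk = [pool[i] for i in np.argsort(sims)[::-1][:k]]
    base = lm_loss("", test_input)  # model's loss with no demonstration
    # Keep the m demonstrations that most reduce the model's loss on the
    # test input when prepended as context.
    gains = {d: base - lm_loss(d, test_input) for d in topk}
    return sorted(topk, key=gains.get, reverse=True)[:m]
```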
- EEGFormer: Towards Transferable and Interpretable Large-Scale EEG Foundation Model [39.363511340878624]
We present a novel EEG foundation model, namely EEGFormer, pretrained on large-scale compound EEG data.
To validate the effectiveness of our model, we extensively evaluate it on various downstream tasks and assess the performance under different transfer settings.
arXiv Detail & Related papers (2024-01-11T17:36:24Z)
- I-CEE: Tailoring Explanations of Image Classification Models to User Expertise [13.293968260458962]
We present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise.
I-CEE models the informativeness of the example images to depend on user expertise, resulting in different examples for different users.
Experiments with simulated users show that I-CEE improves users' ability to accurately predict the model's decisions.
arXiv Detail & Related papers (2023-12-19T12:26:57Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Interpretable Convolutional Neural Networks for Subject-Independent Motor Imagery Classification [22.488536453952964]
We propose an explainable deep learning model for brain computer interface (BCI) studies.
Specifically, we aim to classify EEG signals obtained from motor-imagery (MI) tasks.
We visualize the heatmap produced by layer-wise relevance propagation (LRP) as a topography to verify neuro-physiological factors (see the plotting sketch below).
arXiv Detail & Related papers (2021-12-14T07:35:52Z)
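As referenced above, a minimal sketch of the topography step, assuming the LRP relevance map is already computed by some explainer and plotting it with MNE-Python; the channel list, sampling rate, and random relevance values are placeholders.

```python
# Minimal sketch: render per-channel LRP relevance as a scalp topography
# with MNE-Python. The relevance values and channel list are placeholders;
# computing LRP itself is left to an explainer library.
import numpy as np
import mne
import matplotlib.pyplot as plt

ch_names = ["Fp1", "Fp2", "C3", "C4", "P3", "P4", "O1", "O2"]
info = mne.create_info(ch_names, sfreq=250.0, ch_types="eeg")
info.set_montage(mne.channels.make_standard_montage("standard_1020"))

relevance = np.random.rand(len(ch_names), 500)  # stand-in (channels, time) LRP map
per_channel = relevance.sum(axis=1)             # aggregate relevance per electrode

fig, ax = plt.subplots()
mne.viz.plot_topomap(per_channel, info, axes=ax, show=False)
ax.set_title("LRP relevance topography (illustrative)")
plt.show()
```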
- Evaluating Explainable Artificial Intelligence Methods for Multi-label Deep Learning Classification Tasks in Remote Sensing [0.0]
We develop deep learning models with state-of-the-art performance on benchmark datasets.
Ten XAI methods were employed to understand and interpret the models' predictions.
Occlusion, Grad-CAM, and LIME were the most interpretable and reliable XAI methods (see the occlusion sketch below).
arXiv Detail & Related papers (2021-04-03T11:13:14Z)
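As referenced above, a minimal sketch of the occlusion method in generic PyTorch form: slide a zero patch over the input and record the drop in the predicted class probability. The patch size, stride, and zero baseline are illustrative choices.

```python
# Minimal sketch: occlusion sensitivity for an image classifier.
# Sliding a zero patch over the input and recording the drop in class
# probability yields a coarse importance map.
import torch

def occlusion_map(model, x, target_class, patch=16, stride=16):
    """x: (1, C, H, W). Returns an importance grid over patch positions."""
    model.eval()
    with torch.no_grad():
        p0 = torch.softmax(model(x), dim=1)[0, target_class]
        _, _, H, W = x.shape
        heat = torch.zeros(H // stride, W // stride)
        for i, top in enumerate(range(0, H - patch + 1, stride)):
            for j, left in enumerate(range(0, W - patch + 1, stride)):
                x_occ = x.clone()
                x_occ[..., top:top + patch, left:left + patch] = 0.0
                p = torch.softmax(model(x_occ), dim=1)[0, target_class]
                heat[i, j] = p0 - p    # large drop = important region
    return heat
```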
- Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond [49.93153180169685]
We introduce and clarify two basic concepts, interpretations and interpretability, that are often confused.
We elaborate on the design of several recent interpretation algorithms from different perspectives by proposing a new taxonomy.
We summarize the existing work on evaluating models' interpretability using "trustworthy" interpretation algorithms.
arXiv Detail & Related papers (2021-03-19T08:40:30Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages label information to guide the construction of the latent representation (see the sketch below).
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
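As referenced above, a minimal sketch of a cross reconstruction loss of this kind, assuming per-view encoders and decoders: each view is rebuilt from the other view's latent code so that the latents carry the shared information. The mean-squared-error form and equal weighting are illustrative, not the paper's exact objective.

```python
# Minimal sketch: cross reconstruction between two views. Each decoder
# must rebuild its view from the *other* view's latent code, pushing the
# latents to carry the shared (common) information.
import torch
import torch.nn.functional as F

def cross_reconstruction_loss(enc_a, enc_b, dec_a, dec_b, x_a, x_b):
    z_a, z_b = enc_a(x_a), enc_b(x_b)
    loss_a = F.mse_loss(dec_a(z_b), x_a)  # rebuild view A from B's code
    loss_b = F.mse_loss(dec_b(z_a), x_b)  # rebuild view B from A's code
    return loss_a + loss_b
```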
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for the interpretable evaluation of the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area (see the bucketing sketch below).
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
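As referenced above, a minimal sketch of attribute-bucketed evaluation in the spirit of this entry: group gold entities by a single attribute (entity length here) and report per-bucket recall. Real interpretable evaluation uses more attributes and full precision/recall/F1; this shows only the bucketing idea.

```python
# Minimal sketch: bucket gold entities by length and report per-bucket
# recall, making it visible where a model's errors concentrate. The
# attribute choice and recall-only metric are simplifications.
from collections import defaultdict

def bucketed_recall(gold, predicted):
    """gold, predicted: sets of (sentence_id, start, end, label) spans."""
    hits, totals = defaultdict(int), defaultdict(int)
    for span in gold:
        length = span[2] - span[1]              # entity-length attribute
        bucket = "short" if length <= 2 else "long"
        totals[bucket] += 1
        hits[bucket] += span in predicted
    return {b: hits[b] / totals[b] for b in totals}
```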
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.