Can EEG resting state data benefit data-driven approaches for motor-imagery decoding?
- URL: http://arxiv.org/abs/2411.09789v1
- Date: Mon, 28 Oct 2024 07:18:32 GMT
- Title: Can EEG resting state data benefit data-driven approaches for motor-imagery decoding?
- Authors: Rishan Mehta, Param Rajpura, Yogesh Kumar Meena
- Abstract summary: We propose a feature concatenation approach to enhance decoding models' generalization.
We combine the EEGNet model, a standard convolutional neural network for EEG signal classification, with functional connectivity measures derived from resting-state EEG data.
While an improvement in mean accuracy is observed for within-user scenarios, concatenation does not benefit across-user scenarios when compared with random data concatenation.
- Score: 4.870701423888026
- License:
- Abstract: Resting-state EEG data in neuroscience research serve as reliable markers for user identification and reveal individual-specific traits. Despite this, the use of resting-state data in EEG classification models is limited. In this work, we propose a feature concatenation approach to enhance decoding models' generalization by integrating resting-state EEG, aiming to improve motor imagery BCI performance and develop a user-generalized model. Using feature concatenation, we combine the EEGNet model, a standard convolutional neural network for EEG signal classification, with functional connectivity measures derived from resting-state EEG data. The findings suggest that, although grounded in neuroscience and coupled with data-driven learning, the concatenation approach offers limited benefits for model generalization in within-user and across-user scenarios. While an improvement in mean accuracy for within-user scenarios is observed on two datasets, concatenation does not benefit across-user scenarios when compared with random data concatenation. The findings indicate the need for further investigation into model interpretability and into the effect of random data concatenation on model robustness.
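As a rough illustration of the feature concatenation described in the abstract, the sketch below fuses an EEGNet-style trial embedding with a flattened resting-state connectivity vector before classification. It is a minimal sketch, assuming a PyTorch backbone that returns a fixed-size embedding; the Pearson-correlation connectivity measure, the layer sizes, and the names `resting_state_connectivity` and `ConcatMIDecoder` are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn


def resting_state_connectivity(rest_eeg: torch.Tensor) -> torch.Tensor:
    """Flatten the upper triangle of the channel-wise Pearson correlation matrix.

    rest_eeg: (channels, time) resting-state recording for one user.
    """
    corr = torch.corrcoef(rest_eeg)
    rows, cols = torch.triu_indices(corr.shape[0], corr.shape[1], offset=1)
    return corr[rows, cols]


class ConcatMIDecoder(nn.Module):
    """Concatenate a motor-imagery trial embedding with resting-state features."""

    def __init__(self, eeg_backbone: nn.Module, emb_dim: int, fc_dim: int, n_classes: int = 2):
        super().__init__()
        self.backbone = eeg_backbone  # EEGNet-like CNN producing a trial embedding
        self.classifier = nn.Sequential(
            nn.Linear(emb_dim + fc_dim, 64),
            nn.ELU(),
            nn.Dropout(0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, mi_trial: torch.Tensor, rest_fc: torch.Tensor) -> torch.Tensor:
        # mi_trial: (batch, channels, time) motor-imagery epoch
        # rest_fc:  (batch, fc_dim) connectivity vector of the corresponding user
        emb = self.backbone(mi_trial)             # (batch, emb_dim)
        fused = torch.cat([emb, rest_fc], dim=1)  # feature concatenation
        return self.classifier(fused)
```

In a within-user setting, `rest_fc` would come from the same user's resting recording and is simply paired with every motor-imagery trial of that user.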
Related papers
- Graph Adapter of EEG Foundation Models for Parameter Efficient Fine Tuning [1.8946099300030472]
EEG-GraphAdapter (EGA) is a parameter-efficient fine-tuning (PEFT) approach for EEG foundation models.
EGA is integrated into pre-trained temporal backbone models as a GNN-based module.
It improves performance by up to 16.1% in the F1-score compared with the backbone BENDR model.
arXiv Detail & Related papers (2024-11-25T07:30:52Z)
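The adapter pattern summarized in the EEG-GraphAdapter entry above can be sketched roughly as follows: a small graph module over the EEG channels is trained while the pre-trained temporal backbone stays frozen. The learnable soft adjacency, the layer sizes, and the name `SimpleGraphAdapter` are assumptions made for illustration; this is not the EGA implementation.

```python
import torch
import torch.nn as nn


class SimpleGraphAdapter(nn.Module):
    """Tiny GNN-style adapter applied to per-channel backbone features."""

    def __init__(self, n_channels: int, feat_dim: int):
        super().__init__()
        # Learnable soft adjacency over EEG channels -- an illustrative assumption.
        self.adj_logits = nn.Parameter(torch.zeros(n_channels, n_channels))
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, feat_dim) features from the frozen temporal backbone
        adj = torch.softmax(self.adj_logits, dim=-1)
        return x + torch.relu(self.proj(adj @ x))  # residual message passing


def freeze_backbone(backbone: nn.Module) -> None:
    """Parameter-efficient fine-tuning: only the adapter receives gradients."""
    for p in backbone.parameters():
        p.requires_grad = False
```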
- Synthesizing Multimodal Electronic Health Records via Predictive Diffusion Models [69.06149482021071]
We propose a novel EHR data generation model called EHRPD.
It is a diffusion-based model designed to predict the next visit based on the current one while also incorporating time interval estimation.
We conduct experiments on two public datasets and evaluate EHRPD from fidelity, privacy, and utility perspectives.
arXiv Detail & Related papers (2024-06-20T02:20:23Z)
- Synthesizing EEG Signals from Event-Related Potential Paradigms with Conditional Diffusion Models [3.187381965457262]
We introduce a novel approach to conditional diffusion models that directly generate subject-, session-, and class-specific EEG data.
The results indicate that the proposed model can generate EEG data that resembles real data for each subject, session, and class.
arXiv Detail & Related papers (2024-03-27T11:58:45Z)
- Zero-shot Composed Text-Image Retrieval [72.43790281036584]
We consider the problem of composed image retrieval (CIR).
It aims to train a model that can fuse multi-modal information, e.g., text and images, to accurately retrieve images that match the query, extending the user's ability to express the query.
arXiv Detail & Related papers (2023-06-12T17:56:01Z)
- Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
arXiv Detail & Related papers (2023-05-16T07:30:29Z)
- A Federated Learning-based Industrial Health Prognostics for Heterogeneous Edge Devices using Matched Feature Extraction [16.337207503536384]
We propose a pioneering FL-based health prognostic model with a feature similarity-matched parameter aggregation algorithm.
We show that the proposed method yields accuracy improvements as high as 44.5% and 39.3% for state-of-health estimation and remaining useful life estimation.
arXiv Detail & Related papers (2023-05-13T07:20:31Z)
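One plausible reading of the "feature similarity-matched parameter aggregation" above is to weight each client's parameters by how similar its feature summaries are to those of the other clients. The sketch below is a heavily simplified illustration of that idea using cosine similarity; the weighting scheme and helper names are assumptions, not the paper's algorithm.

```python
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def similarity_weighted_average(client_params, client_features):
    """Aggregate client parameter vectors, weighting each by feature similarity.

    client_params:   list of 1-D parameter vectors, one per client.
    client_features: list of 1-D feature summaries (e.g. mean extracted features).
    """
    n = len(client_params)
    sims = np.array([
        np.mean([cosine(client_features[i], client_features[j])
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    weights = np.clip(sims, 0.0, None)
    weights = weights / weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))
```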
- Vector-Based Data Improves Left-Right Eye-Tracking Classifier Performance After a Covariate Distributional Shift [0.0]
We propose a fine-grain data approach for EEG-ET data collection in order to create more robust benchmarking.
We train machine learning models utilizing both coarse-grain and fine-grain data and compare their accuracies when tested on data of similar/different distributional patterns.
Results showed that models trained on fine-grain, vector-based data were less susceptible to distributional shifts than models trained on coarse-grain, binary-classified data.
arXiv Detail & Related papers (2022-07-31T16:27:50Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
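The iterative, column-wise idea behind the HyperImpute entry above can be illustrated with a plain round-robin imputation loop. The sketch below fixes a single random-forest learner for every column and omits HyperImpute's central contribution, the automatic per-column model selection; it is a generic sketch, not the library's API.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def iterative_impute(X: np.ndarray, n_rounds: int = 5) -> np.ndarray:
    """Round-robin iterative imputation with a per-column regression model."""
    X = X.astype(float).copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])    # initial fill with column means
    for _ in range(n_rounds):
        for j in range(X.shape[1]):
            miss = mask[:, j]
            if not miss.any():
                continue
            others = np.delete(X, j, axis=1)
            model = RandomForestRegressor(n_estimators=50)
            model.fit(others[~miss], X[~miss, j])      # fit on rows where column j is observed
            X[miss, j] = model.predict(others[miss])   # refresh the missing entries
    return X
```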
- ARM-Net: Adaptive Relation Modeling Network for Structured Data [29.94433633729326]
ARM-Net is an adaptive relation modeling network tailored for structured data, accompanied by ARMOR, a lightweight framework built on ARM-Net for relational data.
We show that ARM-Net consistently outperforms existing models and provides more interpretable predictions across datasets.
arXiv Detail & Related papers (2021-07-05T07:37:24Z)
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
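A schematic of the Negative Data Augmentation idea above is sketched below: negative augmentations of real images are treated as additional fake samples when updating the discriminator. The specific augmentation (a simple band shuffle) and the binary cross-entropy losses are illustrative choices, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F


def band_shuffle(x: torch.Tensor, bands: int = 4) -> torch.Tensor:
    # x: (batch, C, H, W). Split the height into bands and permute them so the
    # sample stays locally realistic but globally implausible.
    parts = list(torch.chunk(x, bands, dim=2))
    perm = torch.randperm(len(parts)).tolist()
    return torch.cat([parts[i] for i in perm], dim=2)


def discriminator_loss(disc, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    nda = band_shuffle(real)                            # negative augmentation of real data
    real_logits = disc(real)
    fake_logits = torch.cat([disc(fake), disc(nda)])    # NDA samples join the fake batch
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
```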
This list is automatically generated from the titles and abstracts of the papers listed on this site.