Using i-vectors for subject-independent cross-session EEG transfer learning
- URL: http://arxiv.org/abs/2401.08851v1
- Date: Tue, 16 Jan 2024 21:56:27 GMT
- Title: Using i-vectors for subject-independent cross-session EEG transfer learning
- Authors: Jonathan Lasko, Jeff Ma, Mike Nicoletti, Jonathan Sussman-Fort, Sooyoung Jeong, William Hartmann
- Abstract summary: Cognitive load classification is the task of automatically determining an individual's utilization of working memory resources during performance of a task, based on physiologic measures such as electroencephalography (EEG).
In this paper, we follow a cross-disciplinary approach, where tools and methodologies from speech processing are used to tackle this problem.
The corpus we use was released publicly in 2021 as part of the first passive brain-computer interface competition on cross-session workload estimation.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Cognitive load classification is the task of automatically determining an
individual's utilization of working memory resources during performance of a
task based on physiologic measures such as electroencephalography (EEG). In
this paper, we follow a cross-disciplinary approach, where tools and
methodologies from speech processing are used to tackle this problem. The
corpus we use was released publicly in 2021 as part of the first passive
brain-computer interface competition on cross-session workload estimation. We
present our approach which used i-vector-based neural network classifiers to
accomplish inter-subject cross-session EEG transfer learning, achieving 18%
relative improvement over equivalent subject-dependent models. We also report
experiments showing how our subject-independent models perform competitively on
held-out subjects and improve with additional subject data, suggesting that
subject-dependent training is not required for effective cognitive load
determination.
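The abstract's pipeline (speech-style i-vector embeddings of EEG feature frames, fed to a neural-network classifier) can be sketched roughly as follows. This is not the authors' code: the feature dimensions, UBM size, and i-vector dimension are illustrative, and the total-variability matrix T is left random here, whereas a real system would train it with EM over many sessions before fitting the downstream classifier.

```python
# Minimal sketch of an i-vector front end for EEG windows (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
D, C, R = 8, 4, 16                       # feature dim, UBM components, i-vector dim
frames = rng.normal(size=(2000, D))      # stand-in for pooled EEG feature frames

# 1) Train a diagonal-covariance UBM on frames pooled across subjects/sessions.
ubm = GaussianMixture(n_components=C, covariance_type="diag",
                      random_state=0).fit(frames)

def ivector(X, T):
    """MAP point estimate w = (I + T' S^-1 N T)^-1 T' S^-1 F."""
    post = ubm.predict_proba(X)                   # (n, C) responsibilities
    N = post.sum(axis=0)                          # zeroth-order statistics
    F = post.T @ X - N[:, None] * ubm.means_      # centered first-order statistics
    Sinv = 1.0 / ubm.covariances_                 # (C, D) diagonal precisions
    L, b = np.eye(R), np.zeros(R)
    for c in range(C):
        Tc = T[c]                                 # (D, R) block for component c
        L += N[c] * Tc.T @ (Sinv[c][:, None] * Tc)
        b += Tc.T @ (Sinv[c] * F[c])
    return np.linalg.solve(L, b)

# 2) In a real system T is trained with EM; a random T still shows the mechanics.
T = rng.normal(scale=0.1, size=(C, D, R))
w = ivector(rng.normal(size=(256, D)), T)         # one session's i-vector
print(w.shape)                                    # (16,)
```

The resulting fixed-length i-vectors would then be the inputs to a small neural-network classifier trained across subjects, which is what enables the inter-subject cross-session transfer described in the abstract.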
Related papers
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning [99.05401042153214]
In-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) and task learning (TL).
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z)
- Wearable Device-Based Real-Time Monitoring of Physiological Signals: Evaluating Cognitive Load Across Different Tasks [6.673424334358673]
This study employs cutting-edge wearable monitoring technology to conduct cognitive load assessment on electroencephalogram (EEG) data of secondary vocational students.
The research delves into their application value in assessing cognitive load among secondary vocational students and their utility across various tasks.
arXiv Detail & Related papers (2024-06-11T10:48:26Z)
- DUCK: Distance-based Unlearning via Centroid Kinematics [40.2428948628001]
This work introduces a novel unlearning algorithm, denoted as Distance-based Unlearning via Centroid Kinematics (DUCK).
Evaluation of the algorithm's performance is conducted across various benchmark datasets.
We also introduce a novel metric, called Adaptive Unlearning Score (AUS), encompassing not only the efficacy of the unlearning process in forgetting target data but also quantifying the performance loss relative to the original model.
arXiv Detail & Related papers (2023-12-04T17:10:25Z)
- EEG-based Cognitive Load Classification using Feature Masked Autoencoding and Emotion Transfer Learning [13.404503606887715]
We present a new solution for the classification of cognitive load using electroencephalogram (EEG).
We pre-train our model using self-supervised masked autoencoding on emotion-related EEG datasets.
The results of our experiments show that our proposed approach achieves strong results and outperforms conventional single-stage fully supervised learning.
arXiv Detail & Related papers (2023-08-01T02:59:19Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- 2021 BEETL Competition: Advancing Transfer Learning for Subject Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z)
- Team Cogitat at NeurIPS 2021: Benchmarks for EEG Transfer Learning Competition [55.34407717373643]
Building subject-independent deep learning models for EEG decoding faces the challenge of strong covariate shift.
Our approach is to explicitly align feature distributions at various layers of the deep learning model.
The methodology won first place in the 2021 Benchmarks in EEG Transfer Learning competition, hosted at the NeurIPS conference.
arXiv Detail & Related papers (2022-02-01T11:11:08Z)
- Common Spatial Generative Adversarial Networks based EEG Data Augmentation for Cross-Subject Brain-Computer Interface [4.8276709243429]
Cross-subject application of EEG-based brain-computer interface (BCI) has always been limited by large individual differences and complex characteristics that are difficult to perceive.
We propose a cross-subject EEG classification framework with a generative adversarial networks (GANs) based method named common spatial GAN (CS-GAN).
Our framework provides a promising way to deal with the cross-subject problem and promote the practical application of BCI.
arXiv Detail & Related papers (2021-02-08T10:37:03Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
- Machine Learning for Motor Learning: EEG-based Continuous Assessment of Cognitive Engagement for Adaptive Rehabilitation Robots [0.0]
Cognitive engagement (CE) is crucial for motor learning, but it remains underutilized in rehabilitation robots.
We propose an end-to-end computational framework that assesses CE in real-time, using electroencephalography (EEG) as objective measurements.
arXiv Detail & Related papers (2020-02-18T13:13:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.