Modeling cognitive load as a self-supervised brain rate with
electroencephalography and deep learning
- URL: http://arxiv.org/abs/2209.10992v1
- Date: Wed, 21 Sep 2022 07:44:21 GMT
- Title: Modeling cognitive load as a self-supervised brain rate with
electroencephalography and deep learning
- Authors: Luca Longo
- Abstract summary: This research presents a novel self-supervised method for mental workload modelling from EEG data.
The method is a convolutional recurrent neural network trainable with spatially preserving spectral topographic head-maps from EEG data to fit the brain rate variable.
Findings point to the existence of quasi-stable blocks of learnt high-level representations of cognitive activation because they can be induced through convolution and seem not to be dependent on each other over time, intuitively matching the non-stationary nature of brain responses.
- Score: 2.741266294612776
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The principal reason for measuring mental workload is to quantify the
cognitive cost of performing tasks to predict human performance. Unfortunately,
a method for assessing mental workload that has general applicability does not
exist yet. This research presents a novel self-supervised method for mental
workload modelling from EEG data employing Deep Learning and a continuous brain
rate, an index of cognitive activation, without requiring human declarative
knowledge. This method is a convolutional recurrent neural network trainable
with spatially preserving spectral topographic head-maps from EEG data to fit
the brain rate variable. Findings demonstrate the capacity of the convolutional
layers to learn meaningful high-level representations from EEG data since
within-subject models had a test Mean Absolute Percentage Error average of 11%.
Adding a Long Short-Term Memory layer to handle sequences of high-level
representations improved accuracy, although the improvement was not
significant. Findings point to the existence of quasi-stable blocks of learnt
high-level representations of cognitive activation because they can be induced
through convolution and seem not to be dependent on each other over time,
intuitively matching the non-stationary nature of brain responses.
Across-subject models, induced with data from an increasing number of
participants, thus containing more variability, obtained a similar accuracy to
the within-subject models. This highlights the potential generalisability of
the induced high-level representations across people, suggesting the existence
of subject-independent cognitive activation patterns. This research contributes
to the body of knowledge by providing scholars with a novel computational
method for mental workload modelling that aims to be generally applicable and
does not rely on ad-hoc human-crafted models, thereby supporting replicability
and falsifiability.
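The target variable here, the brain rate, is a single continuous index of cognitive activation derived from the EEG power spectrum. A minimal sketch of how such an index and the reported MAPE test metric can be computed, assuming the standard weighted-mean-frequency definition of brain rate (Pop-Jordanova & Pop-Jordanov); the band centre frequencies below are illustrative choices, not values from this paper:

```python
import numpy as np

# Illustrative band centre frequencies (Hz); not taken from the paper.
BAND_CENTRES_HZ = {"delta": 2.0, "theta": 6.0, "alpha": 10.5,
                   "beta": 19.0, "gamma": 35.0}

def brain_rate(band_powers: dict) -> float:
    """Mean EEG frequency weighted by relative band power:
    sum_i f_i * P_i / sum_i P_i."""
    total = sum(band_powers.values())
    return sum(BAND_CENTRES_HZ[b] * p for b, p in band_powers.items()) / total

def mape(y_true, y_pred) -> float:
    """Mean Absolute Percentage Error, the test metric reported above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Example: an alpha-dominant spectrum pulls the index towards the alpha band.
powers = {"delta": 0.1, "theta": 0.2, "alpha": 0.5, "beta": 0.15, "gamma": 0.05}
print(round(brain_rate(powers), 2))            # 11.25
print(round(mape([10.0, 12.0], [9.0, 13.2]), 1))  # 10.0
```

In the paper's setting this scalar would be regressed from spatially preserving spectral topographic head-maps by the convolutional recurrent network; the functions above only illustrate the target and the evaluation metric.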
Related papers
- DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial Dynamics in Brain Networks [4.041732967881764]
Most rs-fMRI studies compute a single static functional connectivity matrix across brain regions of interest.
These approaches are at risk of oversimplifying brain dynamics and lack proper consideration of the goal at hand.
We propose a novel interpretable deep learning framework that learns a goal-specific functional connectivity matrix directly from time series.
arXiv Detail & Related papers (2024-05-19T23:35:06Z)
- MBrain: A Multi-channel Self-Supervised Learning Framework for Brain Signals [7.682832730967219]
We study the self-supervised learning framework for brain signals that can be applied to pre-train either SEEG or EEG data.
We propose MBrain to learn implicit spatial and temporal correlations between different channels.
Our model outperforms several state-of-the-art time series SSL and unsupervised models, and has the ability to be deployed to clinical practice.
arXiv Detail & Related papers (2023-06-15T09:14:26Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer [19.329190789275565]
We introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons.
We show that our model achieves state-of-the-art performance on ensemble level in estimating neural activities across four neural datasets.
arXiv Detail & Related papers (2022-06-09T18:54:23Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Evaluating deep transfer learning for whole-brain cognitive decoding [11.898286908882561]
Transfer learning (TL) is well-suited to improve the performance of deep learning (DL) models in datasets with small numbers of samples.
Here, we evaluate TL for the application of DL models to the decoding of cognitive states from whole-brain functional Magnetic Resonance Imaging (fMRI) data.
arXiv Detail & Related papers (2021-11-01T15:44:49Z)
- Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE [10.529943544385585]
We propose a method that integrates key ingredients from latent models and traditional neural encoding models.
Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoders.
We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex.
arXiv Detail & Related papers (2020-11-09T22:00:38Z)
- Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic Programmed Deep Kernels [93.58854458951431]
We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep learning.
arXiv Detail & Related papers (2020-09-16T15:16:03Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical regularization gains, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.