Modeling cognitive load as a self-supervised brain rate with
electroencephalography and deep learning
- URL: http://arxiv.org/abs/2209.10992v1
- Date: Wed, 21 Sep 2022 07:44:21 GMT
- Title: Modeling cognitive load as a self-supervised brain rate with
electroencephalography and deep learning
- Authors: Luca Longo
- Abstract summary: This research presents a novel self-supervised method for mental workload modelling from EEG data.
The method is a convolutional recurrent neural network trainable with spatially preserving spectral topographic head-maps from EEG data to fit the brain rate variable.
Findings point to the existence of quasi-stable blocks of learnt high-level representations of cognitive activation: they can be induced through convolution and appear not to depend on each other over time, intuitively matching the non-stationary nature of brain responses.
- Score: 2.741266294612776
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The principal reason for measuring mental workload is to quantify the
cognitive cost of performing tasks to predict human performance. Unfortunately,
a method for assessing mental workload that has general applicability does not
exist yet. This research presents a novel self-supervised method for mental
workload modelling from EEG data employing Deep Learning and a continuous brain
rate, an index of cognitive activation, without requiring human declarative
knowledge. This method is a convolutional recurrent neural network trainable
with spatially preserving spectral topographic head-maps from EEG data to fit
the brain rate variable. Findings demonstrate the capacity of the convolutional
layers to learn meaningful high-level representations from EEG data since
within-subject models had a test Mean Absolute Percentage Error average of 11%.
The addition of a Long Short-Term Memory layer for handling sequences of
high-level representations did not yield a statistically significant gain,
although it did improve accuracy. Findings point to the existence of
quasi-stable blocks of learnt high-level representations of cognitive
activation: they can be induced through convolution and appear not to depend
on each other over time, intuitively matching the non-stationary nature of
brain responses. Across-subject models, induced with data from an increasing
number of participants and thus containing more variability, obtained accuracy
similar to that of the within-subject models. This highlights the potential
generalisability of the induced high-level representations across people,
suggesting the existence of subject-independent cognitive activation patterns.
This research contributes to the body of knowledge by providing scholars with
a novel computational method for mental workload modelling that aims to be
generally applicable and does not rely on ad-hoc human-crafted models, thereby
supporting replicability and falsifiability.
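As a rough illustration of the method's two ingredients, here is a minimal sketch, under stated assumptions, of (a) a spectrum-weighted mean frequency in the spirit of the brain rate index (each band's centre frequency weighted by its relative spectral power) and (b) a small convolutional recurrent regressor over sequences of band-wise topographic head-maps. The band edges, map resolution, layer sizes, and the use of Welch's method are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import welch

# Assumed EEG band edges in Hz; the paper's binning may differ.
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "alpha": (8.0, 13.0),
         "beta": (13.0, 30.0), "gamma": (30.0, 45.0)}

def brain_rate(eeg, fs):
    """Spectrum-weighted mean frequency of an EEG window.

    eeg: array of shape (n_channels, n_samples); fs: sampling rate in Hz.
    Each band's centre frequency is weighted by its relative spectral power.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    psd = psd.mean(axis=0)                        # average across electrodes
    centres, powers = [], []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        centres.append((lo + hi) / 2.0)
        powers.append(psd[mask].sum())
    powers = np.asarray(powers)
    return float(np.dot(centres, powers / powers.sum()))

class TopoCRNN(nn.Module):
    """Convolutions over per-band topographic head-maps, an LSTM over time,
    and a linear head regressing the brain rate (a sketch, not the paper's model)."""
    def __init__(self, bands=len(BANDS), hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(bands, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.lstm = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, bands, H, W)
        b, t = x.shape[:2]
        z = self.conv(x.flatten(0, 1))     # fold time into batch for 2D convs
        z = z.flatten(1).view(b, t, -1)    # restore the temporal axis
        out, _ = self.lstm(z)
        return self.head(out[:, -1]).squeeze(-1)

# Usage: maps = torch.randn(8, 10, 5, 32, 32); TopoCRNN()(maps) -> 8 predictions
```

Training the network against brain_rate values computed from the same recordings is what makes the setup self-supervised: no human-declared workload labels are required.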
Related papers
- BLEND: Behavior-guided Neural Population Dynamics Modeling via Privileged Knowledge Distillation [6.3559178227943764]
We propose BLEND, a behavior-guided neural population dynamics modeling framework via privileged knowledge distillation.
By considering behavior as privileged information, we train a teacher model that takes both behavior observations (privileged features) and neural activities (regular features) as inputs.
A student model is then distilled using only neural activity; a minimal sketch of this teacher-student setup appears after this list.
arXiv Detail & Related papers (2024-10-02T12:45:59Z)
- Machine Learning on Dynamic Functional Connectivity: Promise, Pitfalls, and Interpretations [7.013079422694949]
We seek to establish a well-founded empirical guideline for designing deep models for functional neuroimages.
We put the spotlight on: (1) What is the current state-of-the-art (SOTA) performance in cognitive task recognition and disease diagnosis using fMRI?
We have conducted a comprehensive evaluation and statistical analysis, in various settings, to answer the above outstanding questions.
arXiv Detail & Related papers (2024-09-17T17:24:17Z)
- Growing Deep Neural Network Considering with Similarity between Neurons [4.32776344138537]
We explore a novel approach that progressively increases the number of neurons in compact models during training.
We propose a method that reduces feature extraction biases and neuronal redundancy by introducing constraints based on neuron similarity distributions.
Results on the CIFAR-10 and CIFAR-100 datasets demonstrate improved accuracy.
arXiv Detail & Related papers (2024-08-23T11:16:37Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer [19.329190789275565]
We introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons.
We show that our model achieves state-of-the-art performance at the ensemble level in estimating neural activities across four neural datasets.
arXiv Detail & Related papers (2022-06-09T18:54:23Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE [10.529943544385585]
We propose a method that integrates key ingredients from latent models and traditional neural encoding models.
Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoders.
We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex.
arXiv Detail & Related papers (2020-11-09T22:00:38Z)
- Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic Programmed Deep Kernels [93.58854458951431]
We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep learning.
arXiv Detail & Related papers (2020-09-16T15:16:03Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical regularization gains, making the performance difference between the neural ODE and neural SDE negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
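For the BLEND entry above, the following is a minimal sketch of privileged knowledge distillation under toy assumptions: a teacher encoder receives behavior observations (the privileged features) together with neural activity, and a student that sees only neural activity is trained to match the teacher's representation. The encoder architecture, feature sizes, and the MSE matching loss are illustrative choices, not BLEND's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Tiny MLP encoder; a stand-in for BLEND's actual architectures."""
    def __init__(self, in_dim, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))
    def forward(self, x):
        return self.net(x)

NEURAL_DIM, BEHAVIOR_DIM = 100, 6             # hypothetical feature sizes
teacher = Encoder(NEURAL_DIM + BEHAVIOR_DIM)  # sees privileged behavior features
student = Encoder(NEURAL_DIM)                 # sees neural activity only
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(neural, behavior):
    """One distillation step: the student mimics the (frozen) teacher."""
    with torch.no_grad():
        target = teacher(torch.cat([neural, behavior], dim=-1))
    loss = F.mse_loss(student(neural), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage: distill_step(torch.randn(16, NEURAL_DIM), torch.randn(16, BEHAVIOR_DIM))
```

At test time only the student is used, so behavior recordings are needed during training alone, which is the point of treating them as privileged information.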
This list is automatically generated from the titles and abstracts of the papers on this site.