EEGminer: Discovering Interpretable Features of Brain Activity with
Learnable Filters
- URL: http://arxiv.org/abs/2110.10009v1
- Date: Tue, 19 Oct 2021 14:22:04 GMT
- Authors: Siegfried Ludwig, Stylianos Bakas, Dimitrios A. Adamos, Nikolaos
Laskaris, Yannis Panagakis, Stefanos Zafeiriou
- Abstract summary: We propose a novel differentiable EEG decoding pipeline consisting of learnable filters and a pre-determined feature extraction module.
We demonstrate the utility of our model towards emotion recognition from EEG signals on the SEED dataset and on a new EEG dataset of unprecedented size.
The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Patterns of brain activity are associated with different brain processes and
can be used to identify different brain states and make behavioral predictions.
However, the relevant features are not readily apparent and accessible. To mine
informative latent representations from multichannel EEG recordings, we propose
a novel differentiable EEG decoding pipeline consisting of learnable filters
and a pre-determined feature extraction module. Specifically, we introduce
filters parameterized by generalized Gaussian functions that offer a smooth
derivative for stable end-to-end model training and allow for learning
interpretable features. For the feature module, we use signal magnitude and
functional connectivity. We demonstrate the utility of our model towards
emotion recognition from EEG signals on the SEED dataset, as well as on a new
EEG dataset of unprecedented size (i.e., 763 subjects), where we identify
consistent trends of music perception and related individual differences. The
discovered features align with previous neuroscience studies and offer new
insights, such as marked differences in the functional connectivity profile
between left and right temporal areas during music listening. This agrees with
the respective specialisation of the temporal lobes regarding music perception
proposed in the literature.
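The pipeline described in the abstract — a learnable band-pass filter parameterized by a generalized Gaussian, followed by a fixed feature module computing signal magnitude and functional connectivity — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation: the `(center, width, shape)` parameterization, the FFT-masking implementation, and the use of RMS magnitude and Pearson correlation are assumptions made for the sketch.

```python
import numpy as np

def generalized_gaussian(freqs, center, width, shape):
    """Frequency-domain magnitude response exp(-|(f - center)/width|^shape).

    shape = 2 recovers a standard Gaussian; larger shape values approach a
    rectangular band-pass. In the paper's pipeline all three parameters
    would be learnable; this exact parameterization is an assumption.
    """
    return np.exp(-np.abs((freqs - center) / width) ** shape)

def filter_channels(eeg, fs, center, width, shape):
    """Band-pass each channel (rows of eeg) by masking its real FFT spectrum."""
    n = eeg.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = generalized_gaussian(freqs, center, width, shape)
    return np.fft.irfft(np.fft.rfft(eeg, axis=-1) * mask, n=n, axis=-1)

def features(filtered):
    """Pre-determined feature module: per-channel magnitude (RMS here) and
    channel-by-channel functional connectivity (Pearson correlation here)."""
    magnitude = np.sqrt(np.mean(filtered ** 2, axis=-1))
    connectivity = np.corrcoef(filtered)
    return magnitude, connectivity

# Toy example: two channels sharing a 10 Hz rhythm plus channel-specific noise.
fs, secs = 128, 4
t = np.arange(fs * secs) / fs
rng = np.random.default_rng(0)
alpha = np.sin(2 * np.pi * 10 * t)
eeg = np.stack([alpha + 0.5 * rng.standard_normal(t.size),
                alpha + 0.5 * rng.standard_normal(t.size)])
filtered = filter_channels(eeg, fs, center=10.0, width=3.0, shape=4.0)
mag, conn = features(filtered)
```

After narrow-band filtering around 10 Hz, the broadband noise is largely suppressed and the shared rhythm dominates, so the between-channel connectivity is high even though the raw channels are only moderately correlated — the kind of filtered-connectivity feature the abstract describes.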
Related papers
- Feature Estimation of Global Language Processing in EEG Using Attention Maps
This study introduces a novel approach to EEG feature estimation that utilizes the weights of deep learning models to explore this association.
We demonstrate that attention maps generated from Vision Transformers and EEGNet effectively identify features that align with findings from prior studies.
The application of Mel-Spectrogram with ViTs enhances the resolution of temporal and frequency-related EEG characteristics.
arXiv Detail & Related papers (2024-09-27T22:52:31Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding
We introduce a novel semantic alignment method of multi-subject fMRI signals using so-called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used to condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- An Extended Variational Mode Decomposition Algorithm Developed Speech Emotion Recognition Performance
This study proposes VGG-optiVMD, an empowered variational mode decomposition algorithm, to distinguish meaningful speech features.
Various feature vectors were employed to train the VGG16 network on different databases and to assess the reliability of VGG-optiVMD.
Results confirmed a synergistic relationship between the fine-tuning of the signal sample rate and decomposition parameters with classification accuracy.
arXiv Detail & Related papers (2023-12-18T05:24:03Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Enhancing Affective Representations of Music-Induced EEG through Multimodal Supervision and Latent Domain Adaptation
We employ music signals as a supervisory modality to EEG, aiming to project their semantic correspondence onto a common representation space.
We utilize a bi-modal framework by combining an LSTM-based attention model to process EEG and a pre-trained model for music tagging, along with a reverse domain discriminator to align the distributions of the two modalities.
The resulting framework can be utilized for emotion recognition both directly, by performing supervised predictions from either modality, and indirectly, by providing relevant music samples to EEG input queries.
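The indirect use described above — answering an EEG query with relevant music samples from the shared representation space — amounts to a nearest-neighbour lookup. The cosine-similarity ranking below is a generic sketch of that idea, not the authors' method; the function name and the toy embeddings are hypothetical.

```python
import numpy as np

def retrieve_music(eeg_embedding, music_embeddings, k=3):
    """Rank music samples by cosine similarity to an EEG query embedding
    projected into the shared representation space (illustrative sketch)."""
    q = eeg_embedding / np.linalg.norm(eeg_embedding)
    m = music_embeddings / np.linalg.norm(music_embeddings, axis=1, keepdims=True)
    return np.argsort(m @ q)[::-1][:k]  # indices of the k most similar samples

# Hypothetical 4-dimensional embeddings for five music samples.
music = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0, 0.0])  # EEG embedding near samples 0 and 2
top = retrieve_music(query, music, k=2)
```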
arXiv Detail & Related papers (2022-02-20T07:32:12Z)
- Learning shared neural manifolds from multi-subject FMRI data
We propose a neural network called MRMD-AE that learns a common embedding from multiple subjects in an experiment.
We show that our learned common space represents a temporal manifold (where new points not seen during training can be mapped) and improves the classification of stimulus features at unseen timepoints.
We believe this framework can be used for many downstream applications such as guided brain-computer interface (BCI) training in the future.
arXiv Detail & Related papers (2021-12-22T23:08:39Z)
- CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals
We propose CogAlign, an approach that integrates cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z)
- An Explainable Model for EEG Seizure Detection based on Connectivity Features
We propose to learn a deep neural network that detects whether a particular data window belongs to a seizure or not.
Taking our data as a sequence of ten sub-windows, we aim at designing an optimal deep learning model using attention, CNN, BiLSTM, and fully connected layers.
Our best model architecture achieved 97.03% accuracy on a balanced MIT-BIH data subset.
arXiv Detail & Related papers (2020-09-26T11:07:30Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.