DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial
Attention Detection
- URL: http://arxiv.org/abs/2309.07147v1
- Date: Thu, 7 Sep 2023 13:43:46 GMT
- Title: DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial
Attention Detection
- Authors: Cunhang Fan, Hongyu Zhang, Wei Huang, Jun Xue, Jianhua Tao, Jiangyan
Yi, Zhao Lv and Xiaopei Wu
- Abstract summary: Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data like images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
- Score: 49.196182908826565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Auditory Attention Detection (AAD) aims to detect the target speaker from brain
signals in a multi-speaker environment. Although EEG-based AAD methods have
shown promising results in recent years, current approaches primarily rely on
traditional convolutional neural networks designed for processing Euclidean data
such as images, which makes it challenging to handle EEG signals with their
non-Euclidean characteristics. To address this problem, this paper
proposes a dynamical graph self-distillation (DGSD) approach for AAD, which
does not require speech stimuli as input. Specifically, to effectively
represent the non-Euclidean properties of EEG signals, dynamical graph
convolutional networks are applied to represent the graph structure of EEG
signals, which can also extract crucial features related to auditory spatial
attention in EEG signals. In addition, to further improve AAD detection
performance, self-distillation, consisting of feature distillation and
hierarchical distillation strategies at each layer, is integrated. These
strategies leverage features and classification results from the deepest
network layers to guide the learning of shallow layers. Our experiments are
conducted on two publicly available datasets, KUL and DTU. Under a 1-second
time window, we achieve 90.0% and 79.6% accuracy on KUL and DTU,
respectively. We compare our DGSD method with competitive baselines, and the
experimental results indicate that the proposed DGSD method not only outperforms
the best reproducible baseline in detection performance but also uses
approximately 100 times fewer trainable parameters.
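For intuition, the following is a minimal PyTorch sketch of the two ingredients described above: a graph convolution over EEG channels with a dynamically learned adjacency, and a self-distillation loss in which the deepest layer's features and soft predictions guide the shallower layers. Layer sizes, the adjacency parameterization, and the loss weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a dynamical graph convolution over EEG channels plus
# layer-wise self-distillation. All sizes and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynGraphConv(nn.Module):
    """Graph convolution with a learnable (dynamic) channel adjacency."""
    def __init__(self, n_channels, in_dim, out_dim):
        super().__init__()
        self.adj = nn.Parameter(torch.randn(n_channels, n_channels) * 0.01)
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                        # x: (batch, channels, features)
        a = torch.softmax(torch.relu(self.adj), dim=-1)   # normalized adjacency
        return torch.relu(self.lin(a @ x))       # aggregate neighbors, then project

class DGSDSketch(nn.Module):
    def __init__(self, n_channels=64, in_dim=128, hid=64, n_classes=2, n_layers=3):
        super().__init__()
        dims = [in_dim] + [hid] * n_layers
        self.layers = nn.ModuleList(
            [DynGraphConv(n_channels, dims[i], dims[i + 1]) for i in range(n_layers)])
        # one auxiliary classifier per layer so shallow layers receive supervision
        self.heads = nn.ModuleList(
            [nn.Linear(n_channels * hid, n_classes) for _ in range(n_layers)])

    def forward(self, x):
        feats, logits = [], []
        for layer, head in zip(self.layers, self.heads):
            x = layer(x)
            feats.append(x)
            logits.append(head(x.flatten(1)))
        return feats, logits

def self_distillation_loss(feats, logits, labels, t=2.0, alpha=0.3, beta=0.1):
    """Deepest layer guides shallow layers (hierarchical + feature distillation)."""
    loss = F.cross_entropy(logits[-1], labels)             # deepest head, hard labels
    soft = F.softmax(logits[-1].detach() / t, dim=1)       # teacher soft targets
    for f, z in zip(feats[:-1], logits[:-1]):
        loss = loss + F.cross_entropy(z, labels)                         # hard labels
        loss = loss + alpha * t * t * F.kl_div(F.log_softmax(z / t, dim=1),
                                               soft, reduction="batchmean")
        loss = loss + beta * F.mse_loss(f, feats[-1].detach())           # feature hint
    return loss
```

In such a setup, only the deepest head would be used at inference; the auxiliary heads exist solely to carry the distillation signal during training.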
Related papers
- LEAD: Large Foundation Model for EEG-Based Alzheimer's Disease Detection [4.935843202928883]
We propose LEAD, the first large foundation model for EEG-based Alzheimer's Disease detection.
We pre-train the model on 11 EEG datasets and perform unified fine-tuning on 5 AD datasets.
Our method demonstrates outstanding AD detection performance, achieving up to a 9.86% increase in F1 score at the sample level and up to a 9.31% increase at the subject level.
arXiv Detail & Related papers (2025-02-02T04:19:35Z) - CEReBrO: Compact Encoder for Representations of Brain Oscillations Using Efficient Alternating Attention [53.539020807256904]
We introduce a Compact Encoder for Representations of Brain Oscillations using efficient alternating attention (CEReBrO).
Our tokenization scheme represents EEG signals as per-channel patches.
We propose an alternating attention mechanism that jointly models intra-channel temporal dynamics and inter-channel spatial correlations, achieving a 2x speed improvement with 6x less memory than standard self-attention.
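A rough sketch of what such alternating attention over per-channel patch tokens can look like is given below: one attention pass mixes patches within each channel (temporal), the next mixes channels at each patch position (spatial). Embedding size, head count, and the residual/norm layout are assumptions, not the paper's exact design.

```python
# Rough sketch of alternating attention over per-channel EEG patch tokens.
import torch
import torch.nn as nn

class AlternatingAttentionBlock(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):                        # x: (batch, channels, patches, dim)
        b, c, p, d = x.shape
        t = x.reshape(b * c, p, d)               # temporal: patches within a channel
        tn = self.norm1(t)
        t = t + self.temporal(tn, tn, tn)[0]
        s = t.reshape(b, c, p, d).permute(0, 2, 1, 3).reshape(b * p, c, d)
        sn = self.norm2(s)                       # spatial: channels at a patch index
        s = s + self.spatial(sn, sn, sn)[0]
        return s.reshape(b, p, c, d).permute(0, 2, 1, 3)
```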
arXiv Detail & Related papers (2025-01-18T21:44:38Z) - CognitionCapturer: Decoding Visual Stimuli From Human EEG Signal With Multimodal Information [61.1904164368732]
We propose CognitionCapturer, a unified framework that fully leverages multimodal data to represent EEG signals.
Specifically, CognitionCapturer trains Modality Experts for each modality to extract cross-modal information from the EEG modality.
The framework does not require any fine-tuning of the generative models and can be extended to incorporate more modalities.
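As a loose illustration of the modality-expert idea, a small projection per modality can map EEG features toward that modality's frozen embedding space; the projection sizes and the contrastive (InfoNCE) alignment objective below are assumptions for illustration, not the paper's specification.

```python
# Loose sketch: per-modality projection of EEG features, aligned contrastively
# against frozen image/text embeddings. Sizes and objective are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityExpert(nn.Module):
    def __init__(self, eeg_dim=512, target_dim=768):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(eeg_dim, 1024), nn.GELU(),
                                  nn.Linear(1024, target_dim))

    def forward(self, eeg_feat):                 # eeg_feat: (batch, eeg_dim)
        return F.normalize(self.proj(eeg_feat), dim=-1)

def alignment_loss(eeg_emb, frozen_modality_emb, temperature=0.07):
    """Contrastive (InfoNCE) alignment against frozen modality embeddings."""
    target = F.normalize(frozen_modality_emb, dim=-1)
    logits = eeg_emb @ target.t() / temperature
    labels = torch.arange(eeg_emb.size(0), device=eeg_emb.device)
    return F.cross_entropy(logits, labels)
```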
arXiv Detail & Related papers (2024-12-13T16:27:54Z) - Graph Convolutional Network with Connectivity Uncertainty for EEG-based
Emotion Recognition [20.655367200006076]
This study introduces the distribution-based uncertainty method to represent spatial dependencies and temporal-spectral relativeness in EEG signals.
The graph mixup technique is employed to enhance latent connected edges and mitigate noisy label issues.
We evaluate our approach on two widely used datasets, namely SEED and SEEDIV, for emotion recognition tasks.
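As an illustration of the graph-mixup ingredient, the snippet below mixes adjacency matrices, node features, and soft labels with a Beta-sampled coefficient; the paper's exact mixup formulation may differ.

```python
# Illustrative mixup step for EEG graphs (a common mixup variant).
import torch

def graph_mixup(adj, feats, labels_onehot, alpha=0.2):
    """adj: (B, C, C), feats: (B, C, F), labels_onehot: (B, K)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(adj.size(0))
    mixed_adj = lam * adj + (1 - lam) * adj[perm]
    mixed_feats = lam * feats + (1 - lam) * feats[perm]
    mixed_labels = lam * labels_onehot + (1 - lam) * labels_onehot[perm]
    return mixed_adj, mixed_feats, mixed_labels
```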
arXiv Detail & Related papers (2023-10-22T03:47:11Z) - EOG Artifact Removal from Single and Multi-channel EEG Recordings
through the combination of Long Short-Term Memory Networks and Independent
Component Analysis [0.0]
We present a novel methodology that combines a long short-term memory (LSTM)-based neural network with ICA to address the challenge of EOG artifact removal from EEG signals.
Our approach aims to accomplish two primary objectives: 1) estimate the horizontal and vertical EOG signals from the contaminated EEG data, and 2) employ ICA to eliminate the estimated EOG signals from the EEG.
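The two-stage idea can be sketched as follows: an LSTM regresses the horizontal and vertical EOG channels from the contaminated EEG, and ICA components that correlate strongly with the estimated EOG are zeroed before reconstruction. Model sizes, the correlation criterion, and the threshold are assumptions.

```python
# Sketch: (1) LSTM estimates EOG from EEG, (2) ICA components correlated with
# the estimated EOG are zeroed. Sizes and thresholds are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import FastICA

class EOGEstimator(nn.Module):
    def __init__(self, n_eeg=64, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_eeg, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)          # horizontal + vertical EOG

    def forward(self, eeg):                      # eeg: (batch, time, channels)
        h, _ = self.lstm(eeg)
        return self.out(h)                       # (batch, time, 2)

def remove_eog(eeg, eog_est, n_components=20, corr_thresh=0.7):
    """eeg: (time, channels) array; eog_est: (time, 2) array. Returns cleaned EEG."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg)             # (time, n_components)
    for k in range(sources.shape[1]):
        corr = max(abs(np.corrcoef(sources[:, k], eog_est[:, j])[0, 1]) for j in range(2))
        if corr > corr_thresh:
            sources[:, k] = 0.0                  # drop artifact-dominated component
    return ica.inverse_transform(sources)
```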
arXiv Detail & Related papers (2023-08-25T13:32:28Z) - Subject Independent Emotion Recognition using EEG Signals Employing
Attention Driven Neural Networks [2.76240219662896]
A novel deep learning framework capable of subject-independent emotion recognition is presented.
A convolutional neural network (CNN) with an attention framework is used to perform the task.
The proposed approach has been validated using publicly available datasets.
arXiv Detail & Related papers (2021-06-07T09:41:15Z) - ScalingNet: extracting features from raw EEG data for emotion
recognition [4.047737925426405]
We propose a novel convolutional layer that adaptively extracts effective, data-driven spectrogram-like features from raw EEG signals.
The proposed neural network architecture based on the scaling layer, referred to as ScalingNet, achieves state-of-the-art results on the established DEAP benchmark dataset.
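One way to approximate such a scaling-style layer is a single learnable 1-D kernel applied at several dilation rates, whose stacked responses form a scale-by-time, spectrogram-like map; the sketch below follows that approximation rather than the paper's exact formulation.

```python
# Approximation of a scaling-style layer: one learnable 1-D kernel at several
# dilation rates, stacked into a (scale x time) map. Sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScalingLayerSketch(nn.Module):
    def __init__(self, kernel_size=15, n_scales=8):
        super().__init__()
        self.kernel = nn.Parameter(torch.randn(1, 1, kernel_size) * 0.1)
        self.dilations = [2 ** s for s in range(n_scales)]

    def forward(self, x):                        # x: (batch, 1, time) raw EEG channel
        maps = []
        for d in self.dilations:
            pad = (self.kernel.size(-1) - 1) * d // 2     # keep output length fixed
            maps.append(F.conv1d(x, self.kernel, padding=pad, dilation=d))
        return torch.stack(maps, dim=2)          # (batch, 1, scales, time)
```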
arXiv Detail & Related papers (2021-02-07T08:54:27Z) - EEG-Inception: An Accurate and Robust End-to-End Neural Network for
EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification, taking raw EEG signals as input without requiring complex EEG signal preprocessing.
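For reference, a generic Inception-Time-style 1-D block of the kind such a backbone stacks looks roughly as follows; filter counts and kernel sizes are assumptions, not EEG-Inception's actual configuration.

```python
# Generic Inception-Time-style 1-D block: bottleneck conv feeding parallel
# convolutions of different kernel sizes plus a pooled branch, concatenated.
import torch
import torch.nn as nn

class InceptionBlock1D(nn.Module):
    def __init__(self, in_ch, n_filters=32, kernel_sizes=(9, 19, 39)):
        super().__init__()
        self.bottleneck = nn.Conv1d(in_ch, n_filters, 1, bias=False)
        self.convs = nn.ModuleList(
            [nn.Conv1d(n_filters, n_filters, k, padding=k // 2, bias=False)
             for k in kernel_sizes])
        self.pool_branch = nn.Sequential(
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Conv1d(in_ch, n_filters, 1, bias=False))
        self.bn = nn.BatchNorm1d(n_filters * (len(kernel_sizes) + 1))
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (batch, channels, time)
        z = self.bottleneck(x)
        branches = [conv(z) for conv in self.convs] + [self.pool_branch(x)]
        return self.act(self.bn(torch.cat(branches, dim=1)))
```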
arXiv Detail & Related papers (2021-01-24T19:03:10Z) - Uncovering the structure of clinical EEG signals with self-supervised
learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
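One pretext task used in this line of work is relative positioning: sample two EEG windows, label the pair as temporally close or far apart, and train an embedder plus a small head on that binary task. The sketch below is illustrative; window lengths and thresholds are arbitrary assumptions.

```python
# Sketch of a relative-positioning pretext task for label-free EEG pretraining.
import torch
import torch.nn as nn

class RelativePositioningHead(nn.Module):
    """Scores a pair of window embeddings produced by any EEG encoder."""
    def __init__(self, emb_dim=100):
        super().__init__()
        self.lin = nn.Linear(emb_dim, 1)

    def forward(self, z1, z2):
        return self.lin(torch.abs(z1 - z2)).squeeze(-1)   # logit for "close"

def sample_pair(recording, win=250, tau_pos=500, tau_neg=2000):
    """recording: (channels, time). Returns two windows and a close/far label."""
    T = recording.size(1)
    while True:                                  # rejection-sample a valid pair
        t1 = int(torch.randint(0, T - win, (1,)))
        t2 = int(torch.randint(0, T - win, (1,)))
        if abs(t2 - t1) <= tau_pos:
            label = 1.0                          # temporally close -> positive
            break
        if abs(t2 - t1) >= tau_neg:
            label = 0.0                          # far apart -> negative
            break
    return recording[:, t1:t1 + win], recording[:, t2:t2 + win], label
```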
arXiv Detail & Related papers (2020-07-31T14:34:47Z) - ECG-DelNet: Delineation of Ambulatory Electrocardiograms with Mixed
Quality Labeling Using Neural Networks [69.25956542388653]
Deep learning (DL) algorithms are gaining traction in academic and industrial settings.
We demonstrate that DL can be successfully applied to low-interpretability tasks by embedding ECG detection and delineation into a segmentation framework.
The model was trained using PhysioNet's QT database, comprised of 105 ambulatory ECG recordings.
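Casting delineation as 1-D segmentation means predicting a class (background, P, QRS, T) for every sample; a generic encoder-decoder along the lines below illustrates the idea, though it is not the ECG-DelNet architecture and its layer widths and kernel sizes are assumptions.

```python
# Generic illustration of ECG delineation as 1-D segmentation (not ECG-DelNet).
import torch
import torch.nn as nn

class TinySegNet1D(nn.Module):
    def __init__(self, in_ch=1, n_classes=4, width=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(in_ch, width, 9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(width, 2 * width, 9, padding=4), nn.ReLU())
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="linear", align_corners=False),
            nn.Conv1d(2 * width, width, 9, padding=4), nn.ReLU(),
            nn.Conv1d(width, n_classes, 1))      # per-sample class logits

    def forward(self, ecg):                      # ecg: (batch, 1, time), even time
        return self.dec(self.enc(ecg))           # (batch, n_classes, time)
```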
arXiv Detail & Related papers (2020-05-11T16:29:12Z)