Sample Dominance Aware Framework via Non-Parametric Estimation for
Spontaneous Brain-Computer Interface
- URL: http://arxiv.org/abs/2311.07079v2
- Date: Wed, 15 Nov 2023 02:51:08 GMT
- Title: Sample Dominance Aware Framework via Non-Parametric Estimation for
Spontaneous Brain-Computer Interface
- Authors: Byeong-Hoo Lee, Byoung-Hee Kwon, and Seong-Whan Lee
- Abstract summary: Inconsistent EEG signals resulting from non-stationary characteristics can lead to poor performance.
In this study, we introduce the concept of sample dominance as a measure of EEG signal inconsistency.
We present a two-stage dominance score estimation technique that compensates for performance degradation caused by sample inconsistencies.
- Score: 27.077560296908423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has shown promise in decoding brain signals, such as
electroencephalogram (EEG), in the field of brain-computer interfaces (BCIs).
However, the non-stationary characteristics of EEG signals pose challenges for
training neural networks to acquire appropriate knowledge. Inconsistent EEG
signals resulting from these non-stationary characteristics can lead to poor
performance. Therefore, it is crucial to investigate and address sample
inconsistency to ensure robust performance in spontaneous BCIs. In this study,
we introduce the concept of sample dominance as a measure of EEG signal
inconsistency and propose a method to modulate its effect on network training.
We present a two-stage dominance score estimation technique that compensates
for performance degradation caused by sample inconsistencies. Our proposed
method utilizes non-parametric estimation to infer sample inconsistency and
assigns each sample a dominance score. This score is then aggregated with the
loss function during training to modulate the impact of sample inconsistency.
Furthermore, we design a curriculum learning approach that gradually increases
the influence of inconsistent signals during training to improve overall
performance. We evaluate our proposed method on a public spontaneous BCI
dataset. The experimental results highlight the importance of addressing
sample dominance for achieving robust performance in spontaneous BCIs.
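The pipeline the abstract describes (non-parametric inconsistency estimation, a per-sample dominance score, score-weighted loss, and a curriculum that gradually admits inconsistent samples) can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the Gaussian-kernel density estimate, the score formula, and the linear curriculum schedule are all illustrative choices.

```python
import numpy as np

def dominance_scores(features, labels, bandwidth=1.0):
    """Illustrative non-parametric (Gaussian kernel) dominance estimate.

    A sample whose features are dense among same-class samples but sparse
    among other-class samples gets a score near 1 (consistent); a sample
    sitting among the wrong class scores near 0 (inconsistent). The exact
    scoring rule is an assumption, not the paper's.
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    # Pairwise squared distances -> Gaussian kernel weights.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))
    np.fill_diagonal(k, 0.0)                      # exclude self-density
    same = (labels[:, None] == labels[None, :]).astype(float)
    same_density = (k * same).sum(1) / np.maximum(same.sum(1) - 1, 1)
    other_density = (k * (1 - same)).sum(1) / np.maximum((1 - same).sum(1), 1)
    return same_density / (same_density + other_density + 1e-12)

def curriculum_weight(score, epoch, total_epochs):
    """Gradually raise the influence of inconsistent (low-score) samples."""
    t = epoch / max(total_epochs - 1, 1)          # 0 -> 1 over training
    return score + t * (1.0 - score)              # all weights ramp to 1
```

In training, the score would be aggregated with the loss by weighting, e.g. `loss = (curriculum_weight(s, epoch, E) * per_sample_ce).mean()`, so inconsistent samples contribute little early on and fully by the final epochs.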
Related papers
- PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE is a self-supervised learning framework that enhances global feature representation of point cloud mask autoencoders.
We show that PseudoNeg-MAE achieves state-of-the-art performance on the ModelNet40 and ScanObjectNN datasets.
arXiv Detail & Related papers (2024-09-24T07:57:21Z)
- Hybrid Classification-Regression Adaptive Loss for Dense Object Detection [19.180514552400883]
We propose a Hybrid Classification-Regression Adaptive Loss, termed as HCRAL.
We introduce the Residual of Classification and IoU (RCI) module for cross-task supervision, addressing task inconsistencies, and the Conditioning Factor (CF) to focus on difficult-to-train samples within each task.
We also introduce a new strategy named Expanded Adaptive Training Sample Selection (EATSS) to provide additional samples that exhibit classification and regression inconsistencies.
arXiv Detail & Related papers (2024-08-30T10:31:39Z)
- Latent Alignment with Deep Set EEG Decoders [44.128689862889715]
We introduce the Latent Alignment method that won the Benchmarks for EEG Transfer Learning competition.
We present its formulation as a deep set applied on the set of trials from a given subject.
Our experimental results show that performing statistical distribution alignment at later stages in a deep learning model is beneficial to the classification accuracy.
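As a rough illustration of "statistical distribution alignment at a later stage", one can standardize the set of a subject's latent trial features and map them onto reference statistics. The choice of per-dimension mean/variance alignment here is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def latent_align(feats, ref_mean, ref_std, eps=1e-8):
    """Align a subject's latent trial features (n_trials, n_dims) to
    reference statistics: standardize across trials, then rescale.
    Applied at a late layer of the decoder, per the paper's finding."""
    mu = feats.mean(axis=0, keepdims=True)
    sd = feats.std(axis=0, keepdims=True)
    return (feats - mu) / (sd + eps) * ref_std + ref_mean
```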
arXiv Detail & Related papers (2023-11-29T12:40:45Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Dynamically Scaled Temperature in Self-Supervised Contrastive Learning [11.133502139934437]
We focus on improving the performance of InfoNCE loss in self-supervised learning by proposing a novel cosine similarity dependent temperature scaling function.
Experimental evidence shows that the proposed framework outperforms the contrastive loss-based SSL algorithms.
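A minimal sketch of a cosine-similarity dependent temperature inside an InfoNCE-style loss. The particular monotone mapping from similarity to temperature below is an illustrative assumption, not the paper's function.

```python
import numpy as np

def dynamic_tau(cos_sim, tau_min=0.1, tau_max=1.0):
    """Map a cosine similarity in [-1, 1] to a temperature in
    [tau_min, tau_max]. The linear mapping is an arbitrary
    illustrative choice; the paper defines its own schedule."""
    return tau_min + (tau_max - tau_min) * (cos_sim + 1.0) / 2.0

def info_nce(anchor, positive, negatives):
    """InfoNCE loss for one anchor, with the temperature chosen from
    the anchor-positive similarity instead of a fixed constant."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos_sim = cos(anchor, positive)
    tau = dynamic_tau(pos_sim)
    logits = np.array([pos_sim] + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                     # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))
```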
arXiv Detail & Related papers (2023-08-02T13:31:41Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Adaptive Spike-Like Representation of EEG Signals for Sleep Stages Scoring [6.644008481573341]
We propose an adaptive scheme to encode, filter and accumulate the input signals and the weight features by the half-Gaussian probabilities of signal intensities.
Experiments on the largest public dataset against state-of-the-art methods validate the effectiveness of our proposed method and reveal promising future directions.
arXiv Detail & Related papers (2022-04-02T11:21:49Z)
- Towards Balanced Learning for Instance Recognition [149.76724446376977]
We propose Libra R-CNN, a framework towards balanced learning for instance recognition.
It integrates IoU-balanced sampling, balanced feature pyramid, and objective re-weighting, respectively for reducing the imbalance at sample, feature, and objective level.
arXiv Detail & Related papers (2021-08-23T13:40:45Z)
- Unsupervised neural adaptation model based on optimal transport for spoken language identification [54.96267179988487]
Due to the mismatch of statistical distributions of acoustic speech between training and testing sets, the performance of spoken language identification (SLID) could be drastically degraded.
We propose an unsupervised neural adaptation model to deal with the distribution mismatch problem for SLID.
arXiv Detail & Related papers (2020-12-24T07:37:19Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
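A generic sketch of credibility-weighted pseudo-label training: each sample's cross-entropy is scaled by the model's confidence in its assigned pseudo-label, so likely-noisy labels contribute less. The confidence-as-credibility rule is an assumption for illustration, not the paper's estimator.

```python
import numpy as np

def credibility_weighted_loss(probs, pseudo_labels):
    """probs: (n, n_classes) softmax outputs; pseudo_labels: assigned
    class indices. Returns the credibility-weighted mean cross-entropy."""
    probs = np.asarray(probs, dtype=float)
    idx = np.arange(len(probs))
    p = probs[idx, pseudo_labels]                 # prob of assigned label
    ce = -np.log(np.clip(p, 1e-12, 1.0))          # per-sample cross-entropy
    weights = p                                    # confidence as credibility
    return float((weights * ce).sum() / weights.sum())
```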
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.