Improving Generalization of Drowsiness State Classification by
Domain-Specific Normalization
- URL: http://arxiv.org/abs/2312.09461v1
- Date: Wed, 15 Nov 2023 02:49:48 GMT
- Title: Improving Generalization of Drowsiness State Classification by
Domain-Specific Normalization
- Authors: Dong-Young Kim, Dong-Kyun Han, Seo-Hyeon Park, Geun-Deok Jang, and
Seong-Whan Lee
- Abstract summary: Abnormal driver states, particularly drowsiness, have been major concerns for road safety, emphasizing the importance of accurate drowsiness detection to prevent accidents.
Electroencephalogram (EEG) signals are recognized for their effectiveness in monitoring a driver's mental state through brain activity.
The challenge lies in the requirement for prior calibration due to the variation of EEG signals among and within individuals.
We propose a practical framework for classifying driver drowsiness states to improve accessibility and convenience.
- Score: 23.972427172296207
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Abnormal driver states, particularly drowsiness, have been major
concerns for road safety, emphasizing the importance of accurate drowsiness
detection to prevent accidents. Electroencephalogram (EEG) signals are
recognized for their effectiveness in monitoring a driver's mental state
through brain activity. However, the challenge lies in the requirement for prior
calibration due to the variation of EEG signals among and within individuals.
The necessity of calibration has made the brain-computer interface (BCI) less
accessible. We propose a practical generalized framework for classifying driver
drowsiness states to improve accessibility and convenience. We separate the
normalization process for each driver, treating them as individual domains. The
goal of developing a general model is similar to that of domain generalization.
The framework considers the statistics of each domain separately since they
vary among domains. We experimented with various normalization methods to
enhance the ability to generalize across subjects, i.e., the model's
generalization performance on unseen domains. The experiments showed that
applying individual domain-specific normalization yielded an outstanding
improvement in generalizability. Furthermore, our framework demonstrates its
potential to improve accessibility by removing the need for calibration in BCI
applications.
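The core idea of the abstract, treating each driver as a separate domain and normalizing with that domain's own statistics, can be illustrated with a minimal sketch. The code below is only an assumption-laden illustration: the array shapes, channel-wise z-scoring, and function names are chosen for the example and are not taken from the paper.

```python
import numpy as np


def domain_specific_normalize(trials: np.ndarray) -> np.ndarray:
    """Z-score EEG trials using statistics computed only from one subject (domain).

    trials: array of shape (n_trials, n_channels, n_samples) for a single driver.
    Returns the trials normalized per channel with that driver's own mean and std.
    """
    # Statistics are computed within the domain (one driver) and never pooled
    # across drivers, so an unseen subject can be normalized the same way
    # without any calibration data shared with the training subjects.
    mean = trials.mean(axis=(0, 2), keepdims=True)        # per-channel mean
    std = trials.std(axis=(0, 2), keepdims=True) + 1e-8   # per-channel std
    return (trials - mean) / std


# Hypothetical usage: normalize each training subject separately, then pool the
# normalized trials to train a single generalized classifier.
rng = np.random.default_rng(0)
subjects = {f"driver_{i}": rng.normal(size=(40, 32, 384)) for i in range(3)}
normalized = {name: domain_specific_normalize(x) for name, x in subjects.items()}
```

The design point is simply that the normalization statistics are domain-local; whether they are applied per trial, per session, or per subject is a choice the paper explores but the sketch does not resolve.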
Related papers
- Robust Distribution Alignment for Industrial Anomaly Detection under Distribution Shift [51.24522135151649]
Anomaly detection plays a crucial role in quality control for industrial applications.
Existing methods attempt to address domain shifts by training generalizable models.
Our proposed method demonstrates superior results compared with state-of-the-art anomaly detection and domain adaptation methods.
arXiv Detail & Related papers (2025-03-19T05:25:52Z)
- Calibration-Free Driver Drowsiness Classification based on Manifold-Level Augmentation [2.6248092118543567]
Monitoring drivers' drowsiness levels by electroencephalogram (EEG) may prevent road accidents.
However, calibration is required in advance because EEG signals vary between and within subjects.
This paper proposes a calibration-free framework for driver drowsiness state classification using manifold-level augmentation.
arXiv Detail & Related papers (2022-12-14T08:51:12Z)
- When Neural Networks Fail to Generalize? A Model Sensitivity Perspective [82.36758565781153]
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions.
This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG).
We empirically identify a model property that correlates strongly with generalization, which we coin "model sensitivity".
We propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies.
arXiv Detail & Related papers (2022-12-01T20:15:15Z)
- Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z)
- Calibrated Feature Decomposition for Generalizable Person Re-Identification [82.64133819313186]
The Calibrated Feature Decomposition (CFD) module focuses on improving the generalization capacity for person re-identification.
A calibrated-and-standardized batch normalization (CSBN) is designed to learn calibrated person representations.
arXiv Detail & Related papers (2021-11-27T17:12:43Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
Based on the transformation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- Adaptive Normalized Representation Learning for Generalizable Face Anti-Spoofing [45.37463812739095]
Face anti-spoofing (FAS) based on domain generalization (DG) has drawn growing attention due to its robustness.
We propose a novel perspective of face anti-spoofing that focuses on the normalization selection in the feature extraction process.
arXiv Detail & Related papers (2021-08-05T15:04:33Z)
- Adversarially Adaptive Normalization for Single Domain Generalization [71.80587939738672]
We propose a generic normalization approach, adaptive standardization and rescaling normalization (ASR-Norm).
ASR-Norm learns both the standardization and rescaling statistics via neural networks; a hedged sketch of this idea appears after this list.
We show that ASR-Norm can bring consistent improvement to the state-of-the-art ADA approaches.
arXiv Detail & Related papers (2021-06-01T23:58:23Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
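As referenced in the Adversarially Adaptive Normalization entry above, the following is a loose PyTorch sketch of what "learning the standardization and rescaling statistics via neural networks" might look like. The layer name, network shapes, activations, choice of which observed statistic feeds which predictor, and the EEG-style (batch, channels, time) input layout are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AdaptiveStandardizationRescaling(nn.Module):
    """Sketch in the spirit of ASR-Norm: instead of normalizing with fixed batch
    statistics and fixed affine parameters, small networks predict the
    standardization statistics (mean, std) and the rescaling parameters
    (gamma, beta) from the observed feature statistics."""

    def __init__(self, num_channels: int, hidden: int = 16):
        super().__init__()
        # Hypothetical predictor networks; architectures are assumptions.
        self.mean_net = nn.Sequential(nn.Linear(num_channels, hidden), nn.ReLU(),
                                      nn.Linear(hidden, num_channels))
        self.std_net = nn.Sequential(nn.Linear(num_channels, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_channels), nn.Softplus())
        self.gamma_net = nn.Sequential(nn.Linear(num_channels, hidden), nn.ReLU(),
                                       nn.Linear(hidden, num_channels), nn.Sigmoid())
        self.beta_net = nn.Sequential(nn.Linear(num_channels, hidden), nn.ReLU(),
                                      nn.Linear(hidden, num_channels), nn.Tanh())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. a batch of EEG feature maps.
        obs_mean = x.mean(dim=(0, 2))            # observed per-channel mean
        obs_std = x.std(dim=(0, 2)) + 1e-5       # observed per-channel std
        # Standardization statistics are predicted rather than used directly.
        mean = self.mean_net(obs_mean).view(1, -1, 1)
        std = (self.std_net(obs_std) + 1e-5).view(1, -1, 1)
        x_hat = (x - mean) / std
        # Rescaling parameters are likewise predicted instead of being fixed
        # learnable gamma/beta; feeding obs_std/obs_mean here is arbitrary.
        gamma = self.gamma_net(obs_std).view(1, -1, 1)
        beta = self.beta_net(obs_mean).view(1, -1, 1)
        return gamma * x_hat + beta


# Hypothetical usage on a batch of 8 trials with 32 channels and 384 samples.
layer = AdaptiveStandardizationRescaling(num_channels=32)
out = layer(torch.randn(8, 32, 384))
```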