Comparison of Lightweight Methods for Vehicle Dynamics-Based Driver Drowsiness Detection
- URL: http://arxiv.org/abs/2506.07014v1
- Date: Sun, 08 Jun 2025 06:32:34 GMT
- Title: Comparison of Lightweight Methods for Vehicle Dynamics-Based Driver Drowsiness Detection
- Authors: Yutaro Nakagama, Daisuke Ishii, Kazuki Yoshizoe
- Abstract summary: Driver drowsiness detection (DDD) prevents road accidents caused by driver fatigue. Vehicle dynamics-based DDD has been proposed as an approach that is both economical and high-performing.
- Score: 1.9362572113472585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Driver drowsiness detection (DDD) prevents road accidents caused by driver fatigue. Vehicle dynamics-based DDD has been proposed as an approach that is both economical and high-performing. However, there are concerns about the reliability of the reported performance metrics and the reproducibility of many existing methods. For instance, some previous studies appear to have a data leakage issue between training and test datasets, and many do not openly provide the datasets they used. To address this, this paper compares the performance of representative vehicle dynamics-based DDD methods under a transparent and fair framework that uses a public dataset. We first develop a framework for extracting features from an open dataset by Aygun et al. and performing DDD with lightweight ML models; the framework is carefully designed to support a variety of configurations. Second, we implement three existing representative methods and a concise random forest (RF)-based method within the framework. Finally, we report the results of experiments that verify reproducibility and clarify DDD performance based on common metrics. Among the evaluated methods, the RF-based method achieved the highest accuracy of 88%. Our findings highlight the issues inherent in DDD methods developed in a non-standard manner and demonstrate that a properly implemented method can achieve high performance.
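A rough sketch of the kind of lightweight pipeline the abstract describes is shown below: a single vehicle dynamics signal is split into windows, simple statistics are computed per window, and a random forest is trained on the resulting features. The window length, the chosen statistics, and the synthetic steering signal and labels are illustrative assumptions, not the paper's actual configuration or the Aygun et al. dataset.

```python
# Minimal sketch of a vehicle dynamics-based DDD pipeline with a random forest.
# Assumptions (not from the paper): the window length, the statistical features,
# and the synthetic signal/labels below are placeholders for real driving data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def window_features(signal: np.ndarray, win: int = 600) -> np.ndarray:
    """Split a 1-D driving signal into windows and compute simple statistics."""
    n = len(signal) // win
    windows = signal[: n * win].reshape(n, win)
    return np.column_stack([
        windows.mean(axis=1),                            # per-window mean
        windows.std(axis=1),                             # variability (e.g., steering instability)
        np.abs(np.diff(windows, axis=1)).mean(axis=1),   # mean absolute change
    ])

rng = np.random.default_rng(0)
steering = rng.normal(0.0, 1.0, size=60_000)   # placeholder steering-angle signal
X = window_features(steering)                  # one feature row per window
y = rng.integers(0, 2, size=len(X))            # placeholder alert/drowsy labels

# Note: to avoid the train/test leakage the paper warns about, real experiments
# should split by driver or session; a random split is used here only for brevity.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```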
Related papers
- Daunce: Data Attribution through Uncertainty Estimation [7.809316632545256]
Training data attribution (TDA) methods aim to identify which training examples most influence a model's predictions on specific test data. Gradient-based TDA methods rely on gradients and second-order information, limiting their applicability at scale. We introduce Daunce, a simple yet effective data attribution approach through uncertainty estimation.
arXiv Detail & Related papers (2025-05-29T08:08:38Z)
- Enhanced OoD Detection through Cross-Modal Alignment of Multi-Modal Representations [2.992602379681373]
We show that multi-modal fine-tuning can achieve notable OoDD performance. We propose a training objective that enhances cross-modal alignment by regularizing the distances between image and text embeddings of ID data.
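Below is a hypothetical illustration of the kind of regularizer this summary describes: an L2 penalty on the distance between paired image and text embeddings, added to a CLIP-style contrastive loss. The loss form, the weighting, and all names are assumptions based only on the summary, not the paper's actual objective.

```python
# Hypothetical cross-modal alignment regularizer on top of a CLIP-style loss.
import torch
import torch.nn.functional as F

def alignment_regularizer(img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between L2-normalized paired image/text embeddings."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    return ((img - txt) ** 2).sum(dim=-1).mean()

def clip_style_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(len(img))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

img_emb = torch.randn(8, 512, requires_grad=True)   # placeholder image embeddings
txt_emb = torch.randn(8, 512, requires_grad=True)   # placeholder text embeddings
lam = 0.1                                            # assumed regularization weight
loss = clip_style_loss(img_emb, txt_emb) + lam * alignment_regularizer(img_emb, txt_emb)
loss.backward()
```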
arXiv Detail & Related papers (2025-03-24T16:00:21Z)
- Cross-Camera Distracted Driver Classification through Feature Disentanglement and Contrastive Learning [13.613407983544427]
Driver Behavior Monitoring Network (DBMNet) relies on a lightweight backbone and integrates a disentanglement module to discard camera view information. DBMNet achieves an improvement of 7% in Top-1 accuracy compared to existing approaches.
arXiv Detail & Related papers (2024-11-20T10:27:12Z)
- SUDS: A Strategy for Unsupervised Drift Sampling [0.5437605013181142]
Supervised machine learning encounters concept drift, where the data distribution changes over time, degrading performance.
We present the Strategy for Unsupervised Drift Sampling (SUDS), a novel method that selects homogeneous samples for retraining using existing drift detection algorithms.
Our results demonstrate the efficacy of SUDS in optimizing labeled data use in dynamic environments.
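As a loose illustration of the idea in this summary, the sketch below detects a drift point in a stream and keeps only the homogeneous post-drift samples for retraining. The simple mean-shift detector and its threshold are placeholders, not the SUDS algorithm or the drift detectors it builds on.

```python
# Hypothetical drift-aware sample selection: detect a shift, retrain on post-drift data.
import numpy as np

def detect_drift(stream: np.ndarray, win: int = 200, threshold: float = 0.5) -> int | None:
    """Return the first index where the windowed mean shifts by more than `threshold`."""
    baseline = stream[:win].mean()
    for i in range(win, len(stream) - win, win):
        if abs(stream[i:i + win].mean() - baseline) > threshold:
            return i
    return None

rng = np.random.default_rng(0)
# Placeholder stream whose distribution shifts halfway through.
stream = np.concatenate([rng.normal(0.0, 1.0, 1_000), rng.normal(1.5, 1.0, 1_000)])

drift_at = detect_drift(stream)
if drift_at is not None:
    retrain_pool = stream[drift_at:]   # homogeneous samples from the new regime
    print(f"drift at index {drift_at}; {len(retrain_pool)} samples kept for retraining")
```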
arXiv Detail & Related papers (2024-11-05T10:55:29Z)
- TDDBench: A Benchmark for Training data detection [42.49625153675721]
Training Data Detection (TDD) is a task aimed at determining whether a specific data instance was used to train a machine learning model.
There is no comprehensive benchmark to thoroughly evaluate the effectiveness of TDD methods.
We benchmark 21 different TDD methods across four detection paradigms and evaluate their performance from five perspectives.
arXiv Detail & Related papers (2024-11-05T05:48:48Z)
- Distribution Discrepancy and Feature Heterogeneity for Active 3D Object Detection [18.285299184361598]
LiDAR-based 3D object detection is a critical technology for the development of autonomous driving and robotics.
We propose a novel and effective active learning (AL) method called Distribution Discrepancy and Feature Heterogeneity (DDFH).
It simultaneously considers geometric features and model embeddings, assessing information from both the instance-level and frame-level perspectives.
arXiv Detail & Related papers (2024-09-09T08:26:11Z)
- Simple Ingredients for Offline Reinforcement Learning [86.1988266277766]
Offline reinforcement learning algorithms have proven effective on datasets highly connected to the target downstream task.
We show that existing methods struggle with diverse data: their performance considerably deteriorates as data collected for related but different tasks is simply added to the offline buffer.
We show that scale, more than algorithmic considerations, is the key factor influencing performance.
arXiv Detail & Related papers (2024-03-19T18:57:53Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection [67.49587673594276]
We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel asymmetric mechanism that is well-suited to enhance the input and mitigate the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
arXiv Detail & Related papers (2022-11-21T08:45:08Z)
- Raising the Bar on the Evaluation of Out-of-Distribution Detection [88.70479625837152]
We define two categories of OoD data using the subtly different concepts of perceptual/visual and semantic similarity to in-distribution (iD) data.
We propose a GAN-based framework for generating OoD samples from each of these two categories, given an iD dataset.
We show that state-of-the-art OoD detection methods which perform exceedingly well on conventional benchmarks are significantly less robust on our proposed benchmark.
arXiv Detail & Related papers (2022-09-24T08:48:36Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Distributed Dynamic Safe Screening Algorithms for Sparse Regularization [73.85961005970222]
We propose a new distributed dynamic safe screening (DDSS) method for sparsity-regularized models and apply it to shared-memory and distributed-memory architectures, respectively.
We prove that the proposed method achieves a linear convergence rate with lower overall complexity and can eliminate almost all inactive features in a finite number of iterations almost surely.
arXiv Detail & Related papers (2022-04-23T02:45:55Z)