Multi-Attention Fusion Drowsy Driving Detection Model
- URL: http://arxiv.org/abs/2312.17052v1
- Date: Thu, 28 Dec 2023 14:53:32 GMT
- Title: Multi-Attention Fusion Drowsy Driving Detection Model
- Authors: Shulei QU, Zhenguo Gao, Xiaoxiao Wu, Yuanyuan Qiu
- Abstract summary: We introduce a novel approach called the Multi-Attention Fusion Drowsy Driving Detection Model (MAF).
Our proposed model achieves an impressive driver drowsiness detection accuracy of 96.8%.
- Score: 1.2043574473965317
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Drowsy driving is a major contributor to traffic accidents, and the
implementation of driver drowsiness detection systems has been proven to
significantly reduce the occurrence of such accidents. Despite the development
of numerous drowsy driving detection algorithms, many of them impose specific
prerequisites such as the availability of complete facial images, optimal
lighting conditions, and the use of RGB images. In our study, we introduce a
novel approach called the Multi-Attention Fusion Drowsy Driving Detection Model
(MAF). MAF is aimed at significantly enhancing classification performance,
especially in scenarios involving partial facial occlusion and low lighting
conditions. It accomplishes this by capitalizing on the local feature
extraction capabilities provided by multi-attention fusion, thereby enhancing
the algorithm's overall robustness. To enhance our dataset, we collected
real-world data that includes both occluded and unoccluded faces captured under
nighttime and daytime lighting conditions. We conducted a comprehensive series
of experiments using both publicly available datasets and our self-built data.
The results of these experiments demonstrate that our proposed model achieves
an impressive driver drowsiness detection accuracy of 96.8%.
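The abstract describes the architecture only at a high level and no code accompanies it, so the following is a minimal pure-Python sketch of the general multi-attention-fusion idea: one attention head per local facial region, with occluded regions skipped and the surviving head outputs fused by averaging. All function names, region names, and feature values here are hypothetical illustrations, not taken from the paper.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention over a list of feature vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

def multi_attention_fusion(region_features):
    """Run one attention pass per facial region, then fuse by averaging.

    region_features maps a region name (eyes, mouth, ...) to a list of
    feature vectors. An occluded region contributes an empty list and its
    head is simply skipped -- one plausible way a model like MAF can stay
    robust to partial facial occlusion.
    """
    head_outputs = []
    for region, feats in region_features.items():
        if not feats:            # occluded region: skip this head
            continue
        query = feats[0]         # hypothetical choice: first vector as query
        head_outputs.append(attention(query, feats, feats))
    # Fuse the heads by element-wise mean.
    dim = len(head_outputs[0])
    return [sum(h[i] for h in head_outputs) / len(head_outputs)
            for i in range(dim)]

features = {
    "eyes":  [[0.9, 0.1], [0.8, 0.2]],
    "mouth": [[0.3, 0.7]],
    "head":  [],                 # occluded in this frame
}
fused = multi_attention_fusion(features)
print(len(fused))  # fused descriptor keeps the input dimensionality
```

In a real model the per-region features would come from a CNN backbone and the fusion would feed a classifier; the sketch only shows the attention-and-fuse control flow.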
Related papers
- Cross-Camera Distracted Driver Classification through Feature Disentanglement and Contrastive Learning [13.613407983544427]
We introduce a robust model designed to withstand changes in camera position within the vehicle.
Our Driver Behavior Monitoring Network (DBMNet) relies on a lightweight backbone and integrates a disentanglement module.
Experiments conducted on the daytime and nighttime subsets of the 100-Driver dataset validate the effectiveness of our approach.
arXiv Detail & Related papers (2024-11-20T10:27:12Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Radar Enlighten the Dark: Enhancing Low-Visibility Perception for Automated Vehicles with Camera-Radar Fusion [8.946655323517094]
We propose a novel transformer-based 3D object detection model "REDFormer" to tackle low visibility conditions.
Our model outperforms state-of-the-art (SOTA) models on classification and detection accuracy.
arXiv Detail & Related papers (2023-05-27T00:47:39Z)
- An Outlier Exposure Approach to Improve Visual Anomaly Detection Performance for Mobile Robots [76.36017224414523]
We consider the problem of building visual anomaly detection systems for mobile robots.
Standard anomaly detection models are trained using large datasets composed only of non-anomalous data.
We tackle the problem of exploiting these data to improve the performance of a Real-NVP anomaly detection model.
arXiv Detail & Related papers (2022-09-20T15:18:13Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Does Thermal data make the detection systems more reliable? [1.2891210250935146]
We propose a comprehensive detection system based on a multimodal-collaborative framework.
This framework learns from both RGB (from visual cameras) and thermal (from Infrared cameras) data.
Our empirical results show that while the improvement in accuracy is nominal, the value lies in challenging and extremely difficult edge cases.
arXiv Detail & Related papers (2021-09-27T16:57:09Z)
- Comparison of Object Detection Algorithms Using Video and Thermal Images Collected from a UAS Platform: An Application of Drones in Traffic Management [2.9932638148627104]
This study explores real-time vehicle detection algorithms on both visual and infrared cameras.
Red Green Blue (RGB) videos and thermal images were collected from a UAS platform along highways in the Tampa, Florida, area.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
- DAE: Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called the Discriminatory Auto-Encoder (DAE).
It uses a regular LSTM-based auto-encoder as its baseline but with several decoders, each receiving data from a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection.
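The DAE entry describes a shared encoder with one decoder per flight phase, scored by reconstruction error. The following is a toy pure-Python sketch of that routing idea only; the real DAE uses LSTM layers, and all weights, phase names, and numbers here are hypothetical.

```python
def encode(x):
    # Toy shared "encoder": 2-D input -> 1-D latent (hypothetical weights).
    return [0.5 * x[0] + 0.5 * x[1]]

DECODERS = {
    # Toy per-phase "decoders": latent -> 2-D reconstruction.
    "climb":  lambda z: [z[0] * 1.0, z[0] * 1.0],
    "cruise": lambda z: [z[0] * 1.2, z[0] * 0.8],
}

def anomaly_score(x, phase):
    # Route the sample to the decoder of its flight phase and score it
    # by mean squared reconstruction error.
    recon = DECODERS[phase](encode(x))
    return sum((a - b) ** 2 for a, b in zip(x, recon)) / len(x)

normal  = anomaly_score([1.2, 0.8], "cruise")  # fits the cruise decoder well
anomaly = anomaly_score([2.0, 0.1], "cruise")  # does not
print(normal < anomaly)
```

A sample that the phase-matched decoder reconstructs poorly gets a high score, which is the anomaly signal the entry refers to.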
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Anomalous Motion Detection on Highway Using Deep Learning [14.617786106427834]
This paper presents a new anomaly detection dataset - the Highway Traffic Anomaly (HTA) dataset.
We evaluate state-of-the-art deep learning anomaly detection models and propose novel variations to these methods.
arXiv Detail & Related papers (2020-06-15T05:40:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.