Vision Transformers and YoloV5 based Driver Drowsiness Detection Framework
- URL: http://arxiv.org/abs/2209.01401v1
- Date: Sat, 3 Sep 2022 11:37:41 GMT
- Title: Vision Transformers and YoloV5 based Driver Drowsiness Detection Framework
- Authors: Ghanta Sai Krishna, Kundrapu Supriya, Jai Vardhan and Mallikharjuna Rao K
- Abstract summary: This paper introduces a novel framework based on vision transformers and YoloV5 architectures for driver drowsiness recognition.
A custom YoloV5 pre-trained architecture is proposed for face extraction with the aim of extracting the Region of Interest (ROI).
For further evaluation, the proposed framework was tested on a custom dataset of 39 participants under various lighting conditions and achieved 95.5% accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human drivers have distinct driving techniques, knowledge, and sentiments due
to unique driving traits. Driver drowsiness has been a serious issue
endangering road safety; therefore, it is essential to design an effective
drowsiness detection algorithm to prevent road accidents. Various research
efforts have approached the problem of detecting anomalous human driver
behaviour by examining the frontal face of the driver and automobile dynamics via
computer vision techniques. Still, conventional methods cannot capture
complicated driver behaviour features. However, with the advent of deep
learning architectures, a substantial amount of research has also been carried out
to analyze and recognize driver drowsiness using neural network algorithms.
This paper introduces a novel framework based on vision transformers and YoloV5
architectures for driver drowsiness recognition. A custom YoloV5 pre-trained
architecture is proposed for face extraction with the aim of extracting the Region
of Interest (ROI). Owing to the limitations of previous architectures, this
paper introduces vision transformers for binary image classification, which are
trained and validated on the public dataset UTA-RLDD. The model achieved
96.2% and 97.4% as its training and validation accuracies, respectively. For
further evaluation, the proposed framework was tested on a custom dataset of 39
participants under various lighting conditions and achieved 95.5% accuracy. The
conducted experiments revealed the significant potential of our framework
for practical applications in smart transportation systems.
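The framework's two-stage design (a YoloV5-style detector that localizes the face, then a vision transformer that classifies the crop as drowsy or alert) can be sketched in outline. The crop-and-clamp step below is a generic illustration of the ROI-extraction stage, not the paper's actual code; the function name, padding ratio, and toy frame are all hypothetical assumptions.

```python
# Hypothetical sketch of the ROI-extraction stage: a face detector (e.g. a
# YOLOv5-style model) emits a bounding box, and the padded crop below would
# become the input to the binary drowsy/alert classifier. Names and the
# padding value are illustrative assumptions, not the paper's implementation.

def crop_roi(frame, box, pad=0.1):
    """Crop a padded face region from a frame (a list of rows of pixels).

    box is (x1, y1, x2, y2) in pixel coordinates, as a detector would emit.
    """
    h, w = len(frame), len(frame[0])
    x1, y1, x2, y2 = box
    # Expand the box by `pad` on each side so the classifier sees context
    # (eyes, eyelids, mouth) around the tight detection.
    dx = int((x2 - x1) * pad)
    dy = int((y2 - y1) * pad)
    # Clamp to image bounds to avoid index errors near the frame edge.
    x1, y1 = max(0, x1 - dx), max(0, y1 - dy)
    x2, y2 = min(w, x2 + dx), min(h, y2 + dy)
    return [row[x1:x2] for row in frame[y1:y2]]

# Toy 8x8 "frame" of (row, col) markers with a detection in the middle.
frame = [[(r, c) for c in range(8)] for r in range(8)]
roi = crop_roi(frame, (2, 2, 6, 6))
```

In the actual framework, the resulting crop would be resized and fed to the vision transformer for binary classification.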
Related papers
- VigilEye -- Artificial Intelligence-based Real-time Driver Drowsiness Detection [0.5549794481031468]
This study presents a novel driver drowsiness detection system that combines deep learning techniques with the OpenCV framework.
The system uses facial landmarks extracted from the driver's face as input to Convolutional Neural Networks trained to recognise drowsiness patterns.
The proposed system has the potential to enhance road safety by providing timely alerts to prevent accidents caused by driver fatigue.
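The summary above does not state which drowsiness cue the CNN learns from the landmarks; a common landmark-based cue in this literature is the eye aspect ratio (EAR), which drops toward zero as the eyelids close. The sketch below is a generic illustration of that cue, not VigilEye's actual method.

```python
import math

# Generic eye-aspect-ratio (EAR) computation over six landmarks around one
# eye, in the usual 68-point ordering (p1/p4 are the horizontal corners,
# p2/p6 and p3/p5 the vertical pairs). Shown only as an illustration of a
# landmark-based drowsiness cue; the abstract does not confirm VigilEye
# uses this exact feature.

def eye_aspect_ratio(eye):
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)  # eyelid openness
    horizontal = math.dist(p1, p4)                    # eye width
    return vertical / (2.0 * horizontal)

# An open eye has a noticeably higher EAR than a nearly closed one.
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.3), (3, 0.3), (4, 0), (3, -0.3), (1, -0.3)]
```

A system would typically flag drowsiness when the EAR stays below a threshold for a sustained number of frames, rather than on a single low reading.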
arXiv Detail & Related papers (2024-06-21T20:53:49Z)
- Enhancing Road Safety: Real-Time Detection of Driver Distraction through Convolutional Neural Networks [0.0]
This study seeks to identify the most efficient model for real-time detection of driver distractions.
The ultimate aim is to incorporate the findings into vehicle safety systems, significantly boosting their capability to prevent accidents triggered by inattention.
arXiv Detail & Related papers (2024-05-28T03:34:55Z)
- Improving automatic detection of driver fatigue and distraction using machine learning [0.0]
Driver fatigue and distracted driving are important factors in traffic accidents.
We present techniques for simultaneously detecting fatigue and distracted driving behaviors using vision-based and machine learning-based approaches.
arXiv Detail & Related papers (2024-01-04T06:33:46Z) - DRUformer: Enhancing the driving scene Important object detection with
driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries, contributing to over 50 million deaths up to 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z) - Smart City Transportation: Deep Learning Ensemble Approach for Traffic
Accident Detection [0.0]
We introduce the I3D-CONVLSTM2D model architecture, a lightweight solution tailored explicitly for accident detection in smart city traffic surveillance systems.
Our experimental study's empirical analysis underscores our approach's efficacy, with the I3D-CONVLSTM2D RGB + Optical-Flow (Trainable) model outperforming its counterparts, achieving an impressive 87% Mean Average Precision (MAP).
Our research illuminates the path towards a sophisticated vision-based accident detection system primed for real-time integration into edge IoT devices within smart urban infrastructures.
arXiv Detail & Related papers (2023-10-16T03:47:08Z) - FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z) - Improving Robustness of Learning-based Autonomous Steering Using
Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm w.r.t. varying quality in the image input for autonomous driving.
Using the results of sensitivity analysis, we propose an algorithm to improve the overall performance of the task of "learning to steer".
arXiv Detail & Related papers (2021-02-26T02:08:07Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Driver Intention Anticipation Based on In-Cabin and Driving Scene
Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction with the accuracy of 83.98% and F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z) - Improved YOLOv3 Object Classification in Intelligent Transportation
System [29.002873450422083]
An algorithm based on YOLOv3 is proposed to realize the detection and classification of vehicles, drivers, and people on the highway.
The model has a good performance and is robust to road blocking, different attitudes, and extreme lighting.
arXiv Detail & Related papers (2020-04-08T11:45:13Z)