Keep Your AI-es on the Road: Tackling Distracted Driver Detection with
Convolutional Neural Networks and Targeted Data Augmentation
- URL: http://arxiv.org/abs/2006.10955v2
- Date: Wed, 24 Jun 2020 01:05:55 GMT
- Title: Keep Your AI-es on the Road: Tackling Distracted Driver Detection with
Convolutional Neural Networks and Targeted Data Augmentation
- Authors: Nikka Mofid, Jasmine Bayrooti, Shreya Ravi
- Abstract summary: Distracted driving is one of the leading causes of motor accidents and deaths in the world.
In our study, we aim to build a robust multi-class classifier to detect and identify different forms of driver inattention.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: According to the World Health Organization, distracted driving is one of the
leading causes of motor accidents and deaths in the world. In our study, we
tackle the problem of distracted driving by aiming to build a robust
multi-class classifier to detect and identify different forms of driver
inattention using the State Farm Distracted Driving Dataset. We utilize
combinations of pretrained image classification models, classical data
augmentation, OpenCV-based image preprocessing, and skin segmentation
augmentation approaches. Our best performing model combines several
augmentation techniques, including skin segmentation, facial blurring, and
classical augmentation techniques. This model achieves an approximately 15%
increase in F1 score over the baseline, demonstrating the promise of these
techniques for enhancing the power of neural networks on the task of distracted
driver detection.
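As a rough illustration of the targeted augmentations named above, the sketch below shows one way skin segmentation, facial blurring, and a classical augmentation could be combined with OpenCV. The HSV skin-tone thresholds, the Haar-cascade face detector, and the jitter parameters are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of the targeted augmentations; all thresholds and detector
# choices below are assumptions for illustration, not the paper's settings.
import cv2
import numpy as np

def skin_segment(bgr_image: np.ndarray) -> np.ndarray:
    """Keep only skin-colored pixels (hands, face); black out the rest."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Rough skin-tone range in HSV; tune per dataset (assumed values).
    lower, upper = np.array([0, 40, 60]), np.array([25, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

def blur_faces(bgr_image: np.ndarray) -> np.ndarray:
    """Gaussian-blur detected face regions to suppress identity cues."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    out = bgr_image.copy()
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        out[y:y + h, x:x + w] = cv2.GaussianBlur(
            out[y:y + h, x:x + w], (31, 31), 0)
    return out

def classical_augment(bgr_image: np.ndarray) -> np.ndarray:
    """Classical augmentation: small random rotation plus brightness jitter."""
    h, w = bgr_image.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), np.random.uniform(-10, 10), 1.0)
    img = cv2.warpAffine(bgr_image, rot, (w, h))
    return cv2.convertScaleAbs(img, alpha=1.0, beta=np.random.uniform(-20, 20))

# One possible ordering when generating an augmented training frame:
# image = cv2.imread("driver_frame.jpg")   # hypothetical State Farm image
# augmented = classical_augment(skin_segment(blur_faces(image)))
```

Augmented variants like these would then be fed, alongside the original frames, to a pretrained image classifier fine-tuned on the ten State Farm distraction classes.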
Related papers
- RainSD: Rain Style Diversification Module for Image Synthesis
Enhancement using Feature-Level Style Distribution [5.500457283114346]
This paper presents a synthetic road dataset with sensor blockage, generated from the real road dataset BDD100K.
Using this dataset, the degradation of diverse multi-task networks for autonomous driving has been thoroughly evaluated and analyzed.
The tendency of performance degradation in deep neural network-based perception systems for autonomous vehicles has been analyzed in depth.
arXiv Detail & Related papers (2023-12-31T11:30:42Z) - G-MEMP: Gaze-Enhanced Multimodal Ego-Motion Prediction in Driving [71.9040410238973]
We focus on inferring the ego trajectory of a driver's vehicle using their gaze data.
Next, we develop G-MEMP, a novel multimodal ego-trajectory prediction network that combines GPS and video input with gaze data.
The results show that G-MEMP significantly outperforms state-of-the-art methods in both benchmarks.
arXiv Detail & Related papers (2023-12-13T23:06:30Z) - A Novel Driver Distraction Behavior Detection Method Based on
Self-supervised Learning with Masked Image Modeling [5.1680226874942985]
Driver distraction causes a significant number of traffic accidents every year, resulting in economic losses and casualties.
Driver distraction detection primarily relies on traditional convolutional neural networks (CNNs) and supervised learning methods.
This paper proposes a new self-supervised learning method based on masked image modeling for driver distraction behavior detection.
arXiv Detail & Related papers (2023-06-01T10:53:32Z) - FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Driving experience is largely non-objective and therefore difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN and Transformer features extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
Federated learning empowered connected autonomous vehicle (FLCAV) frameworks have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Collision Detection: An Improved Deep Learning Approach Using SENet and
ResNext [6.736699393205048]
In this article, a deep-learning-based model comprising a ResNext architecture with SENet blocks is proposed.
The proposed model outperforms the existing baseline models, achieving a ROC-AUC of 0.91 while using a significantly smaller proportion of the GTACrash synthetic data for training.
arXiv Detail & Related papers (2022-01-13T02:10:14Z) - A Computer Vision-Based Approach for Driver Distraction Recognition
using Deep Learning and Genetic Algorithm Based Ensemble [1.8907108368038217]
Distractions caused by mobile phones and other wireless devices pose a potential risk to road safety.
Our study aims to complement existing driver posture recognition techniques by improving performance on the driver distraction classification problem.
We present an approach using a genetic algorithm-based ensemble of six independent deep neural architectures, namely, AlexNet, VGG-16, EfficientNet B0, Vanilla CNN, Modified DenseNet, and InceptionV3 + BiLSTM.
arXiv Detail & Related papers (2021-07-28T13:39:31Z) - Driving Style Representation in Convolutional Recurrent Neural Network
Model of Driver Identification [8.007800530105191]
We present a deep neural network architecture, which we term D-CRNN, for building high-fidelity representations of driving style.
Using a CNN, we capture semantic patterns of driver behavior from trajectories.
We then find temporal dependencies between these semantic patterns using an RNN to encode driving style (a generic sketch of this CNN-then-RNN pattern appears after this list).
arXiv Detail & Related papers (2021-02-11T04:33:43Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Learning Accurate and Human-Like Driving using Semantic Maps and
Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and in a more human-like manner.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
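The D-CRNN entry above describes a convolutional stage that extracts semantic patterns from trajectories, followed by a recurrent stage that encodes their temporal dependencies. The sketch below is a generic, minimal instance of that CNN-then-RNN pattern; the layer sizes, the choice of a GRU, and the trajectory features are illustrative assumptions rather than the paper's actual architecture.

```python
# Generic CNN-then-RNN trajectory encoder (illustrative, not D-CRNN itself).
import torch
import torch.nn as nn

class ConvRecurrentEncoder(nn.Module):
    def __init__(self, in_features: int = 4, hidden: int = 64, num_classes: int = 10):
        super().__init__()
        # 1-D convolutions over time capture short-range behavioral patterns.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # A GRU models longer-range temporal dependencies between patterns.
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)  # e.g. driver-identity logits

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, time, features), e.g. speed, acceleration, heading, ...
        x = self.cnn(traj.transpose(1, 2)).transpose(1, 2)  # back to (B, T, 64)
        _, h = self.rnn(x)                                   # h: (1, B, hidden)
        return self.head(h.squeeze(0))                       # (B, num_classes)

# Example: a batch of 8 trajectories, 128 timesteps, 4 features each.
# logits = ConvRecurrentEncoder()(torch.randn(8, 128, 4))
```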
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all content) and is not responsible for any consequences of its use.