Driver Glance Classification In-the-wild: Towards Generalization Across
Domains and Subjects
- URL: http://arxiv.org/abs/2012.02906v2
- Date: Wed, 20 Jan 2021 03:33:19 GMT
- Title: Driver Glance Classification In-the-wild: Towards Generalization Across
Domains and Subjects
- Authors: Sandipan Banerjee, Ajjen Joshi, Jay Turcot, Bryan Reimer and Taniya
Mishra
- Abstract summary: Advanced driver assistance systems (ADAS) with the ability to detect driver distraction can help prevent accidents and improve driver safety.
We propose a model that takes as input a patch of the driver's face along with a crop of the eye-region and classifies their glance into 6 coarse regions-of-interest (ROIs) in the vehicle.
- Score: 5.562102367018285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Distracted drivers are dangerous drivers. Equipping advanced driver
assistance systems (ADAS) with the ability to detect driver distraction can
help prevent accidents and improve driver safety. In order to detect driver
distraction, an ADAS must be able to monitor the driver's visual attention. We propose
a model that takes as input a patch of the driver's face along with a crop of
the eye-region and classifies their glance into 6 coarse regions-of-interest
(ROIs) in the vehicle. We demonstrate that an hourglass network, trained with
an additional reconstruction loss, allows the model to learn stronger
contextual feature representations than a traditional encoder-only
classification module. To make the system robust to subject-specific variations
in appearance and behavior, we design a personalized hourglass model tuned with
an auxiliary input representing the driver's baseline glance behavior. Finally,
we present a weakly supervised multi-domain training regimen that enables the
hourglass to jointly learn representations from different domains (varying in
camera type and angle), utilizing unlabeled samples and thereby reducing
annotation cost.
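To make the core idea concrete, here is a minimal PyTorch sketch of an hourglass-style classifier trained with a joint classification and reconstruction objective. The layer sizes, the channel-wise stacking of the face and eye crops, the 6-dimensional baseline-behavior vector, and the loss weight `alpha` are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# an encoder-decoder "hourglass" whose bottleneck feeds a 6-way glance
# classifier, with an auxiliary reconstruction loss on the decoder output.
import torch
import torch.nn as nn

class HourglassGlanceNet(nn.Module):
    def __init__(self, num_rois=6, baseline_dim=6):
        super().__init__()
        # Encoder: face patch and eye-region crop stacked channel-wise
        # (2 x 3 RGB channels = 6 input channels), compressed to a bottleneck.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstructs the input from the bottleneck; the extra
        # reconstruction loss pushes the bottleneck to retain context.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 6, 4, stride=2, padding=1),
        )
        # Classifier head: pooled bottleneck features fused with a vector
        # summarizing the driver's baseline glance behavior (personalization).
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(128 + baseline_dim, num_rois)

    def forward(self, face_and_eyes, baseline):
        z = self.encoder(face_and_eyes)
        recon = self.decoder(z)
        feat = self.pool(z).flatten(1)
        logits = self.head(torch.cat([feat, baseline], dim=1))
        return logits, recon

def loss_fn(logits, recon, inputs, labels, alpha=0.1):
    # Joint objective: 6-way glance classification + weighted reconstruction.
    cls = nn.functional.cross_entropy(logits, labels)
    rec = nn.functional.mse_loss(recon, inputs)
    return cls + alpha * rec
```

In a setup like this, the decoder is typically discarded at inference time; the reconstruction branch exists only to regularize the bottleneck features during training.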
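The multi-domain regimen is described only at a high level in the abstract. One common way to exploit unlabeled samples from a new camera domain is confidence-thresholded pseudo-labeling, sketched below against the assumed `HourglassGlanceNet` interface above; the paper's actual weakly supervised scheme may differ.

```python
# Hedged sketch of a generic weakly supervised multi-domain recipe:
# confidence-thresholded pseudo-labeling of unlabeled frames from a new
# camera domain, which can then be mixed into the labeled training set.
import torch

@torch.no_grad()
def pseudo_label(model, unlabeled_loader, threshold=0.9, device="cpu"):
    """Return (inputs, labels) for confident predictions on a new domain."""
    model.eval()
    kept_x, kept_y = [], []
    for x, baseline in unlabeled_loader:
        logits, _ = model(x.to(device), baseline.to(device))
        probs = logits.softmax(dim=1)
        conf, pred = probs.max(dim=1)
        mask = conf > threshold            # keep only confident frames
        kept_x.append(x[mask.cpu()])
        kept_y.append(pred[mask].cpu())
    return torch.cat(kept_x), torch.cat(kept_y)
```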
Related papers
- Towards Infusing Auxiliary Knowledge for Distracted Driver Detection [11.816566371802802]
Distracted driving is a leading cause of road accidents globally.
We propose KiD3, a novel method for distracted driver detection (DDD) by infusing auxiliary knowledge about semantic relations between entities in a scene and the structural configuration of the driver's pose.
Specifically, we construct a unified framework that integrates scene graphs and driver pose information with the visual cues in video frames to create a holistic representation of the driver's actions.
arXiv Detail & Related papers (2024-08-29T15:28:42Z) - Federated Learning for Drowsiness Detection in Connected Vehicles [0.19116784879310028]
Driver monitoring systems can assist in determining the driver's state.
Driver drowsiness detection presents a potential solution.
However, transmitting the data to a central machine for model training is impractical due to the large data size and privacy concerns.
We propose a federated learning framework for drowsiness detection within a vehicular network, leveraging the YawDD dataset.
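The core of such a federated setup is that vehicles train locally and share only model weights, never raw video. Below is a hedged sketch of FedAvg-style aggregation; the paper's exact aggregation rule and client protocol are not specified here.

```python
# Illustrative FedAvg-style server step: average per-parameter weights
# from several locally trained client models (an assumption about the
# aggregation scheme, not the paper's confirmed method).
import torch

def fed_avg(client_state_dicts):
    """Average the state dicts of locally trained client models."""
    avg = {}
    for name in client_state_dicts[0]:
        avg[name] = torch.stack(
            [sd[name].float() for sd in client_state_dicts]
        ).mean(dim=0)
    return avg
```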
arXiv Detail & Related papers (2024-05-06T09:39:13Z) - Visual Saliency Detection in Advanced Driver Assistance Systems [7.455416595124159]
We present an intelligent system that combines a drowsiness detection system for drivers with a scene comprehension pipeline based on saliency.
We employ an innovative biosensor embedded on the car steering wheel to monitor the driver.
A dedicated 1D temporal deep convolutional network has been devised to classify the collected PPG time-series.
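A 1D temporal CNN of this kind slides convolutions along the time axis of the physiological signal. The sketch below is illustrative only; the cited paper's exact layer configuration and class labels are assumptions.

```python
# Assumed sketch of a 1D temporal CNN for classifying PPG windows into
# driver states (e.g., drowsy vs. alert); sizes are illustrative.
import torch
import torch.nn as nn

ppg_classifier = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),   # 2 hypothetical classes: drowsy / alert
)

# 8 PPG windows of 256 samples each -> logits of shape (8, 2)
logits = ppg_classifier(torch.randn(8, 1, 256))
```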
arXiv Detail & Related papers (2023-07-26T15:41:54Z) - Infrastructure-based End-to-End Learning and Prevention of Driver
Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
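A recurrent classifier over trajectory sequences, in the spirit of FailureNet, could look like the sketch below; the actual state inputs and layer sizes used in the paper may differ.

```python
# Illustrative GRU classifier over driver trajectories (sequences of
# position/speed states); architecture details are assumptions.
import torch.nn as nn

class TrajectoryClassifier(nn.Module):
    def __init__(self, state_dim=3, hidden=64, num_classes=2):
        super().__init__()
        self.rnn = nn.GRU(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)  # nominal vs. reckless

    def forward(self, traj):            # traj: (batch, time, state_dim)
        _, h = self.rnn(traj)           # h: (num_layers, batch, hidden)
        return self.head(h[-1])
```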
arXiv Detail & Related papers (2023-03-21T22:55:51Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Cognitive Accident Prediction in Driving Scenes: A Multimodality
Benchmark [77.54411007883962]
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition, using text descriptions of the visual observations and driver attention to facilitate model training.
CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
arXiv Detail & Related papers (2022-12-19T11:43:02Z) - FBLNet: FeedBack Loop Network for Driver Attention Prediction [50.936478241688114]
Non-objective driving experience is difficult to model, so existing methods lack a mechanism that simulates how drivers accumulate experience.
We propose a FeedBack Loop Network (FBLNet), which attempts to model the driving experience accumulation procedure.
Our model exhibits a solid advantage over existing methods, achieving an outstanding performance improvement on two driver attention benchmark datasets.
arXiv Detail & Related papers (2022-12-05T08:25:09Z) - TransDARC: Transformer-based Driver Activity Recognition with Latent
Space Feature Calibration [31.908276711898548]
We present a vision-based framework for recognizing secondary driver behaviours based on visual transformers and an augmented feature distribution calibration module.
Our framework consistently leads to better recognition rates, surpassing previous state-of-the-art results on the public Drive&Act benchmark at all annotation levels.
arXiv Detail & Related papers (2022-03-02T08:14:06Z) - Self-Supervised Steering Angle Prediction for Vehicle Control Using
Visual Odometry [55.11913183006984]
We show how a model can be trained to control a vehicle's trajectory using camera poses estimated through visual odometry methods.
We propose a scalable framework that leverages trajectory information from several different runs using a camera setup placed at the front of a car.
arXiv Detail & Related papers (2021-03-20T16:29:01Z) - Driver Drowsiness Classification Based on Eye Blink and Head Movement
Features Using the k-NN Algorithm [8.356765961526955]
This work extends driver drowsiness detection in vehicles using signals from a driver monitoring camera.
For this purpose, 35 features related to the driver's eye blinking behavior and head movements are extracted in driving simulator experiments.
A concluding analysis of the best performing feature sets yields valuable insights about the influence of drowsiness on the driver's blink behavior and head movements.
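The classification step on such hand-crafted features is straightforward; a minimal scikit-learn sketch follows, with dummy data standing in for the 35 blink and head-movement features (the value of k and the labels are assumptions).

```python
# Minimal k-NN classification sketch over 35 hand-crafted features;
# the data here is random and stands in for real feature vectors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 35))    # 35 blink/head-movement features
y_train = rng.integers(0, 2, size=200)  # dummy labels: alert (0) / drowsy (1)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(knn.predict(rng.normal(size=(1, 35))))
```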
arXiv Detail & Related papers (2020-09-28T12:37:38Z) - Driver Intention Anticipation Based on In-Cabin and Driving Scene
Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.