On the Forces of Driver Distraction: Explainable Predictions for the
Visual Demand of In-Vehicle Touchscreen Interactions
- URL: http://arxiv.org/abs/2301.02065v1
- Date: Thu, 5 Jan 2023 13:50:26 GMT
- Title: On the Forces of Driver Distraction: Explainable Predictions for the
Visual Demand of In-Vehicle Touchscreen Interactions
- Authors: Patrick Ebel, Christoph Lingenfelder, Andreas Vogelsang
- Abstract summary: In-vehicle touchscreen Human-Machine Interfaces (HMIs) must distract drivers as little as possible.
This paper presents a machine learning method that predicts the visual demand of in-vehicle touchscreen interactions.
- Score: 5.375634674639956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With modern infotainment systems, drivers are increasingly tempted to engage
in secondary tasks while driving. Since distracted driving is already one of
the main causes of fatal accidents, in-vehicle touchscreen Human-Machine
Interfaces (HMIs) must distract drivers as little as possible. To ensure that
these systems are safe to use, they undergo elaborate and expensive empirical
testing, requiring fully functional prototypes. Thus, early-stage methods
informing designers about the implications their design may have on driver
distraction are of great value. This paper presents a machine learning method
that, based on anticipated usage scenarios, predicts the visual demand of
in-vehicle touchscreen interactions and provides local and global explanations
of the factors influencing drivers' visual attention allocation. The approach
is based on large-scale natural driving data continuously collected from
production line vehicles and employs the SHapley Additive exPlanation (SHAP)
method to provide explanations that support informed design decisions. Our
approach is more accurate than related work and identifies interactions during
which long glances occur with 68 % accuracy and predicts the total glance
duration with a mean error of 2.4 s. Our explanations replicate the results of
various recent studies and provide fast and easily accessible insights into the
effect of UI elements, driving automation, and vehicle speed on driver
distraction. The system can not only help designers evaluate current designs
but also help them anticipate and better understand the implications their
design decisions might have on future designs.
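The abstract outlines the recipe only at a high level: a supervised model over interaction-level features whose predictions are explained with SHAP. The following is a minimal sketch of that recipe, not the authors' pipeline; the feature names (vehicle_speed_kmh, automation_level, n_ui_elements, list_interaction), the synthetic data, and the choice of a gradient-boosted regressor are all illustrative assumptions.

```python
# Minimal sketch: predict the total glance duration of a touchscreen
# interaction and explain the prediction with SHAP. All feature names,
# data, and the model choice are illustrative assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical interaction-level features standing in for the
# "anticipated usage scenarios" mentioned in the abstract.
X = pd.DataFrame({
    "vehicle_speed_kmh": rng.uniform(0, 130, n),
    "automation_level": rng.integers(0, 3, n),   # 0 = manual, 2 = hands-off
    "n_ui_elements": rng.integers(1, 10, n),
    "list_interaction": rng.integers(0, 2, n),   # 1 if the task involves a list
})
# Synthetic target: total glance duration in seconds.
y = (0.02 * X["vehicle_speed_kmh"] + 0.8 * X["n_ui_elements"]
     + 1.5 * X["list_interaction"] - 0.5 * X["automation_level"]
     + rng.normal(0, 1, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Local explanation: per-feature contributions for a single interaction.
print("local:", dict(zip(X.columns, shap_values[0].round(2))))
# Global explanation: mean absolute SHAP value per feature.
print("global:", dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(2))))
```

In the paper's setting, the per-interaction SHAP values would serve as the local explanations, while aggregating their absolute values across interactions gives the global effect of factors such as UI elements, driving automation, and vehicle speed.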
Related papers
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- Real-Time Detection and Analysis of Vehicles and Pedestrians using Deep Learning [0.0]
Current traffic monitoring systems face major difficulties in recognizing small objects and pedestrians effectively in real time.
Our project focuses on developing and validating an advanced deep-learning framework for precise, real-time recognition of cars and people from complex visual input.
The YOLOv8 Large variant proved the most effective, particularly for pedestrian recognition, offering high precision and robustness.
arXiv Detail & Related papers (2024-04-11T18:42:14Z)
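As a rough usage sketch for the detection setup summarized in the entry above, the snippet below runs a pretrained YOLOv8 Large model through the ultralytics package, restricted to the COCO classes for persons and cars. The video path is a placeholder, and the paper's exact data and configuration are not reproduced here.

```python
# Sketch: real-time vehicle/pedestrian detection with a pretrained
# YOLOv8 Large model (pip install ultralytics). "traffic.mp4" is a
# placeholder input; any video file or camera index works as source.
from ultralytics import YOLO

model = YOLO("yolov8l.pt")  # YOLOv8 Large, pretrained on COCO

# COCO class 0 = person, 2 = car; stream=True yields results frame by
# frame instead of collecting them all in memory.
for result in model.predict(source="traffic.mp4", classes=[0, 2], stream=True):
    for box in result.boxes:
        label = result.names[int(box.cls)]
        print(f"{label}: conf={float(box.conf):.2f}, xyxy={box.xyxy.tolist()}")
```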
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Driving experience is non-objective and therefore difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) that attempts to model how driving experience is accumulated.
Guided by this incremental knowledge, our model fuses CNN and Transformer features extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
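The FBLNet summary above describes fusing CNN and Transformer features into a driver-attention prediction. The toy PyTorch module below illustrates one generic concatenation-based fusion of a convolutional feature map with Transformer-encoded global context; it is an assumption for illustration, not FBLNet's architecture, and it omits the feedback loop and incremental-knowledge guidance entirely.

```python
# Toy two-branch fusion for dense attention-map prediction, loosely
# inspired by the summary above. NOT FBLNet: the feedback loop and
# incremental-knowledge guidance are omitted.
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(              # local CNN features
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Conv2d(2 * dim, 1, 1)   # fused features -> attention map

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        feat = self.cnn(img)                       # (B, C, H, W)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
        glob = self.transformer(tokens)            # global context per token
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        fused = torch.cat([feat, glob], dim=1)     # simple concat fusion
        return torch.sigmoid(self.head(fused))     # per-pixel attention in [0, 1]

attn = TwoBranchFusion()(torch.randn(1, 3, 32, 32))
print(attn.shape)  # torch.Size([1, 1, 32, 32])
```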
- In-Vehicle Interface Adaptation to Environment-Induced Cognitive Workload [55.41644538483948]
In-vehicle human-machine interfaces (HMIs) have evolved over the years, providing more and more functions that can add to the driver's mental workload.
To tackle this problem, we propose adaptive HMIs that change according to the driver's mental workload.
arXiv Detail & Related papers (2022-10-20T13:42:25Z)
- Effects of Augmented-Reality-Based Assisting Interfaces on Drivers' Object-wise Situational Awareness in Highly Autonomous Vehicles [13.311257059976692]
We focus on a user interface based on augmented reality (AR), which can highlight potential hazards on the road.
Our study shows that the effect of highlighting on drivers' situational awareness (SA) varied with traffic density, object location, and object type.
arXiv Detail & Related papers (2022-06-06T03:23:34Z)
- Vision-Cloud Data Fusion for ADAS: A Lane Change Prediction Case Study [38.65843674620544]
We introduce a novel vision-cloud data fusion methodology that integrates camera images with Digital Twin information from the cloud to help intelligent vehicles make better decisions.
A case study on lane change prediction is conducted to show the effectiveness of the proposed data fusion methodology.
arXiv Detail & Related papers (2021-12-07T23:42:21Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim to predict an occupancy map.
Our approach is the first that can predict the presence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
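To make the occupancy-map formulation above concrete, the helper below rasterizes predicted ego-relative positions into a binary bird's-eye-view grid, which is the kind of output the summary describes. The grid extent, resolution, and function name are arbitrary assumptions for illustration.

```python
# Sketch of the occupancy-map output format: future positions are
# rasterized into a binary bird's-eye-view grid rather than returned as
# per-vehicle trajectories. Grid extent and resolution are arbitrary.
import numpy as np

def rasterize_occupancy(positions_xy: np.ndarray,
                        extent_m: float = 50.0,
                        resolution_m: float = 0.5) -> np.ndarray:
    """Map (N, 2) ego-relative positions in meters to a binary grid."""
    size = int(2 * extent_m / resolution_m)
    grid = np.zeros((size, size), dtype=np.uint8)
    idx = ((positions_xy + extent_m) / resolution_m).astype(int)
    inside = ((idx >= 0) & (idx < size)).all(axis=1)
    grid[idx[inside, 1], idx[inside, 0]] = 1   # row = y, column = x
    return grid

# Example: one predicted position ahead-left, one behind-right of the ego.
grid = rasterize_occupancy(np.array([[10.0, 20.0], [-5.0, -12.5]]))
print(grid.shape, int(grid.sum()))  # (200, 200) 2
```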
- DRIVE: Deep Reinforced Accident Anticipation with Visual Explanation [36.350348194248014]
Traffic accident anticipation aims to accurately and promptly predict the occurrence of a future accident from dashcam videos.
Existing approaches typically focus on capturing the cues of spatial and temporal context before a future accident occurs.
We propose Deep ReInforced accident anticipation with Visual Explanation, named DRIVE.
arXiv Detail & Related papers (2021-07-21T16:33:21Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for detecting driver intention based on both in-cabin and traffic-scene videos.
The framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)