Effects of Augmented-Reality-Based Assisting Interfaces on Drivers'
Object-wise Situational Awareness in Highly Autonomous Vehicles
- URL: http://arxiv.org/abs/2206.02332v1
- Date: Mon, 6 Jun 2022 03:23:34 GMT
- Title: Effects of Augmented-Reality-Based Assisting Interfaces on Drivers'
Object-wise Situational Awareness in Highly Autonomous Vehicles
- Authors: Xiaofeng Gao, Xingwei Wu, Samson Ho, Teruhisa Misu, Kumar Akash
- Abstract summary: We focus on a user interface based on augmented reality (AR), which can highlight potential hazards on the road.
Our study results show that the effects of highlighting on drivers' SA varied by traffic densities, object locations and object types.
- Score: 13.311257059976692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although partially autonomous driving (AD) systems are already available in
production vehicles, drivers are still required to maintain a sufficient level
of situational awareness (SA) during driving. Previous studies have shown that
providing information about the AD's capability using user interfaces can
improve the driver's SA. However, displaying too much information increases the
driver's workload and can distract or overwhelm the driver. Therefore, to
design an efficient user interface (UI), it is necessary to understand its
effect under different circumstances. In this paper, we focus on a UI based on
augmented reality (AR), which can highlight potential hazards on the road. To
understand the effect of highlighting on drivers' SA for objects with different
types and locations under various traffic densities, we conducted an in-person
experiment with 20 participants on a driving simulator. Our study results show
that the effects of highlighting on drivers' SA varied by traffic densities,
object locations and object types. We believe our study can provide guidance in
selecting which object to highlight for the AR-based driver-assistance
interface to optimize SA for drivers driving and monitoring partially
autonomous vehicles.
Related papers
- Towards Infusing Auxiliary Knowledge for Distracted Driver Detection [11.816566371802802]
Distracted driving is a leading cause of road accidents globally.
We propose KiD3, a novel method for distracted driver detection (DDD) by infusing auxiliary knowledge about semantic relations between entities in a scene and the structural configuration of the driver's pose.
Specifically, we construct a unified framework that integrates the scene graphs, and driver pose information with the visual cues in video frames to create a holistic representation of the driver's actions.
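The multi-cue integration described above can be pictured, at its simplest, as combining scene-graph, pose, and visual features into one holistic vector. This is an illustrative sketch only, not KiD3's actual architecture; the function name and shapes are assumptions, and a real system would use learned encoders rather than raw concatenation.

```python
import numpy as np

def holistic_representation(visual_feat, scene_graph_emb, pose_keypoints):
    # Flatten the pose keypoints (N joints x 2 coords) and concatenate all
    # three cues into a single vector. Purely illustrative: KiD3 fuses these
    # modalities with learned components, not raw concatenation.
    pose_vec = np.asarray(pose_keypoints, dtype=float).ravel()
    return np.concatenate([visual_feat, scene_graph_emb, pose_vec])
```

A downstream classifier would then predict the distraction class from this combined representation.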
arXiv Detail & Related papers (2024-08-29T15:28:42Z)
- Learning Driver Models for Automated Vehicles via Knowledge Sharing and Personalization [2.07180164747172]
This paper describes a framework for learning Automated Vehicles (AVs) driver models via knowledge sharing between vehicles and personalization.
It finds several applications across transportation engineering including intelligent transportation systems, traffic management, and vehicle-to-vehicle communication.
arXiv Detail & Related papers (2023-08-31T17:18:15Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- On the Forces of Driver Distraction: Explainable Predictions for the Visual Demand of In-Vehicle Touchscreen Interactions [5.375634674639956]
In-vehicle touchscreen Human-Machine Interfaces (HMIs) must distract the driver as little as possible.
This paper presents a machine learning method that predicts the visual demand of in-vehicle touchscreen interactions.
arXiv Detail & Related papers (2023-01-05T13:50:26Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Driving experience is non-objective and therefore difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
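The knowledge-guided fusion described above can be sketched as a gated blend of the two feature streams. This is a minimal illustration under assumed names and shapes, not FBLNet's actual learned fusion, which is trained end-to-end.

```python
import numpy as np

def fuse_features(cnn_feat, trans_feat, knowledge):
    # Derive a per-channel gate from the accumulated "knowledge" vector via a
    # sigmoid, then blend the CNN and Transformer features. Illustrative only:
    # FBLNet learns its fusion rather than using a fixed gating rule.
    gate = 1.0 / (1.0 + np.exp(-knowledge))
    return gate * cnn_feat + (1.0 - gate) * trans_feat
```

As the gate saturates toward 1, the fused output leans entirely on the CNN features; toward 0, on the Transformer features.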
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- In-Vehicle Interface Adaptation to Environment-Induced Cognitive Workload [55.41644538483948]
In-vehicle human-machine interfaces (HMIs) have evolved over the years, offering more and more functions, which can raise the driver's cognitive workload.
To tackle this problem, we propose adaptive HMIs that change according to the mental workload of the driver.
arXiv Detail & Related papers (2022-10-20T13:42:25Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Audiovisual Affect Assessment and Autonomous Automobiles: Applications [0.0]
This contribution aims to foresee the corresponding challenges and outline potential avenues towards affect modelling in a multimodal "audiovisual plus x" on-the-road context.
From the technical end, this concerns holistic passenger modelling and reliable diarisation of the individuals in a vehicle.
In conclusion, automated affect analysis has just matured to the point of applicability in autonomous vehicles in first selected use-cases.
arXiv Detail & Related papers (2022-03-14T20:39:02Z)
- TransDARC: Transformer-based Driver Activity Recognition with Latent Space Feature Calibration [31.908276711898548]
We present a vision-based framework for recognizing secondary driver behaviours based on visual transformers and an augmented feature distribution calibration module.
Our framework consistently leads to better recognition rates, surpassing previous state-of-the-art results of the public Drive&Act benchmark on all levels.
arXiv Detail & Related papers (2022-03-02T08:14:06Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
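The attention-based integration of image and LiDAR representations can be sketched with a single unlearned cross-attention step, where image tokens attend over LiDAR tokens. This is a toy illustration of the general technique, not TransFuser's actual multi-layer, multi-head architecture; all names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(img_tokens, lidar_tokens, d_k):
    # Image tokens act as queries, LiDAR tokens as keys and values (single
    # head, no learned projections for brevity; a real model would project
    # Q, K, V with learned weight matrices).
    scores = img_tokens @ lidar_tokens.T / np.sqrt(d_k)
    attn = softmax(scores, axis=-1)
    # Residual add of the attended LiDAR context onto the image features.
    return img_tokens + attn @ lidar_tokens
```

Each fused token thus carries its original image feature plus a weighted summary of the LiDAR tokens it attends to.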
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Improving Robustness of Learning-based Autonomous Steering Using Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm with respect to varying quality of the image input for autonomous driving.
Using the results of the sensitivity analysis, we propose an algorithm to improve the overall performance of the "learning to steer" task.
arXiv Detail & Related papers (2021-02-26T02:08:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.