To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles
- URL: http://arxiv.org/abs/2006.11684v3
- Date: Thu, 10 Nov 2022 21:44:34 GMT
- Title: To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles
- Authors: Yuan Shen, Shanduojiao Jiang, Yanlin Chen, Katie Driggs-Campbell
- Abstract summary: We present a self-driving explanation dataset with first-person explanations and associated explanation-necessity measures for 1103 video clips.
Our research reveals that driver types and driving scenarios dictate whether an explanation is necessary.
In particular, people tend to agree on the necessity of explanations for near-crash events but hold differing opinions on ordinary or anomalous driving situations.
- Score: 26.095533634997786
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI, in the context of autonomous systems like self-driving
cars, has drawn broad interest from researchers. Recent studies have found that
providing explanations for autonomous vehicles' actions has many benefits
(e.g., increased trust and acceptance), but have placed little emphasis on when
an explanation is needed and how the content of an explanation changes with the
driving context. In this work, we investigate in which scenarios people need
explanations and how the critical degree of explanation shifts with situations
and driver types. Through a user experiment, we ask participants to evaluate how
necessary an explanation is and measure its impact on their trust in self-driving
cars in different contexts. Moreover, we present a self-driving explanation
dataset with first-person explanations and associated explanation-necessity
measures for 1103 video clips, augmenting the Berkeley Deep Drive Attention
dataset. Our research reveals that driver types and driving scenarios dictate
whether an explanation is necessary. In particular, people tend to agree on the
necessity of explanations for near-crash events but hold differing opinions on
ordinary or anomalous driving situations.
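As a purely illustrative reading of the dataset description above, one annotation record might be modeled as below; the field names, types, and rating scale are assumptions made for this sketch, not the authors' released schema.

```python
from dataclasses import dataclass

# Hypothetical record for the explanation dataset described in the abstract.
# Field names and the rating scale are assumptions, not the released schema.
@dataclass
class ExplanationAnnotation:
    clip_id: str        # ID of the annotated BDD-A video clip
    explanation: str    # first-person explanation of the driving action
    necessity: float    # rated necessity of an explanation, assumed in [0, 1]
    scenario: str       # e.g., "near-crash", "ordinary", "anomalous"

def mean_necessity(records: list[ExplanationAnnotation], scenario: str) -> float:
    """Average necessity rating across all clips of one scenario type."""
    scores = [r.necessity for r in records if r.scenario == scenario]
    return sum(scores) / len(scores) if scores else float("nan")
```

Aggregating ratings per scenario in this way mirrors the paper's finding: scores would cluster tightly for near-crash clips and spread widely for ordinary or anomalous ones.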
Related papers
- A Transparency Paradox? Investigating the Impact of Explanation Specificity and Autonomous Vehicle Perceptual Inaccuracies on Passengers [14.394590436638232]
We examined the effects of transparency mediated through varying levels of explanation specificity in autonomous driving.
Specifically, our study focused on how different types of explanations (specific vs. abstract) affect passengers' perceived safety, anxiety, and willingness to take control of the vehicle.
Our findings showed that passengers felt safer with specific explanations when the vehicle's perception system had minimal errors, while abstract explanations that hid perception errors led to lower feelings of safety.
arXiv Detail & Related papers (2024-08-16T14:59:00Z)
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior [22.138074429937795]
Cognitive science can help us understand which explanations people might expect and the format in which to frame them.
We report empirical data from two surveys on how people explain the behavior of autonomous vehicles in 14 unique scenarios.
Participants deemed teleological explanations to be of significantly higher quality than counterfactual ones.
arXiv Detail & Related papers (2024-03-11T11:48:50Z)
- Effects of Explanation Specificity on Passengers in Autonomous Driving [9.855051716204002]
We investigate the effects of natural language explanations' specificity on passengers in autonomous driving.
We generated auditory natural language explanations with different levels of specificity (abstract and specific).
Our results showed that both abstract and specific explanations had similar positive effects on passengers' perceived safety and the feeling of anxiety.
arXiv Detail & Related papers (2023-07-02T18:40:05Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Driving experience is subjective and therefore difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) that attempts to model the accumulation of driving experience.
Under the guidance of this incremental knowledge, our model fuses CNN and Transformer features extracted from the input image to predict driver attention (a minimal fusion sketch follows this entry).
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
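The fusion step in the FBLNet summary could, under generous assumptions, be sketched as the PyTorch-style module below; the channel sizes, module choices, and the shape of the accumulated knowledge tensor are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Minimal sketch: fuse CNN and Transformer features under the guidance
    of an accumulated 'knowledge' tensor to predict an attention map.
    Shapes and module choices are illustrative, not FBLNet's design."""

    def __init__(self, cnn_ch: int = 256, trans_ch: int = 256, know_ch: int = 128):
        super().__init__()
        self.know_proj = nn.Conv2d(know_ch, cnn_ch, kernel_size=1)
        self.fuse = nn.Conv2d(cnn_ch + trans_ch, 256, kernel_size=3, padding=1)
        self.head = nn.Conv2d(256, 1, kernel_size=1)  # 1-channel attention map

    def forward(self, cnn_feat, trans_feat, knowledge):
        # cnn_feat: (B, cnn_ch, H, W); trans_feat: (B, trans_ch, H, W)
        # knowledge: (B, know_ch, H, W), accumulated across iterations
        guided = cnn_feat + self.know_proj(knowledge)  # inject guidance
        fused = torch.relu(self.fuse(torch.cat([guided, trans_feat], dim=1)))
        return torch.sigmoid(self.head(fused))  # per-pixel attention in [0, 1]
```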
- From Spoken Thoughts to Automated Driving Commentary: Predicting and Explaining Intelligent Vehicles' Actions [10.557942353553859]
In commentary driving, drivers verbalise their observations, assessments and intentions.
By speaking out their thoughts, both learning and expert drivers are able to create a better understanding and awareness of their surroundings.
In the intelligent vehicle context, automated driving commentary can provide intelligible explanations about driving actions.
arXiv Detail & Related papers (2022-04-19T19:39:13Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim to predict an occupancy map.
Our approach is the first that can predict the existence of unseen vehicles in most cases (a toy contrast between the two output representations is sketched after this entry).
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
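To make the contrast in the summary above concrete, the sketch below compares a per-agent trajectory output with an occupancy-map output; the grid size, horizon, and agent count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

T = 10         # prediction horizon in timesteps (assumed)
H = W = 100    # bird's-eye-view grid resolution (assumed)
N_AGENTS = 5   # number of tracked (seen) vehicles (assumed)

# Trajectory prediction (seen vehicles only): one (x, y) point per agent per
# future timestep; undefined for vehicles the perception stack never detected.
trajectories = np.zeros((N_AGENTS, T, 2))

# Occupancy prediction: per-cell probability that *any* vehicle, tracked or
# not, occupies the cell at each future timestep, so unseen vehicles can
# still be accounted for.
occupancy = np.zeros((T, H, W))
occupancy[0, 40:43, 50:53] = 0.8  # e.g., a region likely occupied at t=0
```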
- Explainability of vision-based autonomous driving systems: Review and challenges [33.720369945541805]
The need for explainability is strong in driving, a safety-critical application.
This survey gathers contributions from several research fields, namely computer vision, deep learning, autonomous driving, and explainable AI (X-AI).
arXiv Detail & Related papers (2021-01-13T19:09:38Z)
- Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle [58.720142291102135]
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing.
Existing outside-the-vehicle referencing methods focus on a static situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints.
We investigate the specific characteristics of each modality and the interaction between them when used in the task of referencing outside objects.
arXiv Detail & Related papers (2020-09-23T14:56:19Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and in a more human-like way.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.