A Transparency Paradox? Investigating the Impact of Explanation Specificity and Autonomous Vehicle Perceptual Inaccuracies on Passengers
- URL: http://arxiv.org/abs/2408.08785v1
- Date: Fri, 16 Aug 2024 14:59:00 GMT
- Title: A Transparency Paradox? Investigating the Impact of Explanation Specificity and Autonomous Vehicle Perceptual Inaccuracies on Passengers
- Authors: Daniel Omeiza, Raunak Bhattacharyya, Marina Jirotka, Nick Hawes, Lars Kunze
- Abstract summary: We examined the effects of transparency mediated through varying levels of explanation specificity in autonomous driving.
Specifically, our study focused on how different types of explanations (specific vs. abstract) affect passengers' perceived safety, anxiety, and willingness to take control of the vehicle.
Our findings showed that passengers felt safer with specific explanations when the vehicle's perception system had minimal errors, while abstract explanations that hid perception errors led to lower feelings of safety.
- Score: 14.394590436638232
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transparency in automated systems could be afforded through the provision of intelligible explanations. While transparency is desirable, might it lead to catastrophic outcomes (such as anxiety) that could outweigh its benefits? It is unclear how the specificity of explanations (level of transparency) influences recipients, especially in autonomous driving (AD). In this work, we examined the effects of transparency mediated through varying levels of explanation specificity in AD. We first extended a data-driven explainer model by adding a rule-based option for explanation generation in AD, and then conducted a within-subject lab study with 39 participants in an immersive driving simulator to study the effect of the resulting explanations. Specifically, our investigation focused on: (1) how different types of explanations (specific vs. abstract) affect passengers' perceived safety, anxiety, and willingness to take control of the vehicle when the vehicle perception system makes erroneous predictions; and (2) the relationship between passengers' behavioural cues and their feelings during the autonomous drives. Our findings showed that passengers felt safer with specific explanations when the vehicle's perception system had minimal errors, while abstract explanations that hid perception errors led to lower feelings of safety. Anxiety levels increased when specific explanations revealed perception system errors (high transparency). We found no significant link between passengers' visual patterns and their anxiety levels. Our study suggests that passengers prefer clear and specific explanations (high transparency) when they originate from autonomous vehicles (AVs) with optimal perceptual accuracy.
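The abstract mentions a rule-based option for explanation generation but does not spell it out. As a rough illustration only, here is a minimal Python sketch of how a rule-based generator might emit explanations at the two specificity levels studied; every name, template, and data structure below is a hypothetical stand-in, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", possibly a misclassification
    confidence: float  # perception confidence in [0, 1]

# Hypothetical action -> phrasing rules for the ego vehicle.
RULES = {
    "stop": ("stopping", "stopping because of the {cause} ahead"),
    "slow": ("slowing down", "slowing down because of the {cause} ahead"),
}

def explain(action: str, detections: list[Detection], specific: bool) -> str:
    """Generate one auditory explanation at an abstract or specific level.

    Abstract explanations state only the action; specific explanations
    also name the perceived cause, which is what exposes perception
    errors to the passenger when a detection is wrong.
    """
    abstract_phrase, specific_template = RULES[action]
    if not specific or not detections:
        return f"The vehicle is {abstract_phrase}."
    # Name the highest-confidence detection as the cause -- even if it
    # is a misclassification, which makes the error audible.
    cause = max(detections, key=lambda d: d.confidence).label
    return "The vehicle is " + specific_template.format(cause=cause) + "."

detections = [Detection("pedestrian", 0.91)]
print(explain("stop", detections, specific=True))   # names the cause
print(explain("stop", detections, specific=False))  # hides the cause
```

Under this toy framing, the paradox in the title falls out directly: the specific template is more informative precisely because it surfaces whatever the perception system believed, accurate or not.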
Related papers
- What Did My Car Say? Impact of Autonomous Vehicle Explanation Errors and Driving Context On Comfort, Reliance, Satisfaction, and Driving Confidence [7.623776951753322]
We tested how autonomous vehicle (AV) explanation errors affected a passenger's comfort in relying on an AV.
Despite identical driving, explanation errors reduced ratings of the AV's driving ability.
Prior trust and expertise were positively associated with outcome ratings.
arXiv Detail & Related papers (2024-09-09T15:41:53Z)
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- Effects of Explanation Specificity on Passengers in Autonomous Driving [9.855051716204002]
We investigate the effects of natural language explanations' specificity on passengers in autonomous driving.
We generated auditory natural language explanations with different levels of specificity (abstract and specific).
Our results showed that both abstract and specific explanations had similar positive effects on passengers' perceived safety and the feeling of anxiety.
arXiv Detail & Related papers (2023-07-02T18:40:05Z)
- Studying the Impact of Semi-Cooperative Drivers on Overall Highway Flow [76.38515853201116]
Semi-cooperative behaviors are intrinsic properties of human drivers and should be considered for autonomous driving.
New autonomous planners can consider the social value orientation (SVO) of human drivers to generate socially-compliant trajectories.
We present a study of implicit semi-cooperative driving where agents deploy a game-theoretic version of iterated best response.
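The planner itself is not described in this summary; the following is a toy, hypothetical sketch of the SVO-weighted iterated best response idea only (the reward function and action set are invented for illustration). Each agent repeatedly picks the action maximizing a utility that mixes its own reward with the other agent's reward according to its social value orientation angle.

```python
import math

SPEEDS = [0.0, 0.5, 1.0]  # toy action set: normalized speeds

def reward(own_speed: float, other_speed: float) -> float:
    # Toy reward: prefer speed, but penalize both agents being fast at
    # once (a stand-in for conflict at a merge point).
    return own_speed - 1.5 * own_speed * other_speed

def svo_utility(phi: float, own_speed: float, other_speed: float) -> float:
    # Social value orientation: phi = 0 is egoistic; phi = pi/4 weighs
    # the other agent's reward equally (prosocial).
    return (math.cos(phi) * reward(own_speed, other_speed)
            + math.sin(phi) * reward(other_speed, own_speed))

def iterated_best_response(phi_a: float, phi_b: float, iters: int = 20):
    a, b = SPEEDS[-1], SPEEDS[-1]  # both agents start greedy
    for _ in range(iters):
        a = max(SPEEDS, key=lambda s: svo_utility(phi_a, s, b))
        b = max(SPEEDS, key=lambda s: svo_utility(phi_b, s, a))
    return a, b

print(iterated_best_response(0.0, 0.0))          # two egoists
print(iterated_best_response(math.pi / 4, 0.0))  # prosocial vs. egoist
```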
arXiv Detail & Related papers (2023-04-23T16:01:36Z)
- From Spoken Thoughts to Automated Driving Commentary: Predicting and Explaining Intelligent Vehicles' Actions [10.557942353553859]
In commentary driving, drivers verbalise their observations, assessments and intentions.
By speaking out their thoughts, both learning and expert drivers are able to create a better understanding and awareness of their surroundings.
In the intelligent vehicle context, automated driving commentary can provide intelligible explanations about driving actions.
arXiv Detail & Related papers (2022-04-19T19:39:13Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
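The correction method is only named here; as a hypothetical sketch of the general idea (not the paper's method), each token's displayed saliency could be rescaled by an estimated human perception gain:

```python
def adjust_saliency(saliency: dict[str, float],
                    perception_gain: dict[str, float]) -> dict[str, float]:
    """Rescale token saliencies by an estimated perception gain.

    A gain above 1 means readers over-perceive that token's importance,
    so its displayed saliency is scaled down; below 1 means
    under-perception, so it is scaled up. Purely illustrative.
    """
    return {tok: s / perception_gain.get(tok, 1.0)
            for tok, s in saliency.items()}

raw = {"not": 0.9, "good": 0.6, "the": 0.1}
gain = {"not": 1.5, "the": 0.5}    # hypothetical estimates
print(adjust_saliency(raw, gain))  # {'not': 0.6, 'good': 0.6, 'the': 0.2}
```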
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Giving Commands to a Self-Driving Car: How to Deal with Uncertain Situations? [21.19657707748505]
This paper proposes a model that detects uncertain situations when a command is given and identifies the visual objects causing the uncertainty.
We argue that if the car could explain the objects in a human-like way, passengers could gain more confidence in the car's abilities.
arXiv Detail & Related papers (2021-06-08T10:21:11Z)
- To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles [26.095533634997786]
We present a self-driving explanation dataset with first-person explanations and associated measures of explanation necessity for 1103 video clips.
Our research reveals that driver types and driving scenarios dictate whether an explanation is necessary.
In particular, people tend to agree on the necessity for near-crash events but hold different opinions on ordinary or anomalous driving situations.
arXiv Detail & Related papers (2020-06-21T00:38:24Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
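For reference, the two reported figures follow the standard confusion-matrix definitions (this is textbook arithmetic, not the paper's code; the counts below are invented to make the numbers round):

```python
def accuracy(tp: int, fp: int, fn: int, tn: int) -> float:
    return (tp + tn) / (tp + fp + fn + tn)

def f1_score(tp: int, fp: int, fn: int) -> float:
    # Harmonic mean of precision and recall.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented counts for illustration; the paper reports 83.98% / 84.3%.
print(accuracy(84, 16, 16, 84))  # 0.84
print(f1_score(84, 16, 16))      # 0.84
```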
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is a step towards safer self-driving under unseen conditions with limited training data.
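A real version of this attack requires rendering an adversarial mesh into the point cloud and differentiating through an actual LiDAR detector; none of that is reproduced here. Purely as a conceptual toy, the core loop is gradient descent on the object's shape parameters to minimize the detector's confidence in the hidden vehicle (every function below is an invented stand-in):

```python
import random

def detector_confidence(shape: list[float]) -> float:
    # Stand-in for a LiDAR detector's score on the target vehicle; a
    # real attack would run a full detection pipeline here.
    return sum(x * x for x in shape) / len(shape)

def attack(shape: list[float], steps: int = 200, lr: float = 0.1,
           eps: float = 1e-4) -> list[float]:
    """Minimize detection confidence via finite-difference gradient
    descent on the adversarial object's shape parameters."""
    for _ in range(steps):
        grad = []
        for i in range(len(shape)):
            bumped = shape.copy()
            bumped[i] += eps
            grad.append((detector_confidence(bumped)
                         - detector_confidence(shape)) / eps)
        shape = [x - lr * g for x, g in zip(shape, grad)]
    return shape

initial = [random.uniform(-1.0, 1.0) for _ in range(8)]
adversarial = attack(initial)
print(detector_confidence(initial), "->", detector_confidence(adversarial))
```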
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.