Effects of Explanation Specificity on Passengers in Autonomous Driving
- URL: http://arxiv.org/abs/2307.00633v1
- Date: Sun, 2 Jul 2023 18:40:05 GMT
- Title: Effects of Explanation Specificity on Passengers in Autonomous Driving
- Authors: Daniel Omeiza, Raunak Bhattacharyya, Nick Hawes, Marina Jirotka, Lars Kunze
- Abstract summary: We investigate the effects of natural language explanations' specificity on passengers in autonomous driving.
We generated auditory natural language explanations with different levels of specificity (abstract and specific).
Our results showed that both abstract and specific explanations had similar positive effects on passengers' perceived safety and the feeling of anxiety.
- Score: 9.855051716204002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The nature of explanations provided by an explainable AI algorithm has been a
topic of interest in the explainable AI and human-computer interaction
community. In this paper, we investigate the effects of natural language
explanations' specificity on passengers in autonomous driving. We extended an
existing data-driven tree-based explainer algorithm by adding a rule-based
option for explanation generation. We generated auditory natural language
explanations with different levels of specificity (abstract and specific) and
tested these explanations in a within-subject user study (N=39) using an
immersive physical driving simulation setup. Our results showed that both
abstract and specific explanations had similar positive effects on passengers'
perceived safety and the feeling of anxiety. However, the specific explanations
influenced the desire of passengers to take over driving control from the
autonomous vehicle (AV), while the abstract explanations did not. We conclude
that natural language auditory explanations are useful for passengers in
autonomous driving, and their specificity levels could influence how much
in-vehicle participants would wish to be in control of the driving activity.
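The paper itself does not include code. As an illustration only, here is a minimal sketch of the core idea behind rule-based explanation generation at two specificity levels (abstract vs. specific), in the spirit of the rule-based option the authors added to the tree-based explainer. The event fields, templates, and function names are hypothetical assumptions for this sketch, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical record of a driving event detected by the explainer;
# the fields are illustrative, not taken from the paper.
@dataclass
class DrivingEvent:
    action: str    # e.g. "stop", "slow_down"
    cause: str     # e.g. "pedestrian", "red_light"
    location: str  # e.g. "at the crossing ahead"

# Abstract rules mention only the vehicle's action;
# specific rules also name the identified cause and its location.
ABSTRACT_TEMPLATES = {
    "stop": "The vehicle is stopping.",
    "slow_down": "The vehicle is slowing down.",
}

SPECIFIC_TEMPLATES = {
    "stop": "The vehicle is stopping because of a {cause} {location}.",
    "slow_down": "The vehicle is slowing down because of a {cause} {location}.",
}

def explain(event: DrivingEvent, specificity: str) -> str:
    """Return a natural language explanation at the requested specificity."""
    if specificity == "abstract":
        return ABSTRACT_TEMPLATES[event.action]
    return SPECIFIC_TEMPLATES[event.action].format(
        cause=event.cause.replace("_", " "), location=event.location
    )

if __name__ == "__main__":
    event = DrivingEvent("stop", "pedestrian", "at the crossing ahead")
    print(explain(event, "abstract"))  # The vehicle is stopping.
    print(explain(event, "specific"))  # ...because of a pedestrian at the crossing ahead.
```

In the study, explanations of this kind were delivered as audio during the immersive driving simulation; the sketch only shows how an abstract rule can report the action alone while a specific rule also names the causal factor.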
Related papers
- A Transparency Paradox? Investigating the Impact of Explanation Specificity and Autonomous Vehicle Perceptual Inaccuracies on Passengers [14.394590436638232]
We examined the effects of transparency mediated through varying levels of explanation specificity in autonomous driving.
Specifically, our study focused on how different types of explanations (specific vs. abstract) affect passengers' perceived safety, anxiety, and willingness to take control of the vehicle.
Our findings showed that passengers felt safer with specific explanations when the vehicle's perception system had minimal errors, while abstract explanations that hid perception errors led to lower feelings of safety.
arXiv Detail & Related papers (2024-08-16T14:59:00Z)
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior [22.138074429937795]
Cognitive science can help us understand which explanations people might expect, and in which format they frame these explanations.
We report empirical data from two surveys on how people explain the behavior of autonomous vehicles in 14 unique scenarios.
Participants deemed teleological explanations to be of significantly better quality than counterfactual ones.
arXiv Detail & Related papers (2024-03-11T11:48:50Z)
- DME-Driver: Integrating Human Decision Logic and 3D Scene Perception in Autonomous Driving [65.04871316921327]
This paper introduces DME-Driver, a new system that enhances the performance and reliability of autonomous driving.
DME-Driver utilizes a powerful vision language model as the decision-maker and a planning-oriented perception model as the control signal generator.
By leveraging this dataset, our model achieves high-precision planning accuracy through a logical thinking process.
arXiv Detail & Related papers (2024-01-08T03:06:02Z)
- Driving through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving [22.21822829138535]
We propose a new approach using concept bottlenecks as visual features for control command predictions and explanations of user and vehicle behavior.
We learn a human-understandable concept layer that we use to explain sequential driving scenes while learning vehicle control commands.
This approach can then be used to determine whether a change in a preferred gap or in steering commands from a human (or autonomous vehicle) is caused by an external stimulus or by a change in preferences.
arXiv Detail & Related papers (2023-10-25T13:39:04Z)
- Studying the Impact of Semi-Cooperative Drivers on Overall Highway Flow [76.38515853201116]
Semi-cooperative behaviors are intrinsic properties of human drivers and should be considered for autonomous driving.
New autonomous planners can consider the social value orientation (SVO) of human drivers to generate socially-compliant trajectories.
We present a study of implicit semi-cooperative driving where agents deploy a game-theoretic version of iterative best response.
arXiv Detail & Related papers (2023-04-23T16:01:36Z)
- From Spoken Thoughts to Automated Driving Commentary: Predicting and Explaining Intelligent Vehicles' Actions [10.557942353553859]
In commentary driving, drivers verbalise their observations, assessments and intentions.
By speaking out their thoughts, both learning and expert drivers are able to create a better understanding and awareness of their surroundings.
In the intelligent vehicle context, automated driving commentary can provide intelligible explanations about driving actions.
arXiv Detail & Related papers (2022-04-19T19:39:13Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle [58.720142291102135]
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing.
Existing outside-the-vehicle referencing methods focus on a static situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints.
We investigate the specific characteristics of each modality and the interaction between them when used in the task of referencing outside objects.
arXiv Detail & Related papers (2020-09-23T14:56:19Z)
- To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles [26.095533634997786]
We present a self-driving explanation dataset with first-person explanations and associated measures of explanation necessity for 1103 video clips.
Our research reveals that driver types and driving scenarios dictate whether an explanation is necessary.
In particular, people tend to agree on the necessity for near-crash events but hold different opinions on ordinary or anomalous driving situations.
arXiv Detail & Related papers (2020-06-21T00:38:24Z)
- Explainable Object-induced Action Decision for Autonomous Vehicles [53.59781838748779]
A new paradigm for autonomous driving is proposed, inspired by how humans solve the problem.
A CNN architecture is proposed to implement it.
arXiv Detail & Related papers (2020-03-20T17:33:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.