Towards Transparent Robotic Planning via Contrastive Explanations
- URL: http://arxiv.org/abs/2003.07425v1
- Date: Mon, 16 Mar 2020 19:44:31 GMT
- Title: Towards Transparent Robotic Planning via Contrastive Explanations
- Authors: Shenghui Chen, Kayla Boggess and Lu Feng
- Abstract summary: We formalize the notion of contrastive explanations for robotic planning policies based on Markov decision processes.
We present methods for the automated generation of contrastive explanations with three key factors: selectiveness, constrictiveness, and responsibility.
- Score: 1.7231251035416644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Providing explanations of chosen robotic actions can help to increase the
transparency of robotic planning and improve users' trust. Social sciences
suggest that the best explanations are contrastive, explaining not just why one
action is taken, but why one action is taken instead of another. We formalize
the notion of contrastive explanations for robotic planning policies based on
Markov decision processes, drawing on insights from the social sciences. We
present methods for the automated generation of contrastive explanations with
three key factors: selectiveness, constrictiveness, and responsibility. The
results of a user study with 100 participants on the Amazon Mechanical Turk
platform show that our generated contrastive explanations can help to increase
users' understanding and trust of robotic planning policies while reducing
users' cognitive burden.
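To make the abstract's idea concrete, below is a minimal, illustrative sketch of how a contrastive "why action A rather than action B?" explanation could be generated for a tabular MDP policy with a decomposed value function. All names, the data layout, and the factor-ranking heuristic are assumptions for illustration only; they are not the authors' actual formalization of selectiveness, constrictiveness, and responsibility.

```python
# Hypothetical sketch: contrastive explanation for an MDP policy.
# Assumes Q-values and per-factor contributions (e.g., a reward
# decomposition) are already available; none of these names come
# from the paper.
from dataclasses import dataclass

@dataclass
class ContrastiveExplanation:
    state: str
    fact: str          # action the policy chose
    foil: str          # alternative the user asked about
    value_gap: float   # Q(state, fact) - Q(state, foil)
    reasons: list      # selected (factor, contribution-difference) pairs

def explain_contrast(q_values, factor_contribs, state, fact, foil, k=2):
    """Explain why `fact` was chosen over `foil` in `state`.

    q_values: dict mapping (state, action) -> expected return
    factor_contribs: dict mapping (state, action) -> {factor: value}
    k: a crude stand-in for selectiveness -- report only the k
       factors most responsible for the value gap.
    """
    gap = q_values[(state, fact)] - q_values[(state, foil)]
    # Difference in each factor's contribution between fact and foil.
    diffs = {
        f: factor_contribs[(state, fact)][f] - factor_contribs[(state, foil)][f]
        for f in factor_contribs[(state, fact)]
    }
    # Keep only the most decisive factors, largest absolute difference first.
    top = sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
    return ContrastiveExplanation(state, fact, foil, gap, top)

# Toy usage: a robot explains choosing "detour" over "shortcut".
q = {("s0", "detour"): 8.0, ("s0", "shortcut"): 5.5}
contribs = {
    ("s0", "detour"):   {"travel_time": -2.0, "collision_risk": 10.0},
    ("s0", "shortcut"): {"travel_time": -0.5, "collision_risk": 6.0},
}
exp = explain_contrast(q, contribs, "s0", "detour", "shortcut", k=1)
print(f"Chose {exp.fact} over {exp.foil} (value gap {exp.value_gap:+.1f}); "
      f"main factor: {exp.reasons[0][0]} ({exp.reasons[0][1]:+.1f})")
```

Restricting the report to the top-k factors mirrors the social-science observation, cited in the abstract, that good explanations are selective rather than exhaustive.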
Related papers
- Dynamic Explanation Emphasis in Human-XAI Interaction with Communication Robot [2.6396287656676725]
DynEmph is a method for a communication robot to decide where to emphasize XAI-generated explanations with physical expressions.
It predicts the effect of emphasizing certain points on a user and aims to minimize the expected difference between predicted user decisions and AI-suggested ones.
arXiv Detail & Related papers (2024-03-21T16:50:12Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios [1.671353192305391]
We make use of human-like explanations built from an autonomous robot's probability of successfully completing its goal after performing an action.
These explanations are intended to be understood by people who have no or very little experience with artificial intelligence methods.
arXiv Detail & Related papers (2022-07-07T10:40:24Z)
- Understanding a Robot's Guiding Ethical Principles via Automatically Generated Explanations [4.393037165265444]
We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations.
Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.
arXiv Detail & Related papers (2022-06-20T22:55:00Z)
- Two ways to make your robot proactive: reasoning about human intentions, or reasoning about possible futures [69.03494351066846]
We investigate two ways to make robots proactive.
One way is to recognize humans' intentions and to act to fulfill them, like opening the door that you are about to cross.
The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them.
arXiv Detail & Related papers (2022-05-11T13:33:14Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach significantly outperforms two state-of-the-art systems in terms of mental models, explanation satisfaction, trust, emotions, and self-efficacy.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Are We On The Same Page? Hierarchical Explanation Generation for Planning Tasks in Human-Robot Teaming using Reinforcement Learning [0.0]
We argue that agent-generated explanations should be abstracted to match the level of detail the human teammate desires, so as to manage the recipient's cognitive load.
We show that hierarchical explanations achieved better task performance and behavior interpretability while reducing cognitive load.
arXiv Detail & Related papers (2020-12-22T02:14:52Z)
- A Knowledge Driven Approach to Adaptive Assistance Using Preference Reasoning and Explanation [3.8673630752805432]
We propose that the robot use Analogical Theory of Mind to infer what the user is trying to do.
If the user is unsure or confused, the robot provides the user with an explanation.
arXiv Detail & Related papers (2020-12-05T00:18:43Z)
- Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability [62.03014078810652]
Non-verbal cues studied in existing research, e.g., eye gaze or arm movement, may not accurately present a robot's internal states.
Projecting the states directly onto a robot's operating environment has the advantages of being direct, accurate, and more salient.
arXiv Detail & Related papers (2020-10-05T18:16:20Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.