Order Matters: Generating Progressive Explanations for Planning Tasks in
Human-Robot Teaming
- URL: http://arxiv.org/abs/2004.07822v2
- Date: Sat, 17 Oct 2020 01:15:40 GMT
- Title: Order Matters: Generating Progressive Explanations for Planning Tasks in
Human-Robot Teaming
- Authors: Mehrdad Zakershahrak, Shashank Rao Marpally, Akshay Sharma, Ze Gong
and Yu Zhang
- Abstract summary: We aim to investigate effects during explanation generation when an explanation is broken into multiple parts that are communicated sequentially.
We first evaluate our approach on a scavenger-hunt domain to demonstrate its effectiveness in capturing the humans' preferences.
Results confirmed our hypothesis that understanding an explanation is a dynamic process.
- Score: 11.35869940310993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior work on generating explanations in a planning and decision-making
context has focused on providing the rationale behind an AI agent's decision
making. While these methods provide the right explanations from the explainer's
perspective, they fail to heed the cognitive requirement of understanding an
explanation from the explainee's (the human's) perspective. In this work, we
set out to address this issue by first considering the influence of information
order in an explanation, or the progressiveness of explanations. Intuitively,
progression builds later concepts on previous ones and is known to contribute
to better learning. In this work, we aim to investigate similar effects during
explanation generation when an explanation is broken into multiple parts that
are communicated sequentially. The challenge here lies in modeling the humans'
preferences for information order in receiving such explanations to assist
understanding. Given this sequential process, a formulation based on a
goal-based MDP for generating progressive explanations is presented. The reward
function of this MDP is learned via inverse reinforcement learning from
explanations collected in human subject studies. We first evaluated our
approach on a scavenger-hunt domain to demonstrate its effectiveness in
capturing the humans' preferences. Analyzing the results revealed something
more fundamental: the preferences arise strongly from both domain-dependent and
domain-independent features. The correlation with domain-independent features
pushed us to verify this result further in an escape-room domain. Results
confirmed our hypothesis that understanding an explanation is a dynamic
process. The human preference that reflected this aspect corresponded exactly
to the progression of knowledge assimilation hidden deep in our cognitive
process.
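The formulation described in the abstract (a goal-based MDP over the explanation parts communicated so far, with a linear reward recovered by inverse reinforcement learning from human-ordered explanations) can be illustrated with a toy sketch. The Python below is an assumption-laden illustration, not the authors' code: the part names, the position-part indicator features, the greedy rollout, and the perceptron-style feature-matching update are stand-ins for the paper's domain-dependent and domain-independent features and its IRL procedure.

```python
# Minimal sketch (not the authors' implementation): progressive explanation
# generation as a goal-based MDP whose states are the sets of explanation parts
# communicated so far, whose actions pick the next part, and whose linear
# reward weights are recovered from human-preferred orderings via a simple
# IRL-style feature-matching update. All names and features are hypothetical.
import numpy as np

PARTS = ["p0", "p1", "p2", "p3"]          # hypothetical explanation parts
N = len(PARTS)

def step_features(num_communicated, next_part):
    """Indicator features: which part is communicated at which position."""
    phi = np.zeros(N * N)
    phi[num_communicated * N + PARTS.index(next_part)] = 1.0
    return phi

def ordering_features(ordering):
    """Feature counts accumulated along a full ordering of the parts."""
    return sum(step_features(i, p) for i, p in enumerate(ordering))

def greedy_ordering(theta):
    """Roll out the greedy policy for the current linear reward weights."""
    chosen, remaining = [], list(PARTS)
    while remaining:
        best = max(remaining, key=lambda p: theta @ step_features(len(chosen), p))
        chosen.append(best)
        remaining.remove(best)
    return chosen

def irl_fit(demonstrations, iters=100, lr=0.1):
    """Push the policy's feature counts toward the demonstrations' (IRL sketch)."""
    theta = np.zeros(N * N)
    demo_f = np.mean([ordering_features(d) for d in demonstrations], axis=0)
    for _ in range(iters):
        theta += lr * (demo_f - ordering_features(greedy_ordering(theta)))
    return theta

if __name__ == "__main__":
    # Pretend human subjects preferred communicating p2 first, then p0, p1, p3.
    demos = [["p2", "p0", "p1", "p3"]] * 5
    theta = irl_fit(demos)
    print("generated progressive explanation:", greedy_ordering(theta))
```

Running the sketch reproduces the demonstrated ordering, which is what any reward that makes the human-preferred progression optimal would yield; the paper's contribution lies in the feature design and the human subject studies, not in this particular update rule.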
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - How Well Do Feature-Additive Explainers Explain Feature-Additive
Predictors? [12.993027779814478]
We ask the question: can popular feature-additive explainers (e.g., LIME, SHAP, SHAPR, MAPLE, and PDP) explain feature-additive predictors?
Herein, we evaluate such explainers on ground truth that is analytically derived from the additive structure of a model.
Our results suggest that all explainers eventually fail to correctly attribute the importance of features, especially when a decision-making process involves feature interactions.
arXiv Detail & Related papers (2023-10-27T21:16:28Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Causal Explanations and XAI [8.909115457491522]
An important goal of Explainable Artificial Intelligence (XAI) is to compensate for mismatches by offering explanations.
I take a step further by formally defining the causal notions of sufficient explanations and counterfactual explanations.
I also touch upon the significance of this work for fairness in AI by showing how actual causation can be used to improve the idea of path-specific counterfactual fairness.
arXiv Detail & Related papers (2022-01-31T12:32:10Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often mis-interpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - Tell me why! -- Explanations support learning of relational and causal
structure [24.434551113103105]
Explanations play a considerable role in human learning, especially in areas that remain major challenges for AI.
We show that reinforcement learning agents might likewise benefit from explanations.
Our results suggest that learning from explanations is a powerful principle that could offer a promising path towards training more robust and general machine learning systems.
arXiv Detail & Related papers (2021-12-07T15:09:06Z) - Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z) - This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better outcomes regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Are We On The Same Page? Hierarchical Explanation Generation for
Planning Tasks in Human-Robot Teaming using Reinforcement Learning [0.0]
We argue that agent-generated explanations should be abstracted to align with the level of detail the human teammate desires, in order to manage the recipient's cognitive load.
We show that hierarchical explanations achieved better task performance and behavior interpretability while reducing cognitive load.
arXiv Detail & Related papers (2020-12-22T02:14:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.