Are We On The Same Page? Hierarchical Explanation Generation for
Planning Tasks in Human-Robot Teaming using Reinforcement Learning
- URL: http://arxiv.org/abs/2012.11792v2
- Date: Fri, 26 Feb 2021 03:42:47 GMT
- Title: Are We On The Same Page? Hierarchical Explanation Generation for
Planning Tasks in Human-Robot Teaming using Reinforcement Learning
- Authors: Mehrdad Zakershahrak and Samira Ghodratnama
- Abstract summary: We argue that agent-generated explanations should be abstracted to match the level of detail the human teammate desires, so as to keep the recipient's cognitive load manageable.
We show that hierarchical explanations achieve better task performance and behavior interpretability while reducing cognitive load.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing explanations is considered an imperative ability for an AI agent in
a human-robot teaming framework. The right explanation provides the rationale
behind an AI agent's decision-making. However, to keep the cognitive demand of
comprehending the provided explanations manageable for the human teammate,
prior works have focused on providing explanations in a specific order or on
intertwining explanation generation with plan execution. Moreover, these
approaches do not consider the degree of detail that should be shared through
the explanations. In this work, we argue that agent-generated explanations,
especially complex ones, should be abstracted to match the level of detail the
human teammate desires, thereby managing the recipient's cognitive load.
Learning such a hierarchical explanation model is a challenging task.
Moreover, the agent needs to follow a consistent high-level policy so that the
learned teammate preferences transfer to a new scenario even when the
lower-level detailed plans differ. Our evaluation confirmed that the process of
understanding an explanation, especially a complex and detailed explanation, is
hierarchical. The human preferences that reflected this aspect corresponded
exactly to creating and employing abstraction for knowledge assimilation deep
in our cognitive process. We showed that hierarchical explanations achieved
better task performance and behavior interpretability while reducing cognitive
load. These results shed light on designing explainable agents that use
reinforcement learning and planning across various domains.
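To make the idea concrete, here is a minimal sketch of learning a high-level abstraction policy with reinforcement learning. The simulated teammate, the three abstraction levels, and the reward shaping are illustrative assumptions, not the paper's implementation:

    import random
    from collections import defaultdict

    # High-level policy: pick an abstraction level for the next explanation.
    # Reward trades off the teammate's benefit against cognitive load.
    LEVELS = ["coarse", "medium", "detailed"]

    def simulated_teammate(level, expertise):
        """Toy preference model: experts gain little from detail, novices need it."""
        benefit = {"coarse": expertise, "medium": 0.7,
                   "detailed": 1.0 - expertise}[level]
        load = {"coarse": 0.1, "medium": 0.3, "detailed": 0.4}[level]
        return benefit - load  # scalar reward

    def train_abstraction_policy(expertise, episodes=2000, eps=0.1, lr=0.1):
        q = defaultdict(float)  # bandit-style value per abstraction level
        for _ in range(episodes):
            level = (random.choice(LEVELS) if random.random() < eps
                     else max(LEVELS, key=lambda a: q[a]))
            r = simulated_teammate(level, expertise)
            q[level] += lr * (r - q[level])  # incremental Q update
        return max(LEVELS, key=lambda a: q[a])

    print(train_abstraction_policy(expertise=0.9))  # expert  -> "coarse"
    print(train_abstraction_policy(expertise=0.1))  # novice -> "detailed"

Because the learned policy lives at the level of abstraction choices rather than concrete plan steps, it can be reused in a new scenario whose detailed plans differ, which is the transfer property the abstract describes.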
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- A Closer Look at Reward Decomposition for High-Level Robotic Explanations [18.019811754800767]
We propose an explainable Q-Map learning framework that combines reward decomposition with abstracted action spaces.
We demonstrate the effectiveness of our framework through quantitative and qualitative analysis of two robotic scenarios.
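As a note on the underlying technique, a minimal sketch of reward decomposition looks as follows; the reward components and the toy two-action task are assumptions for illustration, not the paper's robotic setup:

    import random
    from collections import defaultdict

    # One Q-estimate per (reward component, action); the total Q is their sum,
    # and the per-component values become the explanation of the chosen action.
    COMPONENTS = {"reach_goal":     {"left": 0.2, "right": 0.8},
                  "avoid_obstacle": {"left": 0.5, "right": -0.4},
                  "save_energy":    {"left": 0.1, "right": 0.0}}
    ACTIONS = ["left", "right"]

    q = {c: defaultdict(float) for c in COMPONENTS}
    for _ in range(5000):
        a = random.choice(ACTIONS)                  # uniform exploration
        for c, rewards in COMPONENTS.items():
            r = rewards[a] + random.gauss(0, 0.05)  # noisy component reward
            q[c][a] += 0.05 * (r - q[c][a])         # per-component update

    best = max(ACTIONS, key=lambda a: sum(q[c][a] for c in COMPONENTS))
    print(f"chosen action: {best}")
    for c in COMPONENTS:                            # the explanation itself
        print(f"  {c:15s} contributes {q[c][best]:+.2f}")

The point of the decomposition is that "left" can now be justified component by component (it sacrifices goal progress but avoids the obstacle) instead of by a single opaque value.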
arXiv Detail & Related papers (2023-04-25T16:01:42Z)
- GANterfactual-RL: Understanding Reinforcement Learning Agents' Strategies through Visual Counterfactual Explanations [0.7874708385247353]
We propose a novel but simple method to generate counterfactual explanations for RL agents.
Our method is fully model-agnostic and we demonstrate that it outperforms the only previous method in several computational metrics.
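Their method is GAN-based and visual; purely to illustrate the underlying counterfactual question ("how would the state have to change for the agent to act differently?"), here is a brute-force sketch over a hypothetical two-feature state and policy:

    import itertools

    def policy(state):
        speed, distance = state                      # hypothetical features
        return "brake" if distance < speed * 2.0 else "cruise"

    def counterfactual(state, step=0.5, max_delta=5.0):
        """Search for the smallest state change that flips the action."""
        base = policy(state)
        deltas = [i * step for i in range(int(max_delta / step) + 1)]
        candidates = []
        for ds, dd in itertools.product(deltas, repeat=2):
            for sgn_s, sgn_d in itertools.product((-1, 1), repeat=2):
                s = (state[0] + sgn_s * ds, state[1] + sgn_d * dd)
                if policy(s) != base:
                    candidates.append((ds + dd, s))  # L1-style cost
        return min(candidates)[1] if candidates else None

    state = (10.0, 15.0)          # speed 10, obstacle 15 units ahead
    print(policy(state))          # -> "brake"
    print(counterfactual(state))  # -> (7.5, 15.0): slower, so it would cruise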
arXiv Detail & Related papers (2023-02-24T15:29:43Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation, aimed at better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Tell me why! -- Explanations support learning of relational and causal structure [24.434551113103105]
Explanations play a considerable role in human learning, especially in areas that remain major challenges for AI.
We show that reinforcement learning agents might likewise benefit from explanations.
Our results suggest that learning from explanations is a powerful principle that could offer a promising path towards training more robust and general machine learning systems.
arXiv Detail & Related papers (2021-12-07T15:09:06Z)
- What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes [30.056732656973637]
We present a novel form of explanation for Reinforcement Learning, based around the notion of intended outcome.
These explanations describe the outcome an agent is trying to achieve by its actions.
We provide a simple proof that general methods for post-hoc explanations of this nature are impossible in traditional reinforcement learning.
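The notion can be illustrated with a short rollout-based sketch (a toy deterministic chain world and greedy policy are assumed here; the paper's formal treatment, including the impossibility result, is richer):

    GOAL = 5

    def step(state, action):
        return max(0, min(GOAL, state + action))   # deterministic chain world

    def policy(state):
        return 1 if state < GOAL else 0            # greedy: walk to the goal

    def intended_outcome(state, horizon=10):
        """Roll the agent's own policy forward to expose what it aims at."""
        trajectory = [state]
        for _ in range(horizon):
            state = step(state, policy(state))
            trajectory.append(state)
            if state == GOAL:
                break
        return trajectory

    print(intended_outcome(0))   # [0, 1, 2, 3, 4, 5]: "I intend to reach 5"

This works only because the toy world is deterministic and the policy is known; the paper's impossibility proof concerns recovering such intentions post hoc in the general setting.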
arXiv Detail & Related papers (2020-11-10T12:05:08Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming [11.35869940310993]
We aim to investigate the effects, during explanation generation, of breaking an explanation into multiple parts that are communicated sequentially.
We first evaluate our approach in a scavenger-hunt domain to demonstrate that it effectively captures human preferences.
Results confirmed our hypothesis that understanding an explanation is a dynamic process.
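As an illustration of learning a presentation order from feedback, here is a small stochastic-search sketch; the explanation parts and the simulated human preference are made up, and the paper's actual approach (reinforcement learning over human subject data) is more involved:

    import random

    PARTS = ["goal", "constraints", "plan_steps", "contingencies"]
    PREFERRED = {p: i for i, p in enumerate(PARTS)}   # hidden human preference

    def feedback(order):
        """Reward: number of adjacent pairs shown in the preferred order."""
        return sum(PREFERRED[a] < PREFERRED[b] for a, b in zip(order, order[1:]))

    def learn_order(episodes=3000, noise=0.2):
        scores = {p: 0.0 for p in PARTS}   # higher score -> present earlier
        best, best_r = list(PARTS), -1
        for _ in range(episodes):
            order = sorted(PARTS, reverse=True,
                           key=lambda p: scores[p] + random.gauss(0, noise))
            r = feedback(order)
            for rank, p in enumerate(order):       # reinforce current ranking
                scores[p] += 0.01 * r * (len(PARTS) - rank)
            if r > best_r:
                best, best_r = order, r
        return best

    print(learn_order())   # typically the preferred presentation order

Returning the best order found guards against the crude score update locking in a suboptimal ranking early on.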
arXiv Detail & Related papers (2020-04-16T00:17:02Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)