Explainable Goal Recognition: A Framework Based on Weight of Evidence
- URL: http://arxiv.org/abs/2303.05622v1
- Date: Thu, 9 Mar 2023 23:27:08 GMT
- Title: Explainable Goal Recognition: A Framework Based on Weight of Evidence
- Authors: Abeer Alshehri, Tim Miller, Mor Vered
- Abstract summary: We introduce and evaluate an eXplainable Goal Recognition (XGR) model that uses the Weight of Evidence (WoE) framework to explain goal recognition problems.
Our model provides human-centered explanations that answer "why?" and "why not?" questions.
Using a human behavioral study to obtain the ground truth from human annotators, we show that the XGR model can successfully generate human-like explanations.
- Score: 9.356870107137093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce and evaluate an eXplainable Goal Recognition (XGR) model that
uses the Weight of Evidence (WoE) framework to explain goal recognition
problems. Our model provides human-centered explanations that answer "why?" and
"why not?" questions. We computationally evaluate the performance of our system
over eight different domains. Using a human behavioral study to obtain the
ground truth from human annotators, we further show that the XGR model can
successfully generate human-like explanations. We then report on a study with
60 participants who observe agents playing the Sokoban game and then receive
explanations of the goal recognition output. We assess the understanding that
participants gain from these explanations through task prediction, explanation
satisfaction, and trust.
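The abstract does not spell out the WoE computation, but the classical Weight of Evidence (Good, 1950) of a goal g given observations o is WoE(g : o) = log [ P(o | g) / P(o | not g) ]: a positive value argues "why goal g?", a negative value "why not goal g?". The sketch below illustrates only this general definition; the likelihood values and helper functions are hypothetical, and the XGR model's actual formulation may differ.

```python
# Illustrative sketch only: Weight of Evidence applied to goal recognition.
# The likelihoods below are hypothetical placeholders, not the paper's values.
import math

def weight_of_evidence(p_obs_given_goal: float, p_obs_given_not_goal: float) -> float:
    """WoE(g : o) = log[ P(o | g) / P(o | not g) ], in nats."""
    return math.log(p_obs_given_goal) - math.log(p_obs_given_not_goal)

# Hypothetical likelihoods of the observed moves under two candidate goals.
likelihoods = {"goal_A": 0.60, "goal_B": 0.15}

def p_obs_given_not(goal: str) -> float:
    # P(o | not g): average over the other goals, assuming a uniform prior on them.
    others = [p for g, p in likelihoods.items() if g != goal]
    return sum(others) / len(others)

for goal, p in likelihoods.items():
    woe = weight_of_evidence(p, p_obs_given_not(goal))
    print(f"{goal}: WoE = {woe:+.2f}")
# Positive WoE for goal_A supports "why goal_A?"; negative WoE for goal_B
# supports "why not goal_B?".
```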
Related papers
- Towards Explainable Goal Recognition Using Weight of Evidence (WoE): A Human-Centered Approach [5.174712539403376]
Goal recognition (GR) involves inferring an agent's unobserved goal from a sequence of observations.
Traditionally, GR has been addressed using 'inference to the best explanation' or abduction.
We introduce and evaluate an explainable model for GR agents, grounded in the theoretical framework and cognitive processes underlying human behavior explanation.
arXiv Detail & Related papers (2024-09-18T03:30:01Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings urge caution about the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z) - Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
However, empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - A Human-Centered Interpretability Framework Based on Weight of Evidence [26.94750208505883]
We take a human-centered approach to interpretable machine learning.
We propose a list of design principles for machine-generated explanations meaningful to humans.
We show that this method can be adapted to handle high-dimensional, multi-class settings.
arXiv Detail & Related papers (2021-04-27T16:13:35Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming [11.35869940310993]
We investigate the effects of breaking an explanation into multiple parts that are communicated sequentially.
We first evaluate our approach on a scavenger-hunt domain to demonstrate that it effectively captures humans' preferences.
Results confirmed our hypothesis that understanding an explanation is a dynamic process.
arXiv Detail & Related papers (2020-04-16T00:17:02Z)