Explainable AI for System Failures: Generating Explanations that Improve Human Assistance in Fault Recovery
- URL: http://arxiv.org/abs/2011.09407v2
- Date: Thu, 19 Nov 2020 13:35:38 GMT
- Title: Explainable AI for System Failures: Generating Explanations that Improve Human Assistance in Fault Recovery
- Authors: Devleena Das, Siddhartha Banerjee, Sonia Chernova
- Abstract summary: We develop automated, natural language explanations for failures encountered during an AI agent's plan execution.
These explanations are developed with a focus on helping non-expert users understand different points of failure.
We extend an existing sequence-to-sequence methodology to automatically generate our context-based explanations.
- Score: 15.359877013989228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing capabilities of intelligent systems, the integration of
artificial intelligence (AI) and robots in everyday life is increasing.
However, when interacting in such complex human environments, the failure of
intelligent systems, such as robots, can be inevitable, requiring recovery
assistance from users. In this work, we develop automated, natural language
explanations for failures encountered during an AI agent's plan execution.
These explanations are developed with a focus on helping non-expert users
understand different points of failure to better provide recovery assistance.
Specifically, we introduce a context-based information type for explanations
that can both help non-expert users understand the underlying cause of a system
failure, and select proper failure recoveries. Additionally, we extend an
existing sequence-to-sequence methodology to automatically generate our
context-based explanations. By doing so, we are able to develop a model that
generalizes context-based explanations across both different failure types and
failure scenarios.
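The abstract's notion of a "context-based" explanation (one that states the underlying cause of a failure and points toward a recovery) can be illustrated with a minimal sketch. The function and its arguments below are hypothetical and only illustrate the output format of such an explanation, not the paper's learned sequence-to-sequence model, which generates this text automatically from the execution context:

```python
# Hypothetical sketch: shows the *shape* of a context-based failure
# explanation (cause + recovery hint) aimed at non-expert users.
# The paper's actual model learns to generate such text with an
# encoder-decoder; this template only illustrates the target format.

def context_explanation(action: str, cause: str, recovery_hint: str) -> str:
    """Compose a non-expert-friendly explanation for a plan-execution failure."""
    return (f"The robot could not {action} because {cause}. "
            f"To help, {recovery_hint}.")

print(context_explanation(
    "pick up the cup",
    "the cup is out of reach",
    "move the cup closer to the robot"))
```

A learned model replaces the fixed template with generated text, which is what lets it generalize across failure types and scenarios rather than relying on hand-written rules.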
Related papers
- Automated Process Planning Based on a Semantic Capability Model and SMT [50.76251195257306]
In research of manufacturing systems and autonomous robots, the term capability is used for a machine-interpretable specification of a system function.
We present an approach that combines these two topics: starting from a semantic capability model, an AI planning problem is automatically generated.
arXiv Detail & Related papers (2023-12-14T10:37:34Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction [28.015693808520496]
REFLECT is a framework which queries Large Language Models for failure reasoning based on a hierarchical summary of robot past experiences.
We show that REFLECT is able to generate informative failure explanations that assist successful correction planning.
arXiv Detail & Related papers (2023-06-27T18:03:15Z) - Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios [1.671353192305391]
We make use of human-like explanations built from the probability of success in completing the goal that an autonomous robot exhibits after performing an action.
These explanations are intended to be understood by people who have little or no experience with artificial intelligence methods.
arXiv Detail & Related papers (2022-07-07T10:40:24Z) - Semantic-Based Explainable AI: Leveraging Semantic Scene Graphs and Pairwise Ranking to Explain Robot Failures [18.80051800388596]
We introduce a more generalizable semantic explanation framework.
Our framework autonomously captures the semantic information in a scene to produce semantically descriptive explanations.
Our results show that these semantically descriptive explanations significantly improve everyday users' ability to both identify failures and provide assistance for recovery.
arXiv Detail & Related papers (2021-08-08T02:44:23Z) - Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery [19.56670862587773]
We introduce a new type of explanation that conveys to non-experts the cause of an unexpected failure during an agent's plan execution.
We investigate how such explanations can be autonomously generated, extending an existing encoder-decoder model.
arXiv Detail & Related papers (2021-01-05T16:16:39Z) - Foundations of Explainable Knowledge-Enabled Systems [3.7250420821969827]
We present a historical overview of explainable artificial intelligence systems.
We focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains.
We propose new definitions for explanations and explainable knowledge-enabled systems.
arXiv Detail & Related papers (2020-03-17T04:18:48Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.