Explainable AI for System Failures: Generating Explanations that Improve
Human Assistance in Fault Recovery
- URL: http://arxiv.org/abs/2011.09407v2
- Date: Thu, 19 Nov 2020 13:35:38 GMT
- Title: Explainable AI for System Failures: Generating Explanations that Improve
Human Assistance in Fault Recovery
- Authors: Devleena Das, Siddhartha Banerjee, Sonia Chernova
- Abstract summary: We develop automated, natural language explanations for failures encountered during an AI agent's plan execution.
These explanations are developed with a focus on helping non-expert users understand different points of failure.
We extend an existing sequence-to-sequence methodology to automatically generate our context-based explanations.
- Score: 15.359877013989228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing capabilities of intelligent systems, the integration of
artificial intelligence (AI) and robots in everyday life is increasing.
However, when interacting in such complex human environments, the failure of
intelligent systems, such as robots, can be inevitable, requiring recovery
assistance from users. In this work, we develop automated, natural language
explanations for failures encountered during an AI agent's plan execution.
These explanations are developed with a focus on helping non-expert users
understand different points of failure to better provide recovery assistance.
Specifically, we introduce a context-based information type for explanations
that can both help non-expert users understand the underlying cause of a system
failure and select proper failure recoveries. Additionally, we extend an
existing sequence-to-sequence methodology to automatically generate our
context-based explanations. By doing so, we are able to develop a model that can
generalize context-based explanations over both different failure types and
failure scenarios.
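The core mechanism described above is a sequence-to-sequence model that maps a failure context to a natural-language explanation. As a rough, hypothetical sketch of that class of model (not the authors' implementation; the class name, vocabulary sizes, and input encoding are all assumptions), a minimal PyTorch encoder-decoder might look like this:

```python
# A minimal sketch, NOT the authors' code: a hypothetical GRU encoder-decoder
# that maps a tokenized failure context to explanation-token logits.
import torch
import torch.nn as nn

class ExplanationSeq2Seq(nn.Module):
    def __init__(self, src_vocab: int, tgt_vocab: int, hidden: int = 128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # Encode the failure context (e.g., action trace + violated precondition).
        _, h = self.encoder(self.src_emb(src))
        # Decode explanation tokens conditioned on the context summary
        # (teacher forcing during training).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), h)
        return self.out(dec_out)

# Toy usage with invented vocabulary sizes.
model = ExplanationSeq2Seq(src_vocab=50, tgt_vocab=80)
src = torch.randint(0, 50, (2, 10))   # batch of encoded failure contexts
tgt = torch.randint(0, 80, (2, 12))   # teacher-forced explanation tokens
logits = model(src, tgt)              # -> shape (2, 12, 80)
```

In the paper's setting, the source sequence would encode the agent's environment and failure state, and the decoded tokens would form a context-based explanation (an invented example of the general flavor: "could not pick up the cup because the grasp precondition failed").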
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that such shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Explaining Explaining [0.882727051273924]
Explanation is key to people having confidence in high-stakes AI systems.
Machine-learning-based systems can't explain because they are usually black boxes.
We describe a hybrid approach to developing cognitive agents.
arXiv Detail & Related papers (2024-09-26T16:55:44Z)
- Automated Process Planning Based on a Semantic Capability Model and SMT [50.76251195257306]
In research on manufacturing systems and autonomous robots, the term capability is used for a machine-interpretable specification of a system function.
We present an approach that combines these two topics: starting from a semantic capability model, an AI planning problem is automatically generated (a toy SMT encoding of this idea is sketched after this list).
arXiv Detail & Related papers (2023-12-14T10:37:34Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios [1.671353192305391]
We make use of human-like explanations built from the probability of successfully completing its goal that an autonomous robot exhibits after performing an action.
These explanations are intended to be understood by people who have no or very little experience with artificial intelligence methods.
arXiv Detail & Related papers (2022-07-07T10:40:24Z)
- Semantic-Based Explainable AI: Leveraging Semantic Scene Graphs and Pairwise Ranking to Explain Robot Failures [18.80051800388596]
We introduce a more generalizable semantic explanation framework.
Our framework autonomously captures the semantic information in a scene to produce semantically descriptive explanations.
Our results show that these semantically descriptive explanations significantly improve everyday users' ability to both identify failures and provide assistance for recovery.
arXiv Detail & Related papers (2021-08-08T02:44:23Z)
- Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery [19.56670862587773]
We introduce a new type of explanation that explains the cause of an unexpected failure during an agent's plan execution to non-experts.
We investigate how such explanations can be autonomously generated, extending an existing encoder-decoder model.
arXiv Detail & Related papers (2021-01-05T16:16:39Z)
- Foundations of Explainable Knowledge-Enabled Systems [3.7250420821969827]
We present a historical overview of explainable artificial intelligence systems.
We focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains.
We propose new definitions for explanations and explainable knowledge-enabled systems.
arXiv Detail & Related papers (2020-03-17T04:18:48Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
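As noted in the process-planning entry above, that paper generates an AI planning problem from a semantic capability model and solves it with SMT. As a toy illustration only (the capability name and predicates are invented for this sketch, not taken from the paper), one capability's preconditions and effect might be encoded for the Z3 solver like this:

```python
# Hypothetical sketch: encode one capability's preconditions and effect as SMT
# constraints and ask Z3 whether applying it can reach the goal. Predicate and
# capability names are invented, not taken from the paper.
from z3 import Bools, Solver, Implies, And, sat

gripper_free, part_at_station, grasp_applied, part_grasped = Bools(
    "gripper_free part_at_station grasp_applied part_grasped")

s = Solver()
s.add(Implies(grasp_applied, And(gripper_free, part_at_station)))  # preconditions
s.add(Implies(grasp_applied, part_grasped))                        # effect
s.add(part_grasped)   # goal state
s.add(grasp_applied)  # candidate plan step

if s.check() == sat:  # sat -> this capability can consistently achieve the goal
    print(s.model())
```

A real encoding would quantify over time steps and many capabilities; this sketch only shows the flavor of turning machine-interpretable capability descriptions into solver constraints.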