Explainable AI for Robot Failures: Generating Explanations that Improve
User Assistance in Fault Recovery
- URL: http://arxiv.org/abs/2101.01625v1
- Date: Tue, 5 Jan 2021 16:16:39 GMT
- Title: Explainable AI for Robot Failures: Generating Explanations that Improve
User Assistance in Fault Recovery
- Authors: Devleena Das, Siddhartha Banerjee, Sonia Chernova
- Abstract summary: We introduce a new type of explanation that conveys the cause of an unexpected failure during an agent's plan execution to non-experts.
We investigate how such explanations can be autonomously generated, extending an existing encoder-decoder model.
- Score: 19.56670862587773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing capabilities of intelligent systems, the integration of
robots in our everyday life is increasing. However, when interacting in such
complex human environments, the occasional failure of robotic systems is
inevitable. The field of explainable AI has sought to make complex
decision-making systems more interpretable, but most existing techniques target
domain experts. In contrast, in many failure cases, robots will require
recovery assistance from non-expert users. In this work, we introduce a new
type of explanation that conveys the cause of an unexpected failure during an
agent's plan execution to non-experts. In order for error explanations to be
meaningful, we investigate what types of information within a set of
hand-scripted explanations are most helpful to non-experts for failure and
solution identification. Additionally, we investigate how such explanations can
be autonomously generated, extending an existing encoder-decoder model, and
generalized across environments. We investigate such questions in the context
of a robot performing a pick-and-place manipulation task in the home
environment. Our results show that explanations capturing the context of a
failure and the history of past actions are the most effective for failure and
solution identification among non-experts. Furthermore, through a second user
evaluation, we verify that our model-generated explanations can generalize to
an unseen office environment, and are just as effective as the hand-scripted
explanations.
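
To make the abstract's mention of "extending an existing encoder-decoder model" concrete, the sketch below shows the general shape of a sequence-to-sequence network that maps a symbolic failure context (past actions plus the failed state) to a natural-language explanation. This is a minimal illustrative sketch assuming a PyTorch GRU implementation; the class name, vocabulary sizes, and hidden dimensions are hypothetical and not taken from the paper.

```python
# Illustrative encoder-decoder sketch for generating failure explanations.
# All names and sizes are assumptions for demonstration, not the authors' model.
import torch
import torch.nn as nn

class ExplanationSeq2Seq(nn.Module):
    """Encodes a symbolic failure context (action history + failed state)
    and decodes a natural-language explanation token by token."""

    def __init__(self, ctx_vocab, text_vocab, hidden=128):
        super().__init__()
        self.ctx_embed = nn.Embedding(ctx_vocab, hidden)
        self.text_embed = nn.Embedding(text_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, text_vocab)

    def forward(self, ctx_tokens, expl_tokens):
        # Encode the failure context into a single hidden state.
        _, h = self.encoder(self.ctx_embed(ctx_tokens))
        # Teacher-forced decoding of the explanation during training.
        dec_out, _ = self.decoder(self.text_embed(expl_tokens), h)
        return self.out(dec_out)  # logits over the explanation vocabulary

# Toy usage: a batch of 2 contexts (length 6) and target explanations (length 8).
model = ExplanationSeq2Seq(ctx_vocab=50, text_vocab=200)
ctx = torch.randint(0, 50, (2, 6))
expl = torch.randint(0, 200, (2, 8))
logits = model(ctx, expl[:, :-1])  # predict the next explanation token
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 200), expl[:, 1:].reshape(-1))
loss.backward()
```

Teacher forcing is used here for training; at inference time the decoder would instead feed back its own predicted tokens until an end-of-sequence symbol is produced.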
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z) - Explaining Explaining [0.882727051273924]
Explanation is key to people having confidence in high-stakes AI systems.
Machine-learning-based systems cannot explain their decisions because they are usually black boxes.
We describe a hybrid approach to developing cognitive agents.
arXiv Detail & Related papers (2024-09-26T16:55:44Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic
Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Evaluating Human-like Explanations for Robot Actions in Reinforcement
Learning Scenarios [1.671353192305391]
We make use of human-like explanations built from the probability of completing the goal that an autonomous robot has after performing an action.
These explanations are intended to be understood by people who have no or very little experience with artificial intelligence methods.
arXiv Detail & Related papers (2022-07-07T10:40:24Z) - Semantic-Based Explainable AI: Leveraging Semantic Scene Graphs and
Pairwise Ranking to Explain Robot Failures [18.80051800388596]
We introduce a more generalizable semantic explanation framework.
Our framework autonomously captures the semantic information in a scene to produce semantically descriptive explanations.
Our results show that these semantically descriptive explanations significantly improve everyday users' ability to both identify failures and provide assistance for recovery.
arXiv Detail & Related papers (2021-08-08T02:44:23Z) - Teaching the Machine to Explain Itself using Domain Knowledge [4.462334751640166]
Non-technical humans-in-the-loop struggle to comprehend the rationale behind model predictions.
We present JOEL, a neural network-based framework to jointly learn a decision-making task and associated explanations.
We collect domain feedback from a pool of certified experts and use it to ameliorate the model (human teaching).
arXiv Detail & Related papers (2020-11-27T18:46:34Z) - Explainable AI for System Failures: Generating Explanations that Improve
Human Assistance in Fault Recovery [15.359877013989228]
We develop automated, natural-language explanations for failures encountered during an AI agent's plan execution.
These explanations are developed with a focus on helping non-expert users understand different points of failure.
We extend an existing sequence-to-sequence methodology to automatically generate our context-based explanations.
arXiv Detail & Related papers (2020-11-18T17:08:50Z) - Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)belief, a core socio-cognitive ability, would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs from all robots across multiple views into a joint parse graph, which affords more effective reasoning and inference capability and overcomes errors originating from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)