Evaluating Human-like Explanations for Robot Actions in Reinforcement
Learning Scenarios
- URL: http://arxiv.org/abs/2207.03214v1
- Date: Thu, 7 Jul 2022 10:40:24 GMT
- Title: Evaluating Human-like Explanations for Robot Actions in Reinforcement
Learning Scenarios
- Authors: Francisco Cruz, Charlotte Young, Richard Dazeley, Peter Vamplew
- Abstract summary: We make use of human-like explanations built from the probability that an autonomous robot will successfully complete its goal after performing an action.
These explanations are intended to be understood by people who have little or no experience with artificial intelligence methods.
- Score: 1.671353192305391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable artificial intelligence is a research field that tries to provide
more transparency for autonomous intelligent systems. Explainability has been
used, particularly in reinforcement learning and robotic scenarios, to better
understand the robot's decision-making process. Previous work, however, has
largely focused on providing technical explanations that are better understood
by AI practitioners than by non-expert end-users. In this work, we make use of
human-like explanations built from the probability that an autonomous robot
will successfully complete its goal after performing an action. These
explanations are intended to be understood by people who have little or no
experience with artificial intelligence methods. This paper presents a user
trial to study whether explanations that focus on the probability an action has
of succeeding in its goal constitute a suitable explanation for
non-expert end-users. The results obtained show that non-expert participants
rate robot explanations that focus on the probability of success higher and
with less variance than technical explanations generated from Q-values, and
also favor counterfactual explanations over standalone explanations.
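To make the contrast between the two explanation styles concrete, the sketch below (not taken from the paper) turns a toy set of Q-values into a Q-value-based explanation, a probability-of-success explanation, and a counterfactual variant. The min-max normalisation used as a stand-in for the probability of success, the phrasing templates, and all numbers are illustrative assumptions introduced here, not the authors' method.

```python
# Hypothetical sketch: contrasting a technical (Q-value) explanation with a
# human-like explanation based on an estimated probability of success.
# The normalisation below is an illustrative assumption, not the paper's method.

def success_probability(q_value: float, q_min: float, q_max: float) -> float:
    """Map a Q-value onto [0, 1] as a stand-in estimate of success probability."""
    if q_max == q_min:
        return 0.0
    return max(0.0, min(1.0, (q_value - q_min) / (q_max - q_min)))

def technical_explanation(action: str, q_value: float) -> str:
    """Explanation aimed at AI practitioners: reports the raw Q-value."""
    return f"I chose '{action}' because its Q-value is {q_value:.3f}."

def humanlike_explanation(action: str, p_success: float) -> str:
    """Explanation aimed at non-experts: reports the chance of reaching the goal."""
    return (f"I chose '{action}' because it gives me a "
            f"{p_success:.0%} chance of completing the goal.")

def counterfactual_explanation(action: str, p_success: float,
                               alt_action: str, p_alt: float) -> str:
    """Counterfactual variant: contrasts the chosen action with an alternative."""
    return (f"I chose '{action}' ({p_success:.0%} chance of success) "
            f"instead of '{alt_action}', which would only give me a "
            f"{p_alt:.0%} chance.")

if __name__ == "__main__":
    # Toy Q-values for two actions in one state (illustrative numbers only).
    q_values = {"move forward": 0.82, "turn left": 0.31}
    q_min, q_max = 0.0, 1.0

    probs = {a: success_probability(q, q_min, q_max) for a, q in q_values.items()}
    best, alt = "move forward", "turn left"

    print(technical_explanation(best, q_values[best]))
    print(humanlike_explanation(best, probs[best]))
    print(counterfactual_explanation(best, probs[best], alt, probs[alt]))
```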
Related papers
- Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z) - Explaining Explaining [0.882727051273924]
Explanation is key to people having confidence in high-stakes AI systems.
Machine-learning-based systems cannot explain their decisions because they are usually black boxes.
We describe a hybrid approach to developing cognitive agents.
arXiv Detail & Related papers (2024-09-26T16:55:44Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Introspection-based Explainable Reinforcement Learning in Episodic and
Non-episodic Scenarios [14.863872352905629]
The introspection-based approach can be used in conjunction with reinforcement learning agents to provide probabilities of success.
It can also be used to generate explanations for the actions taken in a non-episodic robotics environment.
arXiv Detail & Related papers (2022-11-23T13:05:52Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Explainable AI for Robot Failures: Generating Explanations that Improve
User Assistance in Fault Recovery [19.56670862587773]
We introduce a new type of explanation that conveys the cause of an unexpected failure during an agent's plan execution to non-experts.
We investigate how such explanations can be autonomously generated, extending an existing encoder-decoder model.
arXiv Detail & Related papers (2021-01-05T16:16:39Z) - Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)belief, a core socio-cognitive ability, would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs (pg) from all robots across multiple views into a joint pg, which affords a more effective reasoning capability and overcomes errors originating from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z) - Explainable Goal-Driven Agents and Robots -- A Comprehensive Review [13.94373363822037]
The paper reviews approaches to explainable goal-driven intelligent agents and robots.
It focuses on techniques for explaining and communicating agents' perceptual functions and cognitive reasoning.
It suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.
arXiv Detail & Related papers (2020-04-21T01:41:20Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - Deceptive AI Explanations: Creation and Detection [3.197020142231916]
We investigate how AI models can be used to create and detect deceptive explanations.
As an empirical evaluation, we focus on text classification and alter the explanations generated by GradCAM.
We evaluate the effect of deceptive explanations on users in an experiment with 200 participants.
arXiv Detail & Related papers (2020-01-21T16:41:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.