Introspection-based Explainable Reinforcement Learning in Episodic and
Non-episodic Scenarios
- URL: http://arxiv.org/abs/2211.12930v1
- Date: Wed, 23 Nov 2022 13:05:52 GMT
- Title: Introspection-based Explainable Reinforcement Learning in Episodic and
Non-episodic Scenarios
- Authors: Niclas Schroeter, Francisco Cruz, Stefan Wermter
- Abstract summary: An introspection-based approach can be used in conjunction with reinforcement learning agents to provide probabilities of success. The same approach can also be used to generate explanations for the actions taken in a non-episodic robotics environment.
- Score: 14.863872352905629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing presence of robotic systems and human-robot environments
in today's society, understanding the reasoning behind actions taken by a robot
is becoming more important. To increase this understanding, users are provided
with explanations as to why a specific action was taken. Among other effects,
these explanations improve the trust of users in their robotic partners. One
option for creating these explanations is an introspection-based approach which
can be used in conjunction with reinforcement learning agents to provide
probabilities of success. These can in turn be used to reason about the actions
taken by the agent in a human-understandable fashion. In this work, this
introspection-based approach is developed and evaluated further on the basis of
an episodic and a non-episodic robotics simulation task. Furthermore, an
additional normalization step to the Q-values is proposed, which enables the
usage of the introspection-based approach on negative and comparatively small
Q-values. Results obtained show the viability of introspection for episodic
robotics tasks and, additionally, that the introspection-based approach can be
used to generate explanations for the actions taken in a non-episodic robotics
environment as well.
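
The mechanism sketched in the abstract can be illustrated with a short, hypothetical code example. The introspection-based approach maps an agent's Q-values to estimated probabilities of success, and this paper adds a normalization step so that negative and comparatively small Q-values can also be handled. The snippet below is a minimal sketch, not the authors' implementation: the function name estimate_success_probability, the min-max rescaling, and the logarithmic mapping (0.5 * log10(Q) + 1, clipped to [0, 1], as used in earlier introspection work) are assumptions made for illustration; the exact normalization and transformation in the paper may differ.

```python
import numpy as np

def estimate_success_probability(q_value, q_min, q_max, eps=1e-6):
    """Sketch of an introspection-style success-probability estimate.

    The min-max normalization and the logarithmic mapping below are
    illustrative assumptions; the paper's exact formulation may differ.
    """
    # Rescale the Q-value into (0, 1] so that negative or very small
    # Q-values can still be passed through the logarithmic transformation.
    q_norm = (q_value - q_min) / max(q_max - q_min, eps)
    q_norm = float(np.clip(q_norm, eps, 1.0))

    # Map the normalized Q-value to an estimated probability of success.
    return float(np.clip(0.5 * np.log10(q_norm) + 1.0, 0.0, 1.0))

# Example: generate a human-readable explanation for each action in one
# state of a tabular agent (the action names and Q-values are hypothetical).
q_values = {"reach": 0.42, "grasp": -0.10, "retract": 0.05}
q_min, q_max = min(q_values.values()), max(q_values.values())
for action, q in q_values.items():
    p = estimate_success_probability(q, q_min, q_max)
    print(f"Action '{action}': estimated probability of success {100 * p:.0f}%")
```

Such estimates can then be verbalized as explanations of the form "I chose this action because it has the highest estimated probability of reaching the goal." Note that taking the normalization bounds from a single state, as done here for brevity, always maps the greedy action to a probability of 1; in practice the bounds would more plausibly be taken over the whole Q-table so that probabilities remain comparable across states.
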
Related papers
- A Multi-Modal Explainability Approach for Human-Aware Robots in Multi-Party Conversation [39.87346821309096]
We present an addressee estimation model with improved performance in comparison with the previous SOTA.
We also propose several ways to incorporate explainability and transparency in the aforementioned architecture.
arXiv Detail & Related papers (2024-05-20T13:09:32Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios [1.671353192305391]
We make use of human-like explanations built from the probability of success to complete the goal that an autonomous robot shows after performing an action.
These explanations are intended to be understood by people who have no or very little experience with artificial intelligence methods.
arXiv Detail & Related papers (2022-07-07T10:40:24Z)
- Causal Robot Communication Inspired by Observational Learning Insights [4.545201807506083]
We discuss the relevance of behavior learning insights for robot intent communication.
We present the first application of these insights for a robot to efficiently communicate its intent by selectively explaining the causal actions in an action sequence.
arXiv Detail & Related papers (2022-03-17T06:43:10Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the approach that explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Integrating Intrinsic and Extrinsic Explainability: The Relevance of Understanding Neural Networks for Human-Robot Interaction [19.844084722919764]
Explainable artificial intelligence (XAI) can help foster trust in and acceptance of intelligent and autonomous systems.
NICO, an open-source humanoid robot platform, is introduced, and it is shown how the interaction of intrinsic explanations by the robot itself and extrinsic explanations provided by the environment enables efficient robotic behavior.
arXiv Detail & Related papers (2020-10-09T14:28:48Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.