Explainability in reinforcement learning: perspective and position
- URL: http://arxiv.org/abs/2203.11547v1
- Date: Tue, 22 Mar 2022 09:00:13 GMT
- Title: Explainability in reinforcement learning: perspective and position
- Authors: Agneza Krajna, Mario Brcic, Tomislav Lipic and Juraj Doncevic
- Abstract summary: This paper attempts to give a systematic overview of existing methods in the explainable RL area.
It proposes a novel unified taxonomy, building and expanding on the existing ones.
- Score: 1.299941371793082
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) has been embedded into many aspects of people's
daily lives and it has become normal for people to have AI make decisions for
them. Reinforcement learning (RL) models enlarge the space of solvable
problems relative to other machine learning paradigms. Some of the most
interesting applications arise in settings with a non-differentiable expected
reward function, operation in unknown or under-defined environments, and
algorithmic discovery that surpasses the performance of any teacher, whereby
the agent learns from experimental experience through simple feedback. The
range of applications and their social impact is vast; to name a few: genomics,
game playing (chess, Go, etc.), general optimization, financial investment,
governmental policy, self-driving cars, and recommendation systems. It is
therefore essential to improve the trust and transparency of RL-based systems
through explanations. Most articles on explainability in artificial
intelligence provide methods that concern supervised learning, and very few
address explainability in the area of RL. The reasons for this are the
credit-assignment problem, delayed rewards, and the inability to assume that
data are independently and identically distributed (i.i.d.). This position
paper attempts to give a systematic overview of existing methods in the
explainable RL area and to propose a novel unified taxonomy, building on and
expanding the existing ones. The position section describes the pragmatic
aspects of how explainability can be observed. The gap between the parties
receiving and generating the explanation is especially emphasized. To reduce
the gap and achieve honesty and truthfulness of explanations, we set up three
pillars: proactivity, risk attitudes, and epistemological constraints. To this
end, we illustrate our proposal on simple variants of the shortest path
problem.
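As a purely illustrative sketch of the kind of toy setting the abstract alludes to (this is not the authors' code, and the graph and costs below are invented), value iteration on a small weighted graph recovers shortest-path costs and a greedy next-hop policy that an explanation could then contrast with the alternatives it rejected:

```python
# Hypothetical shortest-path toy problem: value iteration on a small
# directed, weighted graph (edge costs; lower total cost is better).
# Invented example, not the paper's implementation.

# Directed graph: node -> list of (neighbor, edge_cost).
GRAPH = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 2.0), ("D", 5.0)],
    "C": [("D", 1.0)],
    "D": [],  # goal node
}
GOAL = "D"

def value_iteration(graph, goal, sweeps=50):
    """Compute cost-to-go V(s) and a greedy next-hop policy."""
    v = {s: 0.0 if s == goal else float("inf") for s in graph}
    for _ in range(sweeps):
        for s, edges in graph.items():
            if s != goal and edges:
                v[s] = min(cost + v[t] for t, cost in edges)
    policy = {
        s: min(edges, key=lambda e: e[1] + v[e[0]])[0]
        for s, edges in graph.items() if s != goal and edges
    }
    return v, policy

v, policy = value_iteration(GRAPH, GOAL)
print(v)       # {'A': 4.0, 'B': 3.0, 'C': 1.0, 'D': 0.0}
print(policy)  # {'A': 'B', 'B': 'C', 'C': 'D'}
```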
Related papers
- Identifiable Causal Representation Learning: Unsupervised, Multi-View, and Multi-Environment [10.814585613336778]
Causal representation learning aims to combine the core strengths of machine learning and causality.
This thesis investigates what is possible for CRL without direct supervision, and thus contributes to its theoretical foundations.
arXiv Detail & Related papers (2024-06-19T09:14:40Z)
- Bridging the Sim-to-Real Gap from the Information Bottleneck Perspective [38.845882541261645]
We propose a novel privileged knowledge distillation method called the Historical Information Bottleneck (HIB).
HIB learns a privileged knowledge representation from historical trajectories by capturing the underlying changeable dynamic information.
Empirical experiments on both simulated and real-world tasks demonstrate that HIB yields improved generalizability compared to previous methods.
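The summary does not pin down HIB's architecture or loss; as a rough, assumption-heavy sketch of an information-bottleneck-style distillation objective in the spirit described above (the encoder layout, teacher target, and all names here are illustrative guesses, not the authors' implementation):

```python
# Illustrative information-bottleneck-style distillation sketch, loosely
# inspired by the HIB description above; NOT the authors' implementation.
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    """Encode a short history of observations into a stochastic latent z."""
    def __init__(self, obs_dim: int, hist_len: int, z_dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Flatten(), nn.Linear(obs_dim * hist_len, 64), nn.ReLU()
        )
        self.mu = nn.Linear(64, z_dim)
        self.log_std = nn.Linear(64, z_dim)

    def forward(self, hist):
        h = self.body(hist)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5.0, 2.0)
        z = mu + log_std.exp() * torch.randn_like(mu)  # reparameterization
        return z, mu, log_std

def ib_distill_loss(z, mu, log_std, z_teacher, beta=1e-3):
    """Match a privileged teacher representation while compressing the
    history through a KL penalty toward a standard normal prior."""
    distill = ((z - z_teacher) ** 2).mean()
    kl = 0.5 * (mu**2 + (2 * log_std).exp() - 2 * log_std - 1).sum(-1).mean()
    return distill + beta * kl

enc = HistoryEncoder(obs_dim=8, hist_len=4, z_dim=16)
z, mu, log_std = enc(torch.randn(32, 4, 8))                  # batch of histories
loss = ib_distill_loss(z, mu, log_std, torch.randn(32, 16))  # stand-in teacher
loss.backward()
```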
arXiv Detail & Related papers (2023-05-29T07:51:00Z)
- Redefining Counterfactual Explanations for Reinforcement Learning: Overview, Challenges and Opportunities [2.0341936392563063]
Most explanation methods for AI are focused on developers and expert users.
Counterfactual explanations offer users advice on what to change in the input so that the black-box model's output changes.
Counterfactuals are user-friendly and provide actionable advice for achieving the desired output from the AI system.
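As a generic, hypothetical illustration of that idea (the loan model, features, and candidate grid below are invented stand-ins, not from the paper), a brute-force search for the smallest input change that flips a black-box decision might look like:

```python
# Minimal, generic counterfactual search over a black-box model; the
# model and feature grid are invented stand-ins for illustration only.
import itertools

def black_box(x):
    """Stand-in classifier: approve a loan if income - 2*debt > 10."""
    return int(x["income"] - 2 * x["debt"] > 10)

def counterfactual(x, target, candidates):
    """Return the candidate change with the fewest edited features
    (ties broken by total absolute change) that yields the target output."""
    best = None
    keys = list(candidates)
    for values in itertools.product(*(candidates[k] for k in keys)):
        cf = dict(x, **dict(zip(keys, values)))
        if black_box(cf) != target:
            continue
        edits = sum(cf[k] != x[k] for k in keys)
        size = sum(abs(cf[k] - x[k]) for k in keys)
        if best is None or (edits, size) < best[0]:
            best = ((edits, size), cf)
    return best[1] if best else None

x = {"income": 20, "debt": 8}  # rejected: 20 - 16 = 4 <= 10
cands = {"income": [20, 25, 30], "debt": [8, 5, 2]}
print(counterfactual(x, target=1, candidates=cands))
# -> {'income': 20, 'debt': 2}: reducing debt alone flips the decision.
```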
arXiv Detail & Related papers (2022-10-21T09:50:53Z)
- Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning with Shapley Values [68.8204255655161]
This study proposes a novel approach to explain cooperative strategies in multiagent RL using Shapley values.
Results could have implications for non-discriminatory decision making, ethical and responsible AI-derived decisions or policy making under fairness constraints.
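The study's exact formulation is not reproduced here, but a minimal sketch of exact Shapley values over agent coalitions, with a toy lookup table standing in for evaluating the team's return when only a subset of agents acts, could look like this:

```python
# Minimal exact Shapley values over agent coalitions; the coalition
# "value" function is a toy stand-in for evaluating the team's return
# with only a subset of agents acting. Illustrative only.
from itertools import permutations

AGENTS = ["a1", "a2", "a3"]

def coalition_value(coalition):
    """Toy stand-in: team return achieved by this subset of agents."""
    table = {
        frozenset(): 0.0,
        frozenset({"a1"}): 1.0, frozenset({"a2"}): 2.0, frozenset({"a3"}): 0.0,
        frozenset({"a1", "a2"}): 4.0, frozenset({"a1", "a3"}): 2.0,
        frozenset({"a2", "a3"}): 3.0,
        frozenset({"a1", "a2", "a3"}): 6.0,
    }
    return table[frozenset(coalition)]

def shapley(agents, value):
    """Average each agent's marginal contribution over all join orders."""
    phi = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        so_far = set()
        for a in order:
            phi[a] += value(so_far | {a}) - value(so_far)
            so_far.add(a)
    return {a: v / len(orders) for a, v in phi.items()}

print(shapley(AGENTS, coalition_value))
# -> {'a1': 2.0, 'a2': 3.0, 'a3': 1.0}; sums to the grand-coalition value 6.0.
```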
arXiv Detail & Related papers (2021-10-04T10:28:57Z)
- Exploratory State Representation Learning [63.942632088208505]
We propose a new approach, called XSRL (eXploratory State Representation Learning), to solve the problems of exploration and state representation learning (SRL) in parallel.
On one hand, it jointly learns compact state representations and a state transition estimator which is used to remove unexploitable information from the representations.
On the other hand, it continuously trains an inverse model, and adds to the prediction error of this model a $k$-step learning progress bonus to form the objective of a discovery policy.
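Read loosely, that objective can be sketched as follows; this is a simplified, hypothetical rendering over a scalar error stream, not the authors' implementation:

```python
# Hypothetical sketch of a k-step learning progress bonus in the spirit
# of the XSRL summary above: reward states where the inverse model still
# errs but its error has been shrinking over the last k updates.
from collections import deque

class ProgressBonus:
    def __init__(self, k=5):
        self.k = k
        self.errors = deque(maxlen=k + 1)

    def __call__(self, inverse_model_error):
        """Discovery reward = current error + k-step learning progress."""
        self.errors.append(inverse_model_error)
        if len(self.errors) <= self.k:
            progress = 0.0
        else:
            progress = self.errors[0] - self.errors[-1]  # error drop over k steps
        return inverse_model_error + max(progress, 0.0)

bonus = ProgressBonus(k=3)
for err in [1.0, 0.9, 0.7, 0.4, 0.35]:
    print(round(bonus(err), 3))
# Once k=3 past errors exist, the bonus adds the error drop:
# at err=0.4 the reward is 0.4 + (1.0 - 0.4) = 1.0.
```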
arXiv Detail & Related papers (2021-09-28T10:11:07Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Explainable Reinforcement Learning: A Survey [0.0]
Explainable Artificial Intelligence (XAI) has gained increased traction over the last few years.
XAI models exhibit one detrimental characteristic: a performance-transparency trade-off.
This survey attempts to address this gap by offering an overview of Explainable Reinforcement Learning (XRL) methods.
arXiv Detail & Related papers (2020-05-13T10:52:49Z)
- Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills [155.11646755470582]
'Explore, Discover and Learn' (EDL) is an alternative approach to information-theoretic skill discovery.
We show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned.
arXiv Detail & Related papers (2020-02-10T10:49:53Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL) by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching (supporting trust calibration and enabling rich forms of teaching feedback) as well as potential drawbacks (an anchoring effect on the model's judgment and added cognitive workload).
arXiv Detail & Related papers (2020-01-24T22:52:18Z)