Making Things Explainable vs Explaining: Requirements and Challenges under the GDPR
- URL: http://arxiv.org/abs/2110.00758v1
- Date: Sat, 2 Oct 2021 08:48:47 GMT
- Title: Making Things Explainable vs Explaining: Requirements and Challenges under the GDPR
- Authors: Francesco Sovrano, Fabio Vitali, Monica Palmirani
- Abstract summary: ExplanatorY AI (YAI) builds on XAI with the goal of collecting and organizing explainable information.
We recast the problem of generating explanations for Automated Decision-Making systems (ADMs) as the identification of an appropriate path over an explanatory space.
- Score: 2.578242050187029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The European Union (EU), through the High-Level Expert Group on
Artificial Intelligence (AI-HLEG) and the General Data Protection Regulation
(GDPR), has recently posed an interesting challenge to the eXplainable AI
(XAI) community by demanding a more user-centred approach to explaining
Automated Decision-Making systems (ADMs). Looking at the relevant literature,
XAI is currently focused on producing explainable software and explanations
that generally follow a One-Size-Fits-All approach, which is unable to meet
the requirement of centring on user needs. One cause of this limitation is
the belief that making things explainable is, by itself, enough to produce
pragmatic explanations. Thus, insisting on a clear separation between
explainability (something that can be explained) and explanations, we point
to explanatorY AI (YAI) as an alternative and more powerful approach to meet
the AI-HLEG challenge. YAI builds on XAI with the goal of collecting and
organizing explainable information, articulating it into what we call
user-centred explanatory discourses. Through these explanatory
discourses/narratives, we recast the problem of generating explanations for
ADMs as the identification of an appropriate path over an explanatory space,
allowing explainees to interactively explore it and produce the explanation
best suited to their needs.
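Read operationally, the "path over an explanatory space" framing suggests a graph model. Below is a minimal, hypothetical Python sketch, not the authors' implementation: explainable information units are nodes, "expands on" links are edges, and an explanatory discourse is a path from the decision to the aspect the explainee asks about. The node names, the loan-denial scenario, and the breadth-first strategy are all illustrative assumptions.

```python
from collections import deque

# Hypothetical explanatory space: nodes are units of explainable
# information, edges link a statement to the details that expand on it.
EXPLANATORY_SPACE = {
    "decision: loan denied": ["factor: income too low", "factor: short credit history"],
    "factor: income too low": ["policy: minimum income threshold"],
    "factor: short credit history": ["policy: minimum history length"],
    "policy: minimum income threshold": ["legal basis: GDPR Art. 22 safeguards"],
    "policy: minimum history length": ["legal basis: GDPR Art. 22 safeguards"],
    "legal basis: GDPR Art. 22 safeguards": [],
}

def explanatory_path(start: str, goal: str) -> list[str] | None:
    """Breadth-first search for one explanatory discourse: a path of
    explainable information units from the decision to the aspect the
    explainee asked about."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in EXPLANATORY_SPACE.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(explanatory_path("decision: loan denied",
                       "legal basis: GDPR Art. 22 safeguards"))
```

Different explainees would request different goals, yielding different paths over the same space -- the sense in which the explanation is user-centred rather than One-Size-Fits-All.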
Related papers
- Dataset | Mindset = Explainable AI | Interpretable AI [36.001670039529586]
"explainable" Artificial Intelligence (XAI)" and "interpretable AI (IAI)" interchangeably when we apply various XAI tools for a given dataset to explain the reasons that underpin machine learning (ML) outputs.
We argue that XAI is a subset of IAI: the concept of IAI extends beyond the sphere of a dataset to include the domain of a mindset.
We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
arXiv Detail & Related papers (2024-08-22T14:12:53Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that fully understanding a decision requires not only knowledge of the relevant features: awareness of irrelevant information also contributes substantially to the user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations convey an understanding of aspects of the AI's reasoning that established counterfactual explanation methods do not; a toy sketch follows this entry.
arXiv Detail & Related papers (2022-07-19T16:20:37Z)
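To make the contrast concrete, here is a toy, self-contained sketch, not the paper's method: the loan model, features, and weights are invented. A counterfactual changes a relevant feature to flip the decision; an alterfactual changes an irrelevant one to show it never mattered.

```python
# A toy linear scorer in which 'age' has zero weight, i.e. age is
# irrelevant to the decision by construction.
WEIGHTS = {"income": 0.6, "credit_history": 0.4, "age": 0.0}
THRESHOLD = 0.5

def approve(applicant: dict[str, float]) -> bool:
    return sum(WEIGHTS[k] * v for k, v in applicant.items()) >= THRESHOLD

applicant = {"income": 0.7, "credit_history": 0.5, "age": 0.3}
assert approve(applicant)                      # score 0.62 -> approved

# Counterfactual: change a *relevant* feature until the outcome flips.
counterfactual = dict(applicant, income=0.1)   # score 0.26 -> denied
assert not approve(counterfactual)
print("Counterfactual: lowering income flips the decision.")

# Alterfactual: change an *irrelevant* feature; the outcome must not move.
alterfactual = dict(applicant, age=0.9)        # score unchanged at 0.62
assert approve(alterfactual) == approve(applicant)
print("Alterfactual: changing age leaves the decision untouched,")
print("showing the user that age played no role in this decision.")
```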
- "Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI [2.5899040911480173]
We explore the features of explanations and how to use those features in evaluating their utility.
We focus on the requirements for explanations defined by their functional role, the knowledge states of users who are trying to understand them, and the availability of the information needed to generate them.
arXiv Detail & Related papers (2022-06-27T21:42:53Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Levels of explainable artificial intelligence for human-aligned conversational explanations [0.6571063542099524]
People are affected by autonomous decisions every day and need to understand the decision-making process to accept the outcomes.
This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system.
arXiv Detail & Related papers (2021-07-07T12:19:16Z)
- Explanatory Pluralism in Explainable AI [0.0]
I chart a taxonomy of types of explanation and the associated XAI methods that can address them.
When we look to expose the inner mechanisms of AI models, we produce Diagnostic-explanations.
When we wish to form stable generalizations of our models, we produce Expectation-explanations.
Finally, when we want to justify the usage of a model, we produce Role-explanations.
arXiv Detail & Related papers (2021-06-26T09:02:06Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations; a toy sketch of the latent-intervention recipe follows this entry.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
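The latent-intervention idea can be caricatured in a few lines. The sketch below is not the authors' CEILS implementation (which incorporates a causal structure of the data); it only illustrates the general recipe with an invented linear encoder/decoder and a toy classifier: encode the input, perturb the latent representation, decode, and keep the smallest perturbation that flips the black box's decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a learned encoder/decoder and a black-box classifier.
# Here the "autoencoder" is just a random linear map and its pseudo-inverse.
W = rng.normal(size=(2, 4))        # encoder: 4 input features -> 2 latent dims
W_dec = np.linalg.pinv(W)          # decoder: latent -> feature space

def encode(x):
    return W @ x

def decode(z):
    return W_dec @ z

def predict(x):
    return int(x.sum() > 1.0)      # toy black-box decision rule

x = np.array([0.2, 0.1, 0.3, 0.2])  # factual input, predicted class 0
assert predict(x) == 0

# Random search for the smallest latent intervention that flips the
# prediction once decoded back to feature space.
z, best = encode(x), None
for _ in range(5000):
    delta = rng.normal(scale=0.5, size=z.shape)
    if predict(decode(z + delta)) == 1:
        if best is None or np.linalg.norm(delta) < np.linalg.norm(best):
            best = delta

assert best is not None, "no counterfactual found; widen the search"
x_cf = decode(z + best)
print("counterfactual features:", np.round(x_cf, 3), "->", predict(x_cf))
```

Intervening in latent space (rather than directly on raw features) is what lets approaches of this kind favour counterfactuals reachable through feasible actions.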
- Machine Reasoning Explainability [100.78417922186048]
Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning.
Studies in early MR notably initiated inquiries into Explainable AI (XAI).
This document reports our work in progress on MR explainability.
arXiv Detail & Related papers (2020-09-01T13:45:05Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.