Explaining Any ML Model? -- On Goals and Capabilities of XAI
- URL: http://arxiv.org/abs/2206.13888v1
- Date: Tue, 28 Jun 2022 11:09:33 GMT
- Title: Explaining Any ML Model? -- On Goals and Capabilities of XAI
- Authors: Moritz Renftle, Holger Trittenbach, Michael Poznic, Reinhard Heil
- Abstract summary: We argue that the goals and capabilities of XAI algorithms are far from being well understood.
We show that users can ask diverse questions, but that only one of them can be answered by current XAI algorithms.
- Score: 2.236663830879273
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The increasing ubiquity of machine learning (ML) motivates research on
algorithms to explain ML models and their predictions -- so-called eXplainable
Artificial Intelligence (XAI). Despite many survey papers and discussions, the
goals and capabilities of XAI algorithms are far from being well understood. We
argue that this is because of a problematic reasoning scheme in XAI literature:
XAI algorithms are said to complement ML models with desired properties, such
as "interpretability", or "explainability". These properties are in turn
assumed to contribute to a goal, like "trust" in an ML system. But most
properties lack precise definitions and their relationship to such goals is far
from obvious. The result is a reasoning scheme that obfuscates research results
and leaves an important question unanswered: What can one expect from XAI
algorithms? In this article, we clarify the goals and capabilities of XAI
algorithms from a concrete perspective: that of their users. Explaining ML
models is only necessary if users have questions about them. We show that users
can ask diverse questions, but that only one of them can be answered by current
XAI algorithms. Answering this core question can be trivial, difficult or even
impossible, depending on the ML application. Based on these insights, we
outline which capabilities policymakers, researchers and society can reasonably
expect from XAI algorithms.
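In current practice, the question XAI algorithms can answer is typically a feature-attribution one: how strongly does each input feature influence the model's predictions? Below is a minimal sketch of what such an algorithm computes, using scikit-learn's permutation importance; the synthetic dataset and model are illustrative, not from the paper.

```python
# Sketch of the typical question current XAI algorithms answer:
# "Which input features influenced the model's predictions?"
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Illustrative synthetic data and model (not from the paper).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy it causes.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```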
Related papers
- Explainable AI needs formal notions of explanation correctness [2.1309989863595677]
Machine learning in critical domains such as medicine poses risks and requires regulation.
One requirement is that decisions of ML systems in high-risk applications should be human-understandable.
In its current form, XAI is unfit to provide quality control for ML; it itself needs scrutiny.
arXiv Detail & Related papers (2024-09-22T20:47:04Z)
- Dataset | Mindset = Explainable AI | Interpretable AI [36.001670039529586]
"explainable" Artificial Intelligence (XAI)" and "interpretable AI (IAI)" interchangeably when we apply various XAI tools for a given dataset to explain the reasons that underpin machine learning (ML) outputs.
We argue that XAI is a subset of IAI. The concept of IAI is beyond the sphere of a dataset. It includes the domain of a mindset.
We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
arXiv Detail & Related papers (2024-08-22T14:12:53Z)
- Distance-Restricted Explanations: Theoretical Underpinnings & Efficient Implementation [19.22391463965126]
Some uses of machine learning (ML) involve high-stakes and safety-critical applications.
This paper investigates novel algorithms for scaling up the performance of logic-based explainers (a toy sketch of the kind of explanation they compute follows below).
arXiv Detail & Related papers (2024-05-14T03:42:33Z)
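Logic-based explainers of the kind this paper scales up compute, roughly, subset-minimal sets of feature values that suffice to fix a prediction. Here is a toy brute-force sketch of that idea over a hand-written boolean classifier; this is not the paper's algorithm, which targets real models and adds distance restrictions.

```python
# Toy "sufficient reason": a subset-minimal set of feature values that
# fixes the prediction no matter how the remaining features vary.
# Brute force over tiny boolean domains; real logic-based explainers
# use SAT/SMT reasoning instead. Not the paper's algorithm.
from itertools import combinations, product

def model(x):  # illustrative boolean classifier: (x0 AND x1) OR x2
    return (x[0] and x[1]) or x[2]

instance = (1, 1, 0)
pred = model(instance)
n = len(instance)

def sufficient(subset):
    # Fixing the features in `subset` must force the same prediction
    # for every completion of the free features.
    free = [i for i in range(n) if i not in subset]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if model(x) != pred:
            return False
    return True

for size in range(n + 1):
    reasons = [s for s in combinations(range(n), size) if sufficient(s)]
    if reasons:
        print("minimal sufficient reasons:", reasons)  # [(0, 1)]
        break
```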
- LLMs for XAI: Future Directions for Explaining Explanations [50.87311607612179]
We focus on refining explanations computed using existing XAI algorithms.
Initial experiments and a user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI (a sketch of such a pipeline follows below).
arXiv Detail & Related papers (2024-05-09T19:17:47Z)
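A hedged sketch of the glue such an LLM-for-XAI pipeline needs: turning raw feature attributions into a prompt for an LLM to narrate. The attribution values below are made-up placeholders, and the resulting prompt would be sent to an LLM of choice; none of this is the paper's concrete setup.

```python
# Sketch: turning raw feature attributions into a prompt an LLM can narrate.
# The attributions are placeholders; in a real pipeline they would come from
# an XAI algorithm such as SHAP or LIME.
attributions = {"income": 0.42, "age": -0.13, "num_accounts": 0.05}
prediction = "loan approved"

# List features by the magnitude of their contribution.
lines = [f"- {name}: {weight:+.2f}" for name, weight in
         sorted(attributions.items(), key=lambda kv: -abs(kv[1]))]
prompt = (
    f"A credit model predicted: {prediction}.\n"
    "Feature attributions (positive values pushed toward this outcome):\n"
    + "\n".join(lines)
    + "\nExplain this decision to the applicant in two plain-English sentences."
)
print(prompt)  # this string would then be sent to an LLM of choice
```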
- Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era [77.174117675196]
XAI is being extended towards Large Language Models (LLMs).
This paper analyzes how XAI can benefit LLMs and AI systems.
We introduce 10 strategies, presenting the key techniques for each and discussing their associated challenges.
arXiv Detail & Related papers (2024-03-13T20:25:27Z)
- Pyreal: A Framework for Interpretable ML Explanations [51.14710806705126]
Pyreal is a system for generating a variety of interpretable machine learning explanations.
Pyreal converts data and explanations between the feature spaces expected by the model, relevant explanation algorithms, and human users.
Our studies demonstrate that Pyreal generates more useful explanations than existing systems (a generic sketch of the feature-space translation idea follows below).
arXiv Detail & Related papers (2023-12-20T15:04:52Z)
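The feature-space conversion Pyreal automates can be pictured with generic code; this shows the underlying idea only, not Pyreal's actual API. Attributions computed on model-ready columns, such as one-hot encodings, are folded back onto the human-readable features they came from.

```python
# Generic sketch of feature-space translation (not Pyreal's API): fold
# attributions on model-ready one-hot columns back onto the original,
# human-readable features. Column names and scores are illustrative.
from collections import defaultdict

# Attributions in the model's feature space (after one-hot encoding).
model_space = {
    "city=Paris": 0.30, "city=Rome": 0.05, "city=Oslo": 0.02,
    "age": -0.15,
}

human_space = defaultdict(float)
for column, score in model_space.items():
    feature = column.split("=")[0]   # "city=Paris" -> "city"
    human_space[feature] += score    # aggregate the encoded columns

print(dict(human_space))  # {'city': 0.37, 'age': -0.15} (up to float rounding)
```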
- On Explainability in AI-Solutions: A Cross-Domain Survey [4.394025678691688]
In automatically deriving a system model, AI algorithms learn relations in data that are not detectable by humans.
The more complex a model, the more difficult it is for a human to understand the reasoning behind its decisions.
This work provides an extensive survey of the literature on this topic, which, in large part, consists of other surveys.
arXiv Detail & Related papers (2022-10-11T06:21:47Z)
- Responsibility: An Example-based Explainable AI approach via Training Process Inspection [1.4610038284393165]
We present a novel XAI approach that identifies the most responsible training example for a particular decision.
This example can then be shown as an explanation: "this is what I (the AI) learned that led me to do that."
Our results demonstrate that responsibility can help improve accuracy for both human end users and secondary ML models (a simplified sketch of example-based explanations follows below).
arXiv Detail & Related papers (2022-09-07T19:30:01Z)
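A deliberately simplified stand-in for this style of example-based explanation: return the training example nearest to the query as "what the model learned". The actual Responsibility method inspects the training process rather than using distances; this sketch only conveys the flavour of the output.

```python
# Simplified example-based explanation: the nearest training example is
# shown as "what I learned that led me to do that". The real Responsibility
# method tracks the training process; this nearest-neighbour stand-in is
# only an illustration.
import numpy as np

X_train = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]])  # toy data
y_train = np.array(["cat", "dog", "cat"])
query = np.array([0.85, 0.9])

distances = np.linalg.norm(X_train - query, axis=1)
idx = int(np.argmin(distances))
print(f"Most responsible training example: #{idx} (label={y_train[idx]!r})")
```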
- OmniXAI: A Library for Explainable AI [98.07381528393245]
We introduce OmniXAI, an open-source Python library for eXplainable AI (XAI).
It offers omni-way explainable AI capabilities and various interpretable machine learning techniques.
For practitioners, the library provides an easy-to-use unified interface to generate explanations for their applications (a generic sketch of such an interface follows below).
arXiv Detail & Related papers (2022-06-01T11:35:37Z)
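The unified-interface idea -- one entry point that dispatches to several explanation techniques by name -- can be sketched generically. This is a hypothetical wrapper for illustration, not OmniXAI's actual API.

```python
# Hypothetical unified explanation interface (not OmniXAI's actual API):
# one entry point dispatching to several underlying techniques by name.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def explain(model, X, y, method="permutation"):
    """Dispatch to a registered explanation technique by name."""
    if method == "permutation":
        res = permutation_importance(model, X, y, n_repeats=5, random_state=0)
        return res.importances_mean
    if method == "impurity":
        return model.feature_importances_  # tree-ensemble-specific
    raise ValueError(f"unknown method: {method}")

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
print(explain(model, X, y, method="permutation"))
print(explain(model, X, y, method="impurity"))
```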
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and the prediction (a minimal worked sketch of this quantity follows below).
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
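The proposed quantity, the conditional mutual information I(E; Y | U) between explanation E and prediction Y given user knowledge U, can be estimated from empirical joint counts. Here is a minimal discrete sketch; the samples are illustrative, not data from the paper.

```python
# Minimal sketch: conditional mutual information I(E; Y | U) from an
# empirical joint distribution over discrete variables. Samples are
# illustrative, not from the paper.
import numpy as np
from collections import Counter

# Triples (e, y, u): explanation, prediction, user knowledge (all binary).
samples = [(0, 0, 0), (0, 0, 0), (1, 1, 0), (1, 1, 0),
           (0, 1, 1), (1, 0, 1), (0, 0, 1), (1, 1, 1)]
n = len(samples)
p_eyu = {k: v / n for k, v in Counter(samples).items()}

def marginal(dist, keep):
    # Sum the joint distribution over the dimensions not in `keep`.
    out = Counter()
    for key, p in dist.items():
        out[tuple(key[i] for i in keep)] += p
    return out

p_eu = marginal(p_eyu, (0, 2))
p_yu = marginal(p_eyu, (1, 2))
p_u = marginal(p_eyu, (2,))

# I(E; Y | U) = sum_{e,y,u} p(e,y,u) * log[ p(u) p(e,y,u) / (p(e,u) p(y,u)) ]
cmi = sum(p * np.log2(p_u[(u,)] * p / (p_eu[(e, u)] * p_yu[(y, u)]))
          for (e, y, u), p in p_eyu.items())
print(f"I(E; Y | U) = {cmi:.3f} bits")  # 0.500 for these samples
```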