A Means-End Account of Explainable Artificial Intelligence
- URL: http://arxiv.org/abs/2208.04638v1
- Date: Tue, 9 Aug 2022 09:57:42 GMT
- Title: A Means-End Account of Explainable Artificial Intelligence
- Authors: Oliver Buchholz
- Abstract summary: XAI seeks to produce explanations for machine learning methods that are deemed opaque.
Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable artificial intelligence (XAI) seeks to produce explanations for
those machine learning methods which are deemed opaque. However, there is
considerable disagreement about what this means and how to achieve it. Authors
disagree on what should be explained (topic), to whom something should be
explained (stakeholder), how something should be explained (instrument), and
why something should be explained (goal). In this paper, I employ insights from
means-end epistemology to structure the field. According to means-end
epistemology, different means ought to be rationally adopted to achieve
different epistemic ends. Applied to XAI, different topics, stakeholders, and
goals thus require different instruments. I call this the means-end account of
XAI. The means-end account has a descriptive and a normative component: on the
one hand, I show how the specific means-end relations give rise to a taxonomy
of existing contributions to the field of XAI; on the other hand, I argue that
the suitability of XAI methods can be assessed by analyzing whether they are
prescribed by a given topic, stakeholder, and goal.
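To make the descriptive and normative components concrete, here is a minimal sketch of how a means-end relation from topics, stakeholders, and goals to instruments could be represented and queried. The category labels and the example mapping are hypothetical illustrations, not the paper's actual taxonomy.
```python
# Illustrative sketch only: the paper gives a conceptual account, not code.
# All labels and mappings below are hypothetical placeholders.
from typing import Dict, List, Tuple

Topic = str        # what is explained, e.g. "single prediction", "whole model"
Stakeholder = str  # to whom, e.g. "end user", "developer", "regulator"
Goal = str         # why, e.g. "trust", "debugging", "compliance"

# Descriptive reading: each (topic, stakeholder, goal) triple is associated
# with the instruments (XAI methods) suited to that end.
MEANS_END_TABLE: Dict[Tuple[Topic, Stakeholder, Goal], List[str]] = {
    ("single prediction", "end user", "trust"): ["counterfactual explanation"],
    ("single prediction", "developer", "debugging"): ["feature attribution"],
    ("whole model", "regulator", "compliance"): ["surrogate model", "rule extraction"],
}

def prescribed_instruments(topic: Topic, stakeholder: Stakeholder, goal: Goal) -> List[str]:
    """Normative reading: return the instruments prescribed for a given end,
    or an empty list if the table records no suitable instrument."""
    return MEANS_END_TABLE.get((topic, stakeholder, goal), [])

print(prescribed_instruments("single prediction", "end user", "trust"))
# ['counterfactual explanation']
```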
Related papers
- Dataset | Mindset = Explainable AI | Interpretable AI [36.001670039529586]
"explainable" Artificial Intelligence (XAI)" and "interpretable AI (IAI)" interchangeably when we apply various XAI tools for a given dataset to explain the reasons that underpin machine learning (ML) outputs.
We argue that XAI is a subset of IAI: the concept of IAI extends beyond the sphere of a dataset to include the domain of a mindset.
We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
arXiv Detail & Related papers (2024-08-22T14:12:53Z) - Forms of Understanding of XAI-Explanations [2.887772793510463]
This article aims to present a model of forms of understanding in the context of Explainable Artificial Intelligence (XAI).
Two types of understanding are considered as possible outcomes of explanations, namely enabledness and comprehension.
Special challenges of understanding in XAI are discussed.
arXiv Detail & Related papers (2023-11-15T08:06:51Z) - How Do Transformers Learn Topic Structure: Towards a Mechanistic
Understanding [56.222097640468306]
We provide a mechanistic understanding of how transformers learn "semantic structure".
We show, through a combination of mathematical analysis and experiments on Wikipedia data, that the embedding layer and the self-attention layer encode the topical structure.
arXiv Detail & Related papers (2023-03-07T21:42:17Z) - Social Construction of XAI: Do We Need One Definition to Rule Them All? [18.14698948294366]
We argue why a singular definition of XAI is neither feasible nor desirable at this stage of XAI's development.
Forcing a standardization (closure) on the pluralistic interpretations too early can stifle innovation and lead to premature conclusions.
We share how we can leverage the pluralism to make progress in XAI without having to wait for a definitional consensus.
arXiv Detail & Related papers (2022-11-11T22:32:26Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - "Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI [2.5899040911480173]
We explore the features of explanations and how to use those features in evaluating their utility.
We focus on the requirements for explanations defined by their functional role, the knowledge states of users who are trying to understand them, and the availability of the information needed to generate them.
arXiv Detail & Related papers (2022-06-27T21:42:53Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often mis-interpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
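The adjustment itself is not spelled out in this summary; the following is a minimal sketch, assuming a hypothetical monotone model of human perception, of what correcting for over- and under-perception could look like. The `perceived_of` model, the sqrt-shaped bias, and the bisection step are illustrative assumptions, not the method from the cited paper.
```python
from typing import Callable
import numpy as np

def adjust_saliencies(raw: np.ndarray, perceived_of: Callable[[float], float]) -> np.ndarray:
    """Hypothetical correction: choose displayed saliencies so that the values a
    reader is predicted to perceive match the values the explanation intends to convey.
    `perceived_of` is an assumed monotone model of human perception on [0, 1]."""
    intended = raw / raw.sum()          # normalise the intended saliency per token
    adjusted = np.empty_like(intended)
    for i, target in enumerate(intended):
        # Bisection: find the displayed value whose predicted perception equals the target.
        lo, hi = 0.0, 1.0
        for _ in range(50):
            mid = (lo + hi) / 2
            if perceived_of(mid) < target:
                lo = mid
            else:
                hi = mid
        adjusted[i] = (lo + hi) / 2
    return adjusted

# Example: assume readers over-perceive small saliencies (sqrt-shaped bias).
perception = lambda s: np.sqrt(s)
raw = np.array([0.1, 0.3, 0.6])
print(adjust_saliencies(raw, perception))  # displayed values whose sqrt matches the intended values
```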
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Explainable AI: current status and future directions [11.92436948211501]
Explainable Artificial Intelligence (XAI) is an emerging area of research in the field of Artificial Intelligence (AI).
XAI can explain how AI obtained a particular solution and can also answer other "wh" questions.
This paper provides an overview of these techniques from a multimedia (i.e., text, image, audio, and video) point of view.
arXiv Detail & Related papers (2021-07-12T08:42:19Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)