The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
- URL: http://arxiv.org/abs/2002.09595v1
- Date: Sat, 22 Feb 2020 01:40:01 GMT
- Title: The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
- Authors: Andrés Páez
- Abstract summary: I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI.
I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post-hoc interpretability.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper I argue that the search for explainable models and
interpretable decisions in AI must be reformulated in terms of the broader
project of offering a pragmatic and naturalistic account of understanding in
AI. Intuitively, the purpose of providing an explanation of a model or a
decision is to make it understandable to its stakeholders. But without a
previous grasp of what it means to say that an agent understands a model or a
decision, the explanatory strategies will lack a well-defined goal. Aside from
providing a clearer objective for XAI, focusing on understanding also allows us
to relax the factivity condition on explanation, which is impossible to fulfill
in many machine learning models, and to focus instead on the pragmatic
conditions that determine the best fit between a model and the methods and
devices deployed to understand it. After an examination of the different types
of understanding discussed in the philosophical and psychological literature, I
conclude that interpretative or approximation models not only provide the best
way to achieve the objectual understanding of a machine learning model, but are
also a necessary condition to achieve post-hoc interpretability. This
conclusion is partly based on the shortcomings of the purely functionalist
approach to post-hoc interpretability that seems to be predominant in most
recent literature.
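
To make concrete what an "interpretative or approximation model" looks like in practice, below is a minimal sketch (not from the paper) of a global surrogate: an interpretable model trained to mimic an opaque classifier's predictions. The dataset, the black-box model, and the tree depth are illustrative assumptions, and scikit-learn is assumed to be available.

```python
# Minimal sketch: a global surrogate (approximation) model for post-hoc interpretability.
# The dataset, black-box model, and tree depth are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

# Opaque model whose behavior we want to understand.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Approximation model: a shallow tree trained to mimic the black box's
# predictions rather than the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how closely the surrogate tracks the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to black box: {fidelity:.3f}")

# The tree's rules serve as the human-inspectable proxy for the opaque model.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

On this view, the surrogate's rules are the "device deployed to understand" the model: their value depends on fidelity and on the stakeholder's ability to grasp them, not on the surrogate being a faithful description of the black box's internals.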
Related papers
- A Mechanistic Explanatory Strategy for XAI [0.0]
This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems.
According to the mechanistic approach, the explanation of opaque AI systems involves identifying mechanisms that drive decision-making.
This research suggests that a systematic approach to studying model organization can reveal elements that simpler (or "more modest") explainability techniques might miss.
arXiv Detail & Related papers (2024-11-02T18:30:32Z)
- Causal Abstraction in Model Interpretability: A Compact Survey [5.963324728136442]
Causal abstraction provides a principled approach to understanding and explaining the causal mechanisms underlying model behavior.
This survey paper delves into the realm of causal abstraction, examining its theoretical foundations, practical applications, and implications for the field of model interpretability.
arXiv Detail & Related papers (2024-10-26T12:24:28Z)
- Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
arXiv Detail & Related papers (2023-11-08T20:41:18Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Explanatory Pluralism in Explainable AI [0.0]
I chart a taxonomy of types of explanation and the associated XAI methods that can address them.
When we look to expose the inner mechanisms of AI models, we produce Diagnostic-explanations.
When we wish to form stable generalizations of our models, we produce Expectation-explanations.
Finally, when we want to justify the usage of a model, we produce Role-explanations.
arXiv Detail & Related papers (2021-06-26T09:02:06Z)
- The Grammar of Interactive Explanatory Model Analysis [7.812073412066698]
We show how different Explanatory Model Analysis (EMA) methods complement each other.
We formalize the grammar of Interactive Explanatory Model Analysis (IEMA) to describe potential human-model dialogues.
IEMA is implemented in a widely used human-centered open-source software framework.
arXiv Detail & Related papers (2020-05-01T17:12:22Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN).
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
arXiv Detail & Related papers (2020-04-28T07:24:43Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.