Beyond Technocratic XAI: The Who, What & How in Explanation Design
- URL: http://arxiv.org/abs/2508.09231v1
- Date: Tue, 12 Aug 2025 08:17:26 GMT
- Title: Beyond Technocratic XAI: The Who, What & How in Explanation Design
- Authors: Ruchira Dhar, Stephanie Brandl, Ninell Oldenburg, Anders Søgaard
- Abstract summary: In practice, generating meaningful explanations is a context-dependent task. This paper reframes explanation as a situated design process. We propose a three-part framework for explanation design in XAI.
- Score: 35.987280553106565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of Explainable AI (XAI) offers a wide range of techniques for making complex models interpretable. Yet, in practice, generating meaningful explanations is a context-dependent task that requires intentional design choices to ensure accessibility and transparency. This paper reframes explanation as a situated design process -- an approach particularly relevant for practitioners involved in building and deploying explainable systems. Drawing on prior research and principles from design thinking, we propose a three-part framework for explanation design in XAI: asking Who needs the explanation, What they need explained, and How that explanation should be delivered. We also emphasize the need for ethical considerations, including risks of epistemic inequality, reinforcing social inequities, and obscuring accountability and governance. By treating explanation as a sociotechnical design process, this framework encourages a context-aware approach to XAI that supports effective communication and the development of ethically responsible explanations.
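To make the Who / What / How framing concrete, the minimal sketch below records the three design questions as a data structure and maps them to a candidate explanation technique. This is an illustrative interpretation only, not code from the paper; the class, field, and function names (ExplanationSpec, pick_method) and the example mapping are hypothetical.

```python
# Illustrative sketch of the Who / What / How framing as a design checklist.
# The paper proposes the questions, not this code; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class ExplanationSpec:
    who: str    # audience, e.g. "loan applicant", "model auditor", "regulator"
    what: str   # content, e.g. "single decision", "global model behaviour"
    how: str    # delivery, e.g. "natural-language summary", "feature chart"

def pick_method(spec: ExplanationSpec) -> str:
    """Map a (who, what, how) specification to a candidate XAI technique.
    The mapping below is only an example of how such design choices could be
    recorded per deployment; it is not prescribed by the paper."""
    if spec.what == "single decision":
        return "counterfactual example" if spec.who == "loan applicant" else "SHAP values"
    if spec.what == "global model behaviour":
        return "surrogate decision tree"
    return "documentation / model card"

spec = ExplanationSpec(who="loan applicant",
                       what="single decision",
                       how="natural-language summary")
print(pick_method(spec))   # -> "counterfactual example"
```

In practice such a mapping would be filled in per deployment, alongside the ethical considerations the paper raises: who might be excluded by the chosen delivery, what the explanation obscures, and who remains accountable.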
Related papers
- From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI [0.0]
"Explanatory AI" is a paradigm that leverages generative AI capabilities to serve as explanatory partners for human understanding.<n>We develop a conceptual model distinguishing Explanatory AI through narrative communication, adaptive personalization, and progressive disclosure principles.<n>Our findings reveal the practical urgency for AI systems designed for human comprehension rather than algorithmic introspection.
arXiv Detail & Related papers (2025-08-08T14:32:41Z) - Rhetorical XAI: Explaining AI's Benefits as well as its Use via Rhetorical Design [3.386401892906348]
This paper explores potential benefits of incorporating Rhetorical Design into the design of Explainable Artificial Intelligence (XAI) systems.<n>Rhetoric Design offers a useful framework to analyze the communicative role of explanations between AI systems and users.
arXiv Detail & Related papers (2025-05-14T23:57:17Z) - KERAIA: An Adaptive and Explainable Framework for Dynamic Knowledge Representation and Reasoning [46.85451489222176]
KERAIA is a novel framework and software platform for symbolic knowledge engineering.<n>It addresses the persistent challenges of representing, reasoning with, and executing knowledge in dynamic, complex, and context-sensitive environments.
arXiv Detail & Related papers (2025-05-07T10:56:05Z) - The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR [47.06917254695738]
We present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal experts and practitioners towards XAI.<n>The study consists of an online questionnaire and follow-up interviews, and is centered around a use-case in the credit domain.<n>We find that the presented explanations are hard to understand and lack information, and discuss issues that can arise from the different interests of the data controller and subject.
arXiv Detail & Related papers (2025-01-09T15:50:02Z) - Integrating Evidence into the Design of XAI and AI-based Decision Support Systems: A Means-End Framework for End-users in Construction [0.1999925939110439]
This paper introduces a theoretical, evidence based means end framework for designing XAI enabled DSS.<n>It focuses on evaluating the strength, relevance, and utility of different types of evidence supporting AI generated explanations.
arXiv Detail & Related papers (2024-12-17T13:02:05Z) - A Context-Sensitive Approach to XAI in Music Performance [0.0]
We propose an Explanatory Pragmatism (EP) framework for XAI in music performance.
EP offers a promising direction for enhancing the transparency and interpretability of AI systems in broad artistic applications.
arXiv Detail & Related papers (2023-09-05T17:43:48Z) - Charting the Sociotechnical Gap in Explainable AI: A Framework to
Address the Gap in XAI [29.33534897830558]
We argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability.
We empirically derive a framework that facilitates systematic charting of the sociotechnical gap.
By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.
arXiv Detail & Related papers (2023-02-01T23:21:45Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - A Methodology and Software Architecture to Support
Explainability-by-Design [0.0]
This paper describes Explainability-by-Design, a holistic methodology characterised by proactive measures to include explanation capability in the design of decision-making systems.
The methodology consists of three phases: (A) Explanation Requirement Analysis, (B) Explanation Technical Design, and (C) Explanation Validation.
It was shown that the approach is tractable in terms of development time, which can be as low as two hours per sentence.
arXiv Detail & Related papers (2022-06-13T15:34:29Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of feature changes that would achieve a desired outcome (a generic sketch of such a search appears after this list).
Current approaches rarely take into account the feasibility of the actions needed to realise the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.