Emergent, not Immanent: A Baradian Reading of Explainable AI
- URL: http://arxiv.org/abs/2601.15029v2
- Date: Fri, 23 Jan 2026 13:54:13 GMT
- Title: Emergent, not Immanent: A Baradian Reading of Explainable AI
- Authors: Fabio Morreale, Joan Serrà, Yuki Mitsufuji
- Abstract summary: We argue that interpretations emerge from situated entanglements of the AI model with humans, context, and the interpretative apparatus. We propose design directions for XAI interfaces that support emergent interpretation.
- Score: 37.51348424835944
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explainable AI (XAI) is frequently positioned as a technical problem of revealing the inner workings of an AI model. This position rests on unexamined onto-epistemological assumptions: meaning is treated as immanent to the model, the explainer is positioned outside the system, and a causal structure is presumed recoverable through computational techniques. In this paper, we draw on Barad's agential realism to develop an alternative onto-epistemology of XAI. We propose that interpretations are material-discursive performances that emerge from situated entanglements of the AI model with humans, context, and the interpretative apparatus. To develop this position, we read a comprehensive set of XAI methods through agential realism and reveal the assumptions and limitations that underpin several of these methods. We then articulate the framework's ethical dimension and propose design directions for XAI interfaces that support emergent interpretation, using a speculative text-to-music interface as a case study.
Related papers
- Beyond Explainable AI (XAI): An Overdue Paradigm Shift and Post-XAI Research Directions [95.59915390053588]
This study examines Explainable Artificial Intelligence (XAI) approaches, focusing on deep neural networks (DNNs) and large language models (LLMs). We discuss critical symptoms that stem from deeper root causes (i.e., two paradoxes, two conceptual confusions, and five false assumptions). To move beyond XAI's limitations, we propose a four-pronged paradigm shift toward reliable and certified AI development.
arXiv Detail & Related papers (2026-02-27T16:58:27Z) - MATCH: Engineering Transparent and Controllable Conversational XAI Systems through Composable Building Blocks [0.254890465057467]
We present our flow-based approach and a selection of building blocks as MATCH: a framework for engineering Multi-Agent Transparent and Controllable Human-centered systems. This research contributes to the field of (conversational) XAI by facilitating the integration of interpretability into existing interactive systems.
arXiv Detail & Related papers (2025-11-27T12:58:04Z) - Onto-Epistemological Analysis of AI Explanations [8.570570532582446]
We discuss explainable AI (XAI) methods that provide explanations of the models' decision process. We show how seemingly small technical changes to an XAI method may correspond to important differences in the underlying assumptions about explanations. We also highlight the risks of ignoring the underlying onto-epistemological paradigm when choosing an XAI method for a given application.
arXiv Detail & Related papers (2025-10-03T13:36:57Z) - KERAIA: An Adaptive and Explainable Framework for Dynamic Knowledge Representation and Reasoning [46.85451489222176]
KERAIA is a novel framework and software platform for symbolic knowledge engineering. It addresses the persistent challenges of representing, reasoning with, and executing knowledge in dynamic, complex, and context-sensitive environments.
arXiv Detail & Related papers (2025-05-07T10:56:05Z) - A Mechanistic Explanatory Strategy for XAI [0.0]
This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems. The findings suggest that pursuing mechanistic explanations can uncover elements that traditional explainability techniques may overlook.
arXiv Detail & Related papers (2024-11-02T18:30:32Z) - Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features [19.15360328688008]
We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features.
The framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable.
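The summary does not specify how relevance propagates to a symbolic query; purely as an illustrative toy (min/max aggregation is an assumption here, not the authors' rule), one could score a logical formula over per-feature relevances like this:

```python
# Toy reading of query-level attribution: score a logical formula over
# per-feature relevance values. Aggregation by min/max is an assumption
# for illustration, not the Symbolic XAI paper's actual propagation rule.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Query:
    op: str                 # "feat", "and", "or", "not"
    name: str = ""          # feature name when op == "feat"
    args: List["Query"] = None

def score(q: Query, relevance: Dict[str, float]) -> float:
    if q.op == "feat":
        return relevance[q.name]
    if q.op == "and":
        return min(score(a, relevance) for a in q.args)
    if q.op == "or":
        return max(score(a, relevance) for a in q.args)
    if q.op == "not":
        return -score(q.args[0], relevance)
    raise ValueError(f"unknown op: {q.op}")

# "petal_length AND petal_width" as a human-readable query
q = Query("and", args=[Query("feat", "petal_length"), Query("feat", "petal_width")])
print(score(q, {"petal_length": 0.7, "petal_width": 0.2}))  # -> 0.2
```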
arXiv Detail & Related papers (2024-08-30T10:52:18Z) - Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI [1.628012064605754]
We introduce a novel metric for measuring semantic continuity in Explainable AI methods and machine learning models. We conduct experiments to observe how incremental changes in input affect the explanations provided by different XAI methods.
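The paper's exact metric is not reproduced in this summary; as a minimal sketch of the underlying idea, assuming a saliency-style explainer and cosine similarity as the comparison (both assumptions), one can check how much an explanation changes under small input perturbations:

```python
# Sketch of a semantic-continuity check (assumed form, not the paper's
# exact metric): similar inputs should receive similar explanations.
import numpy as np

def continuity(explain, x: np.ndarray, eps: float = 1e-2, trials: int = 10,
               seed: int = 0) -> float:
    """Mean cosine similarity between explanations of x and x + noise.

    `explain` is any function mapping an input to a flat attribution vector.
    Values near 1.0 indicate explanations that vary smoothly with the input.
    """
    rng = np.random.default_rng(seed)
    e0 = explain(x).ravel()
    sims = []
    for _ in range(trials):
        x_pert = x + eps * rng.standard_normal(x.shape)
        e1 = explain(x_pert).ravel()
        sims.append(float(e0 @ e1 / (np.linalg.norm(e0) * np.linalg.norm(e1) + 1e-12)))
    return float(np.mean(sims))

# Example with a trivial linear "explainer": the attribution of w @ x is the
# constant vector w, so continuity is exactly 1.0.
w = np.array([0.5, -1.0, 2.0])
print(continuity(lambda x: w.copy(), np.zeros(3)))
```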
arXiv Detail & Related papers (2024-07-17T18:32:41Z) - Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems [37.02462866600066]
Evolutionary computation (EC) offers significant potential to contribute to explainable AI (XAI).
This paper provides an introduction to XAI and reviews current techniques for explaining machine learning models.
We then explore how EC can be leveraged in XAI and examine existing XAI approaches that incorporate EC techniques.
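As one concrete way EC can serve XAI (a generic sketch, not an algorithm from this paper), a simple evolutionary loop can search for a counterfactual: the smallest input change that pushes a black-box model across its decision boundary:

```python
# Generic evolutionary search for a counterfactual example, sketching the
# EC-for-XAI idea. Illustrative only; not taken from the surveyed paper.
import numpy as np

def evolve_counterfactual(target_proba, x, sigma=0.1, pop=30, gens=300,
                          lam=1.0, seed=0):
    """Find a nearby input whose target-class probability exceeds 0.5.

    `target_proba` maps an input vector to P(target class); fitness trades
    off crossing the decision boundary against distance from the original x.
    """
    rng = np.random.default_rng(seed)

    def fitness(c):
        # Aim slightly past the boundary (0.55) so the class actually flips,
        # then prefer candidates closest to the original input.
        return 10.0 * max(0.0, 0.55 - target_proba(c)) + lam * np.linalg.norm(c - x)

    best = x.copy()
    for _ in range(gens):
        children = best + sigma * rng.standard_normal((pop, x.size))
        challenger = min(children, key=fitness)
        if fitness(challenger) < fitness(best):
            best = challenger
    return best

# Toy model: P(class 1) rises with the feature sum, crossing 0.5 at sum = 1.
proba = lambda v: 1.0 / (1.0 + np.exp(1.0 - v.sum()))
cf = evolve_counterfactual(proba, np.zeros(3))
print(np.round(cf, 2), round(float(proba(cf)), 3))
```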
arXiv Detail & Related papers (2024-06-12T02:06:24Z) - Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review [12.38351931894004]
We present the first systematic literature review of explainable methods for safe and trustworthy autonomous driving.
We identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation.
We propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
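The SafeX API itself is not described in this summary; the following is a purely hypothetical sketch of how modular contributions of this kind might compose, with all names illustrative (interpretable design and validation would live outside the wrapper, as the model choice and offline testing):

```python
# Hypothetical composition of roles named in the review; the real SafeX
# interfaces are not specified here, so all names below are illustrative.
from typing import Any, Callable, NamedTuple

class Explained(NamedTuple):
    prediction: Any
    explanation: str
    safe: bool

def safex_pipeline(model: Callable[[Any], Any],
                   surrogate: Callable[[Any], str],      # interpretable surrogate
                   monitor: Callable[[Any, Any], bool],  # interpretable monitoring
                   auxiliary: Callable[[Any], str]):     # auxiliary explanations
    """Wrap a model so every prediction ships with explanations and a safety check."""
    def run(x: Any) -> Explained:
        y = model(x)
        ok = monitor(x, y)                      # veto unsafe predictions
        text = surrogate(x) + " | " + auxiliary(x)
        return Explained(y if ok else None, text, ok)
    return run

# Toy usage: a threshold model, a rule-based surrogate, and a range monitor.
run = safex_pipeline(
    model=lambda x: int(x > 0.5),
    surrogate=lambda x: f"rule: x > 0.5 -> 1 (x = {x})",
    monitor=lambda x, y: 0.0 <= x <= 1.0,
    auxiliary=lambda x: f"margin = {abs(x - 0.5):.2f}",
)
print(run(0.8))
```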
arXiv Detail & Related papers (2024-02-08T09:08:44Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation that supports better understanding. One extensive area where cognitive science research may substantially influence XAI advancements is the evaluation of user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
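CEILS additionally grounds the latent space in a causal model of the features, which is omitted below; this generic sketch shows only the "intervene in latent space, then decode" pattern, with a finite-difference gradient as an assumed black-box optimizer:

```python
# Generic latent-space counterfactual sketch. CEILS builds its latent space
# from a causal model of the features; that part is omitted here, so this
# illustrates only the latent-intervention pattern, not CEILS itself.
import numpy as np

def latent_counterfactual(encode, decode, target_proba, x,
                          lr=0.1, steps=200, eps=1e-3):
    """Nudge the latent code of x until the decoded input crosses P = 0.5."""
    z = encode(x)
    for _ in range(steps):
        p = target_proba(decode(z))
        if p >= 0.5:
            break
        # Finite-difference gradient of the target probability w.r.t. z.
        grad = np.array([
            (target_proba(decode(z + eps * e)) - p) / eps
            for e in np.eye(z.size)
        ])
        z = z + lr * grad
    return decode(z)

# Toy setup: identity "autoencoder" and a sigmoid classifier on the sum.
encode = decode = lambda v: v
proba = lambda v: 1.0 / (1.0 + np.exp(1.0 - v.sum()))
cf = latent_counterfactual(encode, decode, proba, np.zeros(3))
print(np.round(cf, 2), round(float(proba(cf)), 3))
```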
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.