MATCH: Engineering Transparent and Controllable Conversational XAI Systems through Composable Building Blocks
- URL: http://arxiv.org/abs/2511.22420v1
- Date: Thu, 27 Nov 2025 12:58:04 GMT
- Title: MATCH: Engineering Transparent and Controllable Conversational XAI Systems through Composable Building Blocks
- Authors: Sebe Vanbrabant, Gustavo Rovelo Ruiz, Davy Vanacken
- Abstract summary: We present our flow-based approach and a selection of building blocks as MATCH: a framework for engineering Multi-Agent Transparent and Controllable Human-centered systems. This research contributes to the field of (conversational) XAI by facilitating the integration of interpretability into existing interactive systems.
- Score: 0.254890465057467
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While the increased integration of AI technologies into interactive systems enables them to solve an increasing number of tasks, the black-box problem of AI models continues to spread throughout the interactive system as a whole. Explainable AI (XAI) techniques can make AI models more accessible by employing post-hoc methods or transitioning to inherently interpretable models. While this makes individual AI models clearer, the overarching system architecture remains opaque. This challenge not only pertains to standard XAI techniques but also to human examination and conversational XAI approaches that need access to model internals to interpret them correctly and completely. To this end, we propose conceptually representing such interactive systems as sequences of structural building blocks. These include the AI models themselves, as well as control mechanisms grounded in literature. The structural building blocks can then be explained through complementary explanatory building blocks, such as established XAI techniques like LIME and SHAP. The flow and APIs of the structural building blocks form an unambiguous overview of the underlying system, serving as a communication basis for both human and automated agents, thus aligning human and machine interpretability of the embedded AI models. In this paper, we present our flow-based approach and a selection of building blocks as MATCH: a framework for engineering Multi-Agent Transparent and Controllable Human-centered systems. This research contributes to the field of (conversational) XAI by facilitating the integration of interpretability into existing interactive systems.
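The abstract describes representing an interactive system as a flow of structural building blocks (AI models and control mechanisms), each paired with explanatory building blocks such as LIME or SHAP, so that the flow itself serves as a shared system overview. A minimal sketch of that idea, with illustrative names and a toy explainer standing in for real XAI techniques (none of this is the MATCH API):

```python
# Hypothetical sketch of the flow-based building-block idea: structural
# blocks form a pipeline, and explanatory blocks attach to each block.
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class StructuralBlock:
    """One step in the interactive system, e.g. an AI model or a control."""
    name: str
    run: Callable[[Any], Any]
    explainers: list = field(default_factory=list)  # explanatory blocks

    def explain(self, x):
        # Each explanatory block (e.g. a LIME or SHAP wrapper) describes
        # this structural block for a given input.
        return {e.__name__: e(self, x) for e in self.explainers}


@dataclass
class Flow:
    """An ordered sequence of structural blocks; the flow and its trace act
    as the unambiguous overview shared by human and automated agents."""
    blocks: list

    def run(self, x):
        trace = []  # record each block's input/output for transparency
        for block in self.blocks:
            y = block.run(x)
            trace.append((block.name, x, y))
            x = y
        return x, trace


# Toy explanatory block standing in for an actual XAI technique.
def magnitude_explainer(block, x):
    return f"{block.name} received input of magnitude {abs(x)}"


model = StructuralBlock("toy_model", run=lambda x: 2 * x,
                        explainers=[magnitude_explainer])
control = StructuralBlock("control_threshold", run=lambda x: min(x, 10))

flow = Flow([model, control])
out, trace = flow.run(7)
print(out)                               # 10
print([name for name, _, _ in trace])    # ['toy_model', 'control_threshold']
print(model.explain(7))
```

The point of the sketch is that the trace exposes every intermediate value, so both a human and a conversational agent can inspect where in the flow a decision was made.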
Related papers
- Beyond Explainable AI (XAI): An Overdue Paradigm Shift and Post-XAI Research Directions [95.59915390053588]
This study surveys Explainable Artificial Intelligence (XAI) approaches, focusing on deep neural networks (DNNs) and large language models (LLMs). We discuss critical symptoms that stem from deeper root causes (i.e., two paradoxes, two conceptual confusions, and five false assumptions). To move beyond XAI's limitations, we propose a four-pronged paradigm shift toward reliable and certified AI development.
arXiv Detail & Related papers (2026-02-27T16:58:27Z) - Emergent, not Immanent: A Baradian Reading of Explainable AI [37.51348424835944]
We argue that interpretations emerge from situated entanglements of the AI model with humans, context, and the interpretative apparatus. We propose design directions for XAI interfaces that support emergent interpretation.
arXiv Detail & Related papers (2026-01-21T14:32:40Z) - Embodied AI: From LLMs to World Models [65.68972714346909]
Embodied Artificial Intelligence (AI) is an intelligent system paradigm for achieving Artificial General Intelligence (AGI). Recent breakthroughs in Large Language Models (LLMs) and World Models (WMs) have drawn significant attention for embodied AI.
arXiv Detail & Related papers (2025-09-24T11:37:48Z) - Explain and Monitor Deep Learning Models for Computer Vision using Obz AI [2.406359246841227]
Obz AI is a comprehensive software ecosystem designed to facilitate state-of-the-art explainability and observability for vision AI systems. Obz AI provides a seamless integration pipeline, from a Python client library to a full-stack analytics dashboard.
arXiv Detail & Related papers (2025-08-25T16:46:21Z) - Composable Building Blocks for Controllable and Transparent Interactive AI Systems [0.8192907805418583]
The black-box problem of AI models continues to spread throughout the interactive system as a whole. XAI techniques can make AI models more accessible by employing post-hoc methods or transitioning to inherently interpretable models. We propose an approach to represent interactive systems as sequences of structural building blocks.
arXiv Detail & Related papers (2025-06-02T21:10:51Z) - AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness. The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z) - XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach [2.0209172586699173]
This paper introduces a novel XAI-integrated Visual Quality Inspection framework.
Our framework incorporates XAI and the Large Vision Language Model to deliver human-centered interpretability.
This approach paves the way for the broader adoption of reliable and interpretable AI tools in critical industrial applications.
arXiv Detail & Related papers (2024-07-16T14:30:24Z) - Towards a general framework for improving the performance of classifiers using XAI methods [0.0]
This paper proposes a framework for automatically improving the performance of pre-trained Deep Learning (DL) classifiers using XAI methods.
We present two variants, which we call auto-encoder-based and encoder-decoder-based, and discuss their key aspects.
arXiv Detail & Related papers (2024-03-15T15:04:20Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Explainable Artificial Intelligence (XAI): An Engineering Perspective [0.0]
XAI is a set of techniques and methods for converting so-called black-box AI algorithms into white-box algorithms.
We discuss the stakeholders in XAI and describe the mathematical contours of XAI from an engineering perspective.
This work is an exploratory study to identify new avenues of research in the field of XAI.
arXiv Detail & Related papers (2021-01-10T19:49:12Z) - Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system other than the one for which it was originally purposed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.