IXAII: An Interactive Explainable Artificial Intelligence Interface for Decision Support Systems
- URL: http://arxiv.org/abs/2506.21310v1
- Date: Thu, 26 Jun 2025 14:28:13 GMT
- Title: IXAII: An Interactive Explainable Artificial Intelligence Interface for Decision Support Systems
- Authors: Pauline Speckmann, Mario Nadj, Christian Janiesch
- Abstract summary: IXAII offers explanations from four explainable AI methods: LIME, SHAP, Anchors, and DiCE. Our prototype provides tailored views for five user groups and gives users agency over the explanations' content and format.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Although several post-hoc methods for explainable AI have been developed, most are static and neglect the user perspective, limiting their effectiveness for the target audience. In response, we developed the interactive explainable intelligent system called IXAII that offers explanations from four explainable AI methods: LIME, SHAP, Anchors, and DiCE. Our prototype provides tailored views for five user groups and gives users agency over the explanations' content and their format. We evaluated IXAII through interviews with experts and lay users. Our results indicate that IXAII, which provides different explanations with multiple visualization options, is perceived as helpful to increase transparency. By bridging the gaps between explainable AI methods, interactivity, and practical implementation, we provide a novel perspective on AI explanation practices and human-AI interaction.
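Of the four methods IXAII integrates, LIME is the most illustrative of the post-hoc, model-agnostic idea: perturb an instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients act as local feature attributions. The sketch below is a minimal, self-contained illustration of that technique in plain NumPy, not IXAII's actual implementation (the paper does not publish code); the function name, kernel width, and toy black-box model are all assumptions made for the example.

```python
import numpy as np

def lime_style_attribution(predict_fn, instance, n_samples=500, width=0.75, seed=0):
    """Approximate a black-box model locally with a weighted linear surrogate.

    Perturbs `instance` with Gaussian noise, weights each perturbation by
    proximity to the instance, and fits a linear model by weighted least
    squares; its coefficients serve as per-feature attributions (LIME-style).
    """
    rng = np.random.default_rng(seed)
    X = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    y = predict_fn(X)
    # Proximity kernel: perturbations near the instance get more weight.
    d = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(d ** 2) / (width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]  # drop the intercept, keep per-feature attributions

# Hypothetical black-box: feature 0 dominates, feature 2 is ignored.
black_box = lambda X: 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2]
attributions = lime_style_attribution(black_box, np.array([1.0, 2.0, 0.5]))
```

Because the toy black box is itself linear, the surrogate recovers its coefficients almost exactly; on a nonlinear model the attributions would instead describe the local slope around the chosen instance, which is what makes the explanation instance-specific.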
Related papers
- Measuring User Understanding in Dialogue-based XAI Systems [2.4124106640519667]
The state of the art in XAI is still characterized by one-shot, non-personalized, and one-way explanations.
In this paper, we measure understanding of users in three phases by asking them to simulate the predictions of the model they are learning about.
We analyze the data to reveal patterns in how interaction differs between groups with high vs. low understanding gain.
arXiv Detail & Related papers (2024-08-13T15:17:03Z) - Investigating the Role of Explainability and AI Literacy in User Compliance [2.8623940003518156]
We find that users' compliance increases with the introduction of XAI but is also affected by AI literacy.
We also find that the relationships between AI literacy, XAI, and users' compliance are mediated by the users' mental model of AI.
arXiv Detail & Related papers (2024-06-18T14:28:12Z) - How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z) - XAI for All: Can Large Language Models Simplify Explainable AI? [0.0699049312989311]
"x-[plAIn]" is a new approach to make XAI more accessible to a wider audience through a custom Large Language Model.
Our goal was to design a model that can generate clear, concise summaries of various XAI methods.
Results from our use-case studies show that our model is effective in providing easy-to-understand, audience-specific explanations.
arXiv Detail & Related papers (2024-01-23T21:47:12Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - "Help Me Help the AI": Understanding How Explainability Can Support
Human-AI Interaction [22.00514030715286]
We conducted a study of a real-world AI application via interviews with 20 end-users of Merlin, a bird-identification app.
We found that people express a need for practically useful information that can improve their collaboration with the AI system.
We also assessed end-users' perceptions of existing XAI approaches, finding that they prefer part-based explanations.
arXiv Detail & Related papers (2022-10-02T20:17:11Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Explainable Artificial Intelligence (XAI) for Increasing User Trust in
Deep Reinforcement Learning Driven Autonomous Systems [0.8701566919381223]
We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation.
We created a user-interface for our XAI framework and evaluated its efficacy via a human-user experiment.
arXiv Detail & Related papers (2021-06-07T16:38:43Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.