iSee: Advancing Multi-Shot Explainable AI Using Case-based Recommendations
- URL: http://arxiv.org/abs/2408.12941v1
- Date: Fri, 23 Aug 2024 09:44:57 GMT
- Title: iSee: Advancing Multi-Shot Explainable AI Using Case-based Recommendations
- Authors: Anjana Wijekoon, Nirmalie Wiratunga, David Corsar, Kyle Martin, Ikechukwu Nkisi-Orji, Chamath Palihawadana, Marta Caro-Martínez, Belen Díaz-Agudo, Derek Bridge, Anne Liret,
- Abstract summary: The iSee platform is designed for the intelligent sharing and reuse of explanation experiences.
Case-based Reasoning is used to advance best practices in XAI.
All knowledge generated within the iSee platform is formalised by the iSee ontology for interoperability.
- Score: 0.6774524960721717
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Recent findings suggest that a single explainer may not meet the diverse needs of multiple users in an AI system; indeed, even individual users may require multiple explanations. This highlights the necessity for a "multi-shot" approach, employing a combination of explainers to form what we introduce as an "explanation strategy". Tailored to a specific user or user group, an "explanation experience" describes interactions with personalised strategies designed to enhance their AI decision-making processes. The iSee platform is designed for the intelligent sharing and reuse of explanation experiences, using Case-based Reasoning to advance best practices in XAI. The platform provides tools that enable AI system designers, i.e. design users, to design and iteratively revise the most suitable explanation strategy for their AI system to satisfy end-user needs. All knowledge generated within the iSee platform is formalised by the iSee ontology for interoperability. We use a summative mixed-methods study protocol to evaluate the usability and utility of the iSee platform with six design users across varying levels of AI and XAI expertise. Our findings confirm that the iSee platform generalises effectively across applications and has the potential to promote the adoption of XAI best practices.
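The abstract describes a case-based retrieve-and-reuse cycle over stored explanation experiences, but gives no implementation detail. The following is a minimal sketch of that cycle, assuming a simple weighted attribute-match similarity; the ExplanationCase structure, its attributes, and the weights are illustrative assumptions, not the iSee platform's API or ontology.

```python
# Minimal sketch of a CBR cycle for recommending an explanation strategy.
# All names and attributes are illustrative assumptions; the actual iSee
# platform, ontology, and API may differ substantially.
from dataclasses import dataclass, field

@dataclass
class ExplanationCase:
    """One stored explanation experience: the usage context plus the
    multi-shot strategy (ordered explainers) that worked in it."""
    domain: str                      # e.g. "loan-approval"
    ai_task: str                     # e.g. "classification"
    user_group: str                  # e.g. "novice", "domain-expert"
    strategy: list = field(default_factory=list)  # ordered explainer names
    outcome_score: float = 0.0       # how well end-users rated the experience

def similarity(query: ExplanationCase, case: ExplanationCase) -> float:
    """Toy weighted attribute-match similarity (assumed, not iSee's)."""
    weights = {"domain": 0.4, "ai_task": 0.4, "user_group": 0.2}
    return sum(w for attr, w in weights.items()
               if getattr(query, attr) == getattr(case, attr))

def recommend_strategy(query: ExplanationCase,
                       case_base: list) -> list:
    """Retrieve the most similar, best-rated past experience and
    reuse its explanation strategy for the new context."""
    best = max(case_base,
               key=lambda c: (similarity(query, c), c.outcome_score))
    return best.strategy

case_base = [
    ExplanationCase("loan-approval", "classification", "novice",
                    ["counterfactual", "feature-importance"], 0.8),
    ExplanationCase("radiology", "classification", "domain-expert",
                    ["saliency-map", "example-based"], 0.9),
]
query = ExplanationCase("loan-approval", "classification", "novice")
print(recommend_strategy(query, case_base))
# -> ['counterfactual', 'feature-importance']
```

A full CBR cycle would also revise a reused strategy in light of end-user feedback and retain the revised experience in the case base; that revise-and-retain loop corresponds to the iterative refinement the platform offers design users.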
Related papers
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z)
- The AI-DEC: A Card-based Design Method for User-centered AI Explanations [20.658833770179903]
We develop a design method, called AI-DEC, that defines four dimensions of AI explanations.
We evaluate this method through co-design sessions with workers in healthcare, finance, and management industries.
We discuss the implications of using the AI-DEC for the user-centered design of AI explanations in real-world systems.
arXiv Detail & Related papers (2024-05-26T22:18:38Z)
- Human-AI Interaction in Industrial Robotics: Design and Empirical Evaluation of a User Interface for Explainable AI-Based Robot Program Optimization [5.537321488131869]
We present an Explanation User Interface (XUI) for a state-of-the-art deep learning-based robot program optimization approach.
XUI provides both naive and expert users with different user experiences depending on their skill level.
arXiv Detail & Related papers (2024-04-30T08:20:31Z)
- How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z)
- XAI for All: Can Large Language Models Simplify Explainable AI? [0.0699049312989311]
"x-[plAIn]" is a new approach to make XAI more accessible to a wider audience through a custom Large Language Model.
Our goal was to design a model that can generate clear, concise summaries of various XAI methods.
Results from our use-case studies show that our model is effective in providing easy-to-understand, audience-specific explanations.
arXiv Detail & Related papers (2024-01-23T21:47:12Z)
- I-CEE: Tailoring Explanations of Image Classification Models to User Expertise [13.293968260458962]
We present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise.
I-CEE models the informativeness of the example images to depend on user expertise, resulting in different examples for different users.
Experiments with simulated users show that I-CEE improves users' ability to accurately predict the model's decisions.
arXiv Detail & Related papers (2023-12-19T12:26:57Z)
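The I-CEE entry above turns on one mechanism: scoring candidate example images by how informative they would be for a particular user, so that users with different expertise receive different examples. The sketch below is a rough illustration of that idea only, not the paper's formulation; the per-class "familiarity" expertise model, the informativeness score, and all names are invented stand-ins.

```python
# Rough illustration of expertise-dependent example selection in the
# spirit of I-CEE. The expertise model and scoring are invented stand-ins;
# the paper's actual formulation is not reproduced here.
from typing import NamedTuple

class Example(NamedTuple):
    image_id: str
    model_label: str  # label the classifier assigns to this image

def informativeness(ex: Example, expertise: dict) -> float:
    """Score an example by how surprising the model's decision is expected
    to be for the (simulated) user: low familiarity => high learning gain."""
    familiarity = expertise.get(ex.model_label, 0.0)  # P(user predicts it)
    return 1.0 - familiarity

def select_examples(candidates: list, expertise: dict, k: int = 2) -> list:
    """Pick the k examples the user is expected to learn most from,
    so different users receive different example sets."""
    return sorted(candidates,
                  key=lambda ex: informativeness(ex, expertise),
                  reverse=True)[:k]

candidates = [Example("img1", "husky"), Example("img2", "wolf"),
              Example("img3", "tabby-cat")]
novice = {"husky": 0.2, "wolf": 0.1, "tabby-cat": 0.9}
expert = {"husky": 0.95, "wolf": 0.9, "tabby-cat": 0.3}
print(select_examples(candidates, novice))  # wolf and husky examples
print(select_examples(candidates, expert))  # tabby-cat example instead
```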
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- User-Oriented Smart General AI System under Causal Inference [0.0]
A general AI system solves a wide range of tasks with high performance in an automated fashion.
The best general AI algorithm designed by one individual is different from that devised by another.
Tacit knowledge depends upon user-specific comprehension of task information and individual model design preferences.
arXiv Detail & Related papers (2021-03-25T08:34:35Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)