Tell me more: Intent Fulfilment Framework for Enhancing User Experiences in Conversational XAI
- URL: http://arxiv.org/abs/2405.10446v1
- Date: Thu, 16 May 2024 21:13:43 GMT
- Title: Tell me more: Intent Fulfilment Framework for Enhancing User Experiences in Conversational XAI
- Authors: Anjana Wijekoon, David Corsar, Nirmalie Wiratunga, Kyle Martin, Pedram Salimi
- Abstract summary: This paper explores how different types of explanations collaboratively meet users' XAI needs.
We introduce the Intent Fulfilment Framework (IFF) for creating explanation experiences.
The Explanation Experience Dialogue Model integrates the IFF and "Explanation Followups" to provide users with a conversational interface.
- Score: 0.6333053895057925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The evolution of Explainable Artificial Intelligence (XAI) has emphasised the significance of meeting diverse user needs. The approaches to identifying and addressing these needs must also advance, recognising that explanation experiences are subjective, user-centred processes that interact with users towards a better understanding of AI decision-making. This paper delves into the interrelations in multi-faceted XAI and examines how different types of explanations collaboratively meet users' XAI needs. We introduce the Intent Fulfilment Framework (IFF) for creating explanation experiences. The novelty of this paper lies in recognising the importance of "follow-up" on explanations for obtaining clarity, verification and/or substitution. Moreover, the Explanation Experience Dialogue Model integrates the IFF and "Explanation Followups" to provide users with a conversational interface for exploring their explanation needs, thereby creating explanation experiences. Quantitative and qualitative findings from our comparative user study demonstrate the impact of the IFF in improving user engagement, the utility of the AI system and the overall user experience. Overall, we reinforce the principle that "one explanation does not fit all" to create explanation experiences that guide the complex interaction through conversation.
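To make the described interaction concrete, here is a minimal Python sketch of an intent-driven explanation dialogue that offers the follow-ups named in the abstract (clarification, verification, substitution). The intent names and the explainer mapping are illustrative assumptions, not the authors' IFF taxonomy.
```python
# Illustrative sketch of an intent-driven explanation dialogue loop.
# The intent taxonomy and explainer mapping are hypothetical, not the
# IFF taxonomy from the paper; only the follow-up types come from the abstract.

FOLLOW_UPS = ("clarification", "verification", "substitution")

# Hypothetical mapping from user intents to explanation strategies.
EXPLAINERS = {
    "why": lambda q: f"Feature-importance explanation for: {q}",
    "why-not": lambda q: f"Counterfactual explanation for: {q}",
    "how-to": lambda q: f"Actionable recourse explanation for: {q}",
}

def explanation_experience(intent: str, query: str) -> list[str]:
    """Fulfil one intent, then offer follow-ups instead of ending the dialogue."""
    explainer = EXPLAINERS.get(intent)
    if explainer is None:
        return [f"Unsupported intent: {intent!r}"]
    transcript = [explainer(query)]
    # "Explanation Followups": let the user refine rather than restart.
    for follow_up in FOLLOW_UPS:
        transcript.append(f"Offer follow-up: {follow_up}?")
    return transcript

if __name__ == "__main__":
    for line in explanation_experience("why-not", "loan application rejected"):
        print(line)
```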
Related papers
- Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z)
- Measuring User Understanding in Dialogue-based XAI Systems [2.4124106640519667]
State-of-the-art in XAI is still characterized by one-shot, non-personalized and one-way explanations.
In this paper, we measure understanding of users in three phases by asking them to simulate the predictions of the model they are learning about.
We analyze the data to reveal how interaction patterns differ between groups with high vs. low understanding gain.
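A minimal sketch of how such a simulation-based measure could be scored, assuming understanding is operationalised as the accuracy of a participant's simulations of the model and gain as its change across two phases (the scoring scheme is an assumption, not the paper's protocol):
```python
# Hypothetical scoring sketch: "understanding" as the proportion of model
# predictions a participant simulates correctly; "gain" as the change
# between two study phases. Median split into high- vs. low-gain groups.
from statistics import median

def simulation_accuracy(guesses, model_preds):
    """Fraction of instances where the participant's guess matches the model."""
    return sum(g == p for g, p in zip(guesses, model_preds)) / len(model_preds)

def split_by_gain(per_participant):
    """per_participant: {participant_id: (accuracy_before, accuracy_after)}."""
    gains = {pid: after - before for pid, (before, after) in per_participant.items()}
    cut = median(gains.values())
    high = [pid for pid, g in gains.items() if g >= cut]
    low = [pid for pid, g in gains.items() if g < cut]
    return high, low
```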
arXiv Detail & Related papers (2024-08-13T15:17:03Z)
- Conceptualizing the Relationship between AI Explanations and User Agency [0.9051087836811617]
We analyze the relationship between agency and explanations through a user-centric lens, drawing on case studies and thought experiments.
We find that explanation serves as one of several possible first steps towards agency, allowing the user to convert forethought into outcomes more effectively in future interactions.
arXiv Detail & Related papers (2023-12-05T23:56:05Z)
- May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability [17.052366688978935]
We investigate if free-form conversations can enhance users' comprehension of static explanations.
We measure the effect of the conversation on participants' ability to choose from three machine learning models.
Our findings highlight the importance of customized model explanations in the format of free-form conversations.
arXiv Detail & Related papers (2023-09-25T09:00:38Z)
- Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System [0.5937476291232802]
In this paper, we adopt a user-centered, interactive explanation model that provides explanations at different levels of detail and empowers users to interact with, control, and personalize the explanations based on their needs and preferences.
We conducted a qualitative user study to investigate how providing interactive explanations with varying levels of detail affects users' perception of the explainable RS.
arXiv Detail & Related papers (2023-06-09T10:48:04Z)
- Behaviour Trees for Conversational Explanation Experiences [1.5257668132713955]
This paper focuses on how users interact with an XAI system to fulfil the multiple explanation needs addressed by an explanation strategy.
We model the interactive explanation experience as a dialogue using Behaviour Trees (BTs).
An evaluation with a real-world use case shows that BTs have a number of properties that lend themselves naturally to modelling and capturing explanation experiences.
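A minimal behaviour-tree sketch of such a dialogue, where Sequence succeeds only if all children succeed and Fallback tries children until one succeeds (the node composition and leaf behaviours are hypothetical, not the authors' dialogue model):
```python
# Minimal behaviour-tree sketch (hypothetical node names and leaves,
# not the authors' model). Nodes read and write a shared blackboard dict.
from typing import Callable

Status = str  # "SUCCESS" or "FAILURE"

def sequence(*children: Callable[[dict], Status]) -> Callable[[dict], Status]:
    def tick(bb: dict) -> Status:
        for child in children:
            if child(bb) == "FAILURE":
                return "FAILURE"
        return "SUCCESS"
    return tick

def fallback(*children: Callable[[dict], Status]) -> Callable[[dict], Status]:
    def tick(bb: dict) -> Status:
        for child in children:
            if child(bb) == "SUCCESS":
                return "SUCCESS"
        return "FAILURE"
    return tick

def clarify_need(bb):      # leaf: elicit the user's explanation need
    bb["need"] = bb.get("need", "why")
    return "SUCCESS"

def show_explanation(bb):  # leaf: present an explanation for that need
    bb["shown"] = f"explanation for {bb['need']}"
    return "SUCCESS"

def confirm_satisfied(bb): # leaf: ask whether the need was met
    return "SUCCESS" if bb.get("satisfied") else "FAILURE"

experience = sequence(clarify_need, show_explanation,
                      fallback(confirm_satisfied, clarify_need))
print(experience({"satisfied": True}))  # -> SUCCESS
```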
arXiv Detail & Related papers (2022-11-11T18:39:38Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted towards more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue Systems [109.16553492049441]
We propose a novel method to incorporate the knowledge reasoning capability into dialogue systems in a more scalable and generalizable manner.
To the best of our knowledge, this is the first work to have transformer models generate responses by reasoning over differentiable knowledge graphs.
arXiv Detail & Related papers (2022-03-20T17:51:49Z)
- Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
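A schematic PyTorch sketch of fusing the three feature views named above (context-, syntax- and knowledge-based); the dimensions, layer choices, and fusion by concatenation are illustrative assumptions, not the published KGAN architecture:
```python
# Schematic three-view fusion in the spirit of KGAN (hypothetical
# dimensions and layers, not the published architecture).
import torch
import torch.nn as nn

class ThreeViewFusion(nn.Module):
    def __init__(self, dim: int = 128, num_classes: int = 3):
        super().__init__()
        self.context = nn.Linear(dim, dim)    # context-based view
        self.syntax = nn.Linear(dim, dim)     # syntax-based view
        self.knowledge = nn.Linear(dim, dim)  # knowledge-based view (KG features)
        self.classifier = nn.Linear(3 * dim, num_classes)

    def forward(self, ctx, syn, kg):
        # Encode each view separately, then fuse by concatenation.
        views = [torch.relu(f(x)) for f, x in
                 ((self.context, ctx), (self.syntax, syn), (self.knowledge, kg))]
        return self.classifier(torch.cat(views, dim=-1))

model = ThreeViewFusion()
x = torch.randn(4, 128)
logits = model(x, x, x)  # batch of 4, three identical dummy views
print(logits.shape)      # torch.Size([4, 3])
```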
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve the user experience and uncover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.