Behaviour Trees for Conversational Explanation Experiences
- URL: http://arxiv.org/abs/2211.06402v1
- Date: Fri, 11 Nov 2022 18:39:38 GMT
- Title: Behaviour Trees for Conversational Explanation Experiences
- Authors: Anjana Wijekoon and David Corsar and Nirmalie Wiratunga
- Abstract summary: This paper focuses on how users interact with an XAI system to fulfil multiple explanation needs, which are satisfied by an explanation strategy.
We model the interactive explanation experience as a dialogue model.
An evaluation with a real-world use case shows that BTs have a number of properties that lend themselves naturally to modelling and capturing explanation experiences.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI (XAI) has the potential to make a significant impact on
building trust and improving the satisfaction of users who interact with an AI
system for decision-making. There is an abundance of explanation techniques in
the literature to address this need. Recently, it has been shown that a user is
likely to have multiple explanation needs that should be addressed by a
constellation of explanation techniques which we refer to as an explanation
strategy. This paper focuses on how users interact with an XAI system to fulfil
these multiple explanation needs, which are satisfied by an explanation strategy.
For this purpose, the paper introduces the concept of an "explanation experience":
the episodes of user interactions captured by the XAI system when explaining the
decisions made by its AI system. In this paper, we explore how to enable and
capture explanation experiences through conversational interactions. We model
the interactive explanation experience as a dialogue model. Specifically,
Behaviour Trees (BT) are used to model conversational pathways and chatbot
behaviours. A BT dialogue model is easily personalised by dynamically extending
or modifying it to attend to different user needs and explanation strategies.
An evaluation with a real-world use case shows that BTs have a number of
properties that lend themselves naturally to modelling and capturing explanation
experiences, compared to traditionally used state transition models.
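To make the abstract's central idea concrete, the sketch below shows how composite Behaviour Tree nodes (Sequence, Fallback) can encode a conversational pathway whose leaves are chatbot behaviours. This is a minimal, hypothetical illustration, not the authors' dialogue model: the Status/Sequence/Fallback classes, the blackboard convention, and the concrete behaviours (elicit_need, factual_explainer, and so on) are assumptions made for exposition.

```python
# Minimal sketch of a Behaviour Tree (BT) dialogue model for explanation
# experiences. NOT the paper's implementation: node classes, tick semantics,
# and the dialogue steps below are illustrative assumptions only.
from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


class Node:
    def tick(self, blackboard: dict) -> Status:
        raise NotImplementedError


class Sequence(Node):
    """Ticks children in order; stops at the first child that does not succeed."""
    def __init__(self, *children: Node):
        self.children = children

    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS


class Fallback(Node):
    """Ticks children in order; stops at the first child that does not fail."""
    def __init__(self, *children: Node):
        self.children = children

    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.FAILURE:
                return status
        return Status.FAILURE


class Action(Node):
    """Leaf node wrapping a chatbot behaviour (ask, explain, confirm, ...)."""
    def __init__(self, name, behaviour):
        self.name, self.behaviour = name, behaviour

    def tick(self, blackboard):
        return self.behaviour(blackboard)


# Hypothetical explanation strategy: elicit the user's explanation need,
# try a factual explainer first, fall back to a counterfactual one, then
# check whether the user is satisfied.
def elicit_need(bb):
    bb["need"] = "why"  # e.g. the user asks a "why" question
    return Status.SUCCESS

def factual_explainer(bb):
    return Status.SUCCESS if bb.get("need") == "why" else Status.FAILURE

def counterfactual_explainer(bb):
    return Status.SUCCESS

def check_satisfaction(bb):
    return Status.SUCCESS

dialogue = Sequence(
    Action("elicit_need", elicit_need),
    Fallback(
        Action("factual", factual_explainer),
        Action("counterfactual", counterfactual_explainer),
    ),
    Action("check_satisfaction", check_satisfaction),
)

print(dialogue.tick({}))  # Status.SUCCESS
```

Under this reading, the personalisation the abstract describes amounts to inserting or replacing subtrees at runtime (for example, adding another explainer under the Fallback node) without rebuilding the rest of the tree, which is the property being contrasted with state transition models.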
Related papers
- Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting
We evaluate three types of explanations: visual explanations (saliency maps), natural language explanations, and a combination of both modalities.
We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps.
We also observe that the quality of explanations, that is, how much factually correct information they entail, and how much this aligns with AI correctness, significantly impacts the usefulness of the different explanation types.
arXiv Detail & Related papers (2024-10-16T06:43:02Z)
- Measuring User Understanding in Dialogue-based XAI Systems
State-of-the-art in XAI is still characterized by one-shot, non-personalized and one-way explanations.
In this paper, we measure understanding of users in three phases by asking them to simulate the predictions of the model they are learning about.
We analyze the data to reveal patterns in how interactions differ between groups with high vs. low understanding gain.
arXiv Detail & Related papers (2024-08-13T15:17:03Z)
- Tell me more: Intent Fulfilment Framework for Enhancing User Experiences in Conversational XAI
This paper explores how different types of explanations collaboratively meet users' XAI needs.
We introduce the Intent Fulfilment Framework (IFF) for creating explanation experiences.
The Explanation Experience Dialogue Model integrates the IFF and "Explanation Followups" to provide users with a conversational interface.
arXiv Detail & Related papers (2024-05-16T21:13:43Z)
- Evaluating the Utility of Model Explanations for Model Development
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability
We investigate if free-form conversations can enhance users' comprehension of static explanations.
We measure the effect of the conversation on participants' ability to choose from three machine learning models.
Our findings highlight the importance of customized model explanations in the format of free-form conversations.
arXiv Detail & Related papers (2023-09-25T09:00:38Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI)
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue Systems
We propose a novel method to incorporate the knowledge reasoning capability into dialogue systems in a more scalable and generalizable manner.
To the best of our knowledge, this is the first work to have transformer models generate responses by reasoning over differentiable knowledge graphs.
arXiv Detail & Related papers (2022-03-20T17:51:49Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Explanation as a process: user-centric construction of multi-level and multi-modal explanations
We present a process-based approach that combines multi-level and multi-modal explanations.
We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model.
arXiv Detail & Related papers (2021-10-07T19:26:21Z)
- Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues
We propose a novel framework of Reasoning Paths in Dialogue Context (PDC).
The PDC model discovers information flows among dialogue turns through a semantic graph constructed from the lexical components of each question and answer.
Our model sequentially processes both visual and textual information through this reasoning path and the propagated features are used to generate the answer.
arXiv Detail & Related papers (2021-03-01T07:39:26Z)
- A general framework for scientifically inspired explanations in AI
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)