Modeling the Quality of Dialogical Explanations
- URL: http://arxiv.org/abs/2403.00662v1
- Date: Fri, 1 Mar 2024 16:49:55 GMT
- Title: Modeling the Quality of Dialogical Explanations
- Authors: Milad Alshomary, Felix Lange, Meisam Booshehri, Meghdut Sengupta,
Philipp Cimiano, Henning Wachsmuth
- Abstract summary: We study explanation dialogues in terms of the interactions between the explainer and explainee.
We analyze the interaction flows, comparing them to those appearing in expert dialogues.
We encode the interaction flows using two language models that can handle long inputs.
- Score: 21.429245729478918
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explanations are pervasive in our lives. Mostly, they occur in dialogical
form where an *explainer* discusses a concept or phenomenon of interest
with an *explainee*. Leaving the explainee with a clear understanding is
not straightforward due to the knowledge gap between the two participants.
Previous research looked at the interaction of explanation moves, dialogue
acts, and topics in successful dialogues with expert explainers. However,
daily-life explanations often fail, raising the question of what makes a
dialogue successful. In this work, we study explanation dialogues in terms of
the interactions between the explainer and explainee, and how these
interactions correlate with explanation quality, that is, with whether the
explainee reaches a successful understanding. In particular, we first
construct a corpus of 399 dialogues from the Reddit forum *Explain Like I am
Five* and annotate it for
interaction flows and explanation quality. We then analyze the interaction
flows, comparing them to those appearing in expert dialogues. Finally, we
encode the interaction flows using two language models that can handle long
inputs, and we provide empirical evidence that this encoding boosts
effectiveness in predicting the success of explanation dialogues.
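To make the modeling step concrete, here is a minimal sketch of how an annotated interaction flow could be fed to a long-input language model for success prediction. The abstract does not name the two models used, so the Longformer checkpoint, the interaction-flow label set, and the turn-joining scheme below are all assumptions for illustration, not the authors' actual setup:

```python
# A minimal sketch, assuming Longformer as one long-input model; the
# interaction-flow labels here are invented for illustration.
from transformers import LongformerTokenizer, LongformerForSequenceClassification
import torch

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096",
    num_labels=2,  # successful vs. unsuccessful explanation dialogue
)
# Note: the classification head is randomly initialized here; in practice
# it would be fine-tuned on the annotated dialogue corpus.

# Toy dialogue: each turn paired with a hypothetical interaction-flow label.
dialogue = [
    ("request_explanation", "Why is the sky blue?"),
    ("provide_explanation", "Sunlight scatters off air molecules; short blue wavelengths scatter most."),
    ("signal_understanding", "Oh, so shorter wavelengths scatter more. Got it!"),
]

# Serialize the flow as one long sequence: label tags interleaved with turns.
text = " </s> ".join(f"[{label}] {utterance}" for label, utterance in dialogue)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

with torch.no_grad():
    logits = model(**inputs).logits
print("P(successful) =", torch.softmax(logits, dim=-1)[0, 1].item())
```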
Related papers
- "Is ChatGPT a Better Explainer than My Professor?": Evaluating the Explanation Capabilities of LLMs in Conversation Compared to a Human Baseline [23.81489190082685]
Explanations form the foundation of knowledge sharing and build upon communication principles, social dynamics, and learning theories.
Our research leverages previous work on explanatory acts, a framework for understanding the different strategies that explainers and explainees employ in a conversation to explain, understand, and engage with the other party.
With the rise of generative AI in the past year, we hope to better understand the capabilities of Large Language Models (LLMs) and how they can augment expert explainers in conversational settings.
arXiv Detail & Related papers (2024-06-26T17:33:51Z)
- May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability [17.052366688978935]
We investigate if free-form conversations can enhance users' comprehension of static explanations.
We measure the effect of the conversation on participants' ability to choose from three machine learning models.
Our findings highlight the importance of customized model explanations in the form of free-form conversations.
arXiv Detail & Related papers (2023-09-25T09:00:38Z)
- Providing personalized Explanations: a Conversational Approach [0.5156484100374058]
We propose an approach in which an explainer communicates personalized explanations to an explainee through consecutive conversations.
We prove that the conversation terminates with the explainee's justification of the initial claim, provided there exists an explanation for the initial claim that the explainee understands and the explainer is aware of.
arXiv Detail & Related papers (2023-07-21T09:34:41Z)
- "Mama Always Had a Way of Explaining Things So I Could Understand": A Dialogue Corpus for Learning to Construct Explanations [26.540485804067536]
We introduce a first corpus of dialogical explanations to enable NLP research on how humans explain.
The corpus consists of 65 transcribed English dialogues from the Wired video series *5 Levels*, explaining 13 topics to five explainees of different proficiency.
arXiv Detail & Related papers (2022-09-06T14:00:22Z)
- Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue Systems [109.16553492049441]
We propose a novel method to incorporate the knowledge reasoning capability into dialogue systems in a more scalable and generalizable manner.
To the best of our knowledge, this is the first work to have transformer models generate responses by reasoning over differentiable knowledge graphs.
arXiv Detail & Related papers (2022-03-20T17:51:49Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
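As a rough, self-contained illustration of the adjustment idea (the power-law perception curve below is a placeholder assumption, not the paper's fitted estimates of over- and under-perception):

```python
import numpy as np

# A minimal sketch, assuming a power-law perception curve; the paper's
# actual over-/under-perception model is not reproduced here.
def adjust_saliency(raw: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """Rescale raw saliencies so displayed values match assumed perception.

    gamma < 1 boosts mid-range saliencies people tend to under-read;
    gamma > 1 would dampen values people tend to over-read.
    """
    normalized = raw / raw.max()   # scale into [0, 1]
    return normalized ** gamma     # invert the assumed perception curve

tokens = ["the", "movie", "was", "surprisingly", "good"]
raw = np.array([0.05, 0.40, 0.10, 0.90, 0.75])
print(dict(zip(tokens, adjust_saliency(raw).round(2))))
```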
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Who Responded to Whom: The Joint Effects of Latent Topics and Discourse in Conversation Structure [53.77234444565652]
We identify the responding relations in the conversation discourse, which link response utterances to their initiations.
We propose a model to learn latent topics and discourse in word distributions, and predict pairwise initiation-response links.
Experimental results on both English and Chinese conversations show that our model significantly outperforms the previous state of the art.
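A toy sketch of the pairwise link-prediction step, with random placeholder vectors standing in for the learned topic and discourse representations (this illustrates only the general pairing idea, not the paper's actual model):

```python
import numpy as np

# Hypothetical sketch: score each (earlier, later) utterance pair by the
# similarity of combined topic + discourse vectors, then link each
# response to its best-scoring initiation.
rng = np.random.default_rng(0)
utterances = ["Anyone tried the new API?", "Yes, it's much faster.",
              "What about the docs?", "The docs are still sparse."]
topic = rng.normal(size=(4, 16))      # placeholder topic vectors
discourse = rng.normal(size=(4, 8))   # placeholder discourse vectors
reps = np.concatenate([topic, discourse], axis=1)
reps /= np.linalg.norm(reps, axis=1, keepdims=True)  # unit-normalize

for j in range(1, len(utterances)):   # each potential response
    scores = reps[:j] @ reps[j]       # cosine scores vs. earlier turns
    i = int(np.argmax(scores))
    print(f"response {j} -> initiation {i} (score {scores[i]:.2f})")
```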
arXiv Detail & Related papers (2021-04-17T17:46:00Z)
- Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues [73.04906599884868]
We propose a novel framework of Reasoning Paths in Dialogue Context (PDC).
The PDC model discovers information flows among dialogue turns through a semantic graph constructed based on lexical components in each question and answer.
Our model sequentially processes both visual and textual information through this reasoning path and the propagated features are used to generate the answer.
arXiv Detail & Related papers (2021-03-01T07:39:26Z)
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually by reasoning over dialogue turns with the help of back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.