"Mama Always Had a Way of Explaining Things So I Could Understand'': A
Dialogue Corpus for Learning to Construct Explanations
- URL: http://arxiv.org/abs/2209.02508v1
- Date: Tue, 6 Sep 2022 14:00:22 GMT
- Title: "Mama Always Had a Way of Explaining Things So I Could Understand'': A
Dialogue Corpus for Learning to Construct Explanations
- Authors: Henning Wachsmuth, Milad Alshomary
- Abstract summary: We introduce a first corpus of dialogical explanations to enable NLP research on how humans explain.
The corpus consists of 65 transcribed English dialogues from the Wired video series "5 Levels", explaining 13 topics to five explainees of different proficiency.
- Score: 26.540485804067536
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As AI is more and more pervasive in everyday life, humans have an increasing
demand to understand its behavior and decisions. Most research on explainable
AI builds on the premise that there is one ideal explanation to be found. In
fact, however, everyday explanations are co-constructed in a dialogue between
the person explaining (the explainer) and the specific person being explained
to (the explainee). In this paper, we introduce a first corpus of dialogical
explanations to enable NLP research on how humans explain as well as on how AI
can learn to imitate this process. The corpus consists of 65 transcribed
English dialogues from the Wired video series "5 Levels", explaining 13
topics to five explainees of different proficiency. All 1550 dialogue turns
have been manually labeled by five independent professionals for the topic
discussed as well as for the dialogue act and the explanation move performed.
We analyze linguistic patterns of explainers and explainees, and we explore
differences across proficiency levels. BERT-based baseline results indicate
that sequence information helps to predict topics, acts, and moves effectively.
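To make the annotation and baseline setup more concrete, below is a minimal sketch of how a single dialogue turn with its three labels (topic, dialogue act, explanation move) could be represented and passed to a BERT-based turn classifier. It assumes the Hugging Face transformers library; the label inventory, the example turn, and the use of the previous turn as context are illustrative assumptions, not the paper's actual annotation scheme or baseline configuration.

```python
# Minimal sketch of a BERT-based dialogue-turn classifier in the spirit of the
# paper's baselines. The corpus fields (topic, dialogue act, explanation move
# per turn) follow the abstract; the concrete label names and the toy example
# below are placeholders, not the paper's annotation scheme.
from dataclasses import dataclass

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative dialogue-act labels (placeholder inventory).
DIALOGUE_ACTS = ["question", "answer", "check_understanding", "other"]


@dataclass
class DialogueTurn:
    speaker: str  # "explainer" or "explainee"
    text: str     # transcribed utterance
    topic: str    # one of the 13 explained topics
    act: str      # dialogue act label
    move: str     # explanation move label


def predict_act(turn: DialogueTurn, prev_text: str = "") -> str:
    """Predict a dialogue act for one turn, optionally conditioning on the
    previous turn as a crude way to use sequence information."""
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=len(DIALOGUE_ACTS)
    )  # untrained classification head; in practice, fine-tuned on the corpus
    inputs = tokenizer(prev_text, turn.text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return DIALOGUE_ACTS[int(logits.argmax(dim=-1))]


if __name__ == "__main__":
    turn = DialogueTurn(
        speaker="explainer",
        text="Think of a black hole as a drain that even light cannot escape.",
        topic="black holes",  # placeholder topic
        act="answer",
        move="analogy",       # placeholder move label
    )
    print(predict_act(turn, prev_text="What is a black hole?"))
```

In practice, the classification head would be fine-tuned on the 1550 labeled turns, and the reported benefit of sequence information suggests conditioning each prediction on surrounding turns, which the pairwise encoding above only approximates.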
Related papers
- "Is ChatGPT a Better Explainer than My Professor?": Evaluating the Explanation Capabilities of LLMs in Conversation Compared to a Human Baseline [23.81489190082685]
Explanations form the foundation of knowledge sharing and build upon communication principles, social dynamics, and learning theories.
Our research leverages previous work on explanatory acts, a framework for understanding the different strategies that explainers and explainees employ in a conversation to explain, understand, and engage with the other party.
With the rise of generative AI in the past year, we hope to better understand the capabilities of Large Language Models (LLMs) and how they can augment an expert explainer's capabilities in conversational settings.
arXiv Detail & Related papers (2024-06-26T17:33:51Z) - Modeling the Quality of Dialogical Explanations [21.429245729478918]
We study explanation dialogues in terms of the interactions between the explainer and explainee.
We analyze the interaction flows, comparing them to those appearing in expert dialogues.
We encode the interaction flows using two language models that can handle long inputs.
arXiv Detail & Related papers (2024-03-01T16:49:55Z) - May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability [17.052366688978935]
We investigate if free-form conversations can enhance users' comprehension of static explanations.
We measure the effect of the conversation on participants' ability to choose from three machine learning models.
Our findings highlight the importance of customized model explanations in the format of free-form conversations.
arXiv Detail & Related papers (2023-09-25T09:00:38Z) - Providing personalized Explanations: a Conversational Approach [0.5156484100374058]
We propose an approach for an explainer to communicate personalized explanations to an explainee through having consecutive conversations with the explainee.
We prove that the conversation terminates with the explainee justifying the initial claim, provided there exists an explanation of the initial claim that the explainee understands and that the explainer is aware of.
arXiv Detail & Related papers (2023-07-21T09:34:41Z) - Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often mis-interpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups, people with and without an AI background, perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - On the Diversity and Limits of Human Explanations [11.44224857047629]
A growing effort in NLP aims to build datasets of human explanations.
Our goal is to provide an overview of diverse types of explanations and human limitations.
arXiv Detail & Related papers (2021-06-22T18:00:07Z) - Paragraph-level Commonsense Transformers with Recurrent Memory [77.4133779538797]
We train a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives.
Our results show that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.
arXiv Detail & Related papers (2020-10-04T05:24:12Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)