Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
- URL: http://arxiv.org/abs/2504.18483v1
- Date: Fri, 25 Apr 2025 16:47:44 GMT
- Title: Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
- Authors: Leandra Fichtel, Maximilian Spliethöver, Eyke Hüllermeier, Patricia Jimenez, Nils Klowait, Stefan Kopp, Axel-Cyrille Ngonga Ngomo, Amelie Robrecht, Ingrid Scharlau, Lutz Terfloth, Anna-Lisa Vollmer, Henning Wachsmuth
- Abstract summary: We investigate the ability of large language models to engage as explainers in co-constructive explanation dialogues. Our results indicate some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic.
- Score: 23.97414363081048
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research has focused on co-constructive explanation dialogues, where the explainer continuously monitors the explainee's understanding and adapts explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with LLMs, of which some have been instructed to explain a predefined topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results indicate that current LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.
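The study setup sketched in the abstract (an LLM instructed to explain a fixed topic co-constructively, with the explainee's understanding assessed before and after the dialogue) can be pictured as a plain chat loop steered by a system prompt. The snippet below is a minimal illustrative sketch, not the paper's actual prompt or code; the `llm_call` helper and the prompt wording are assumptions.

```python
# Minimal sketch of a co-constructively prompted explainer loop
# (illustrative only; not the prompt or code used in the study).

SYSTEM_PROMPT = (
    "You explain the topic '{topic}' co-constructively: ask verification "
    "questions, monitor the explainee's current understanding, and adapt "
    "the depth and framing of your explanations accordingly."
)

def llm_call(messages):
    """Assumed placeholder for any chat-completion backend; returns the reply text."""
    return "Before we start: what do you already know about this topic?"

def explanation_dialogue(topic, n_turns=5):
    messages = [{"role": "system", "content": SYSTEM_PROMPT.format(topic=topic)}]
    for _ in range(n_turns):
        user_input = input("Explainee: ")
        messages.append({"role": "user", "content": user_input})
        reply = llm_call(messages)
        messages.append({"role": "assistant", "content": reply})
        print(f"Explainer: {reply}")
    return messages  # transcript, e.g. for pre/post understanding analysis
```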
Related papers
- Explainers' Mental Representations of Explainees' Needs in Everyday Explanations [0.0]
In explanations, explainers have mental representations of explainees' developing knowledge and shifting interests regarding the explanandum.
XAI should be able to react to explainees' needs in a similar manner.
This study investigated explainers' mental representations in everyday explanations of technological artifacts.
arXiv Detail & Related papers (2024-11-13T10:53:07Z) - Reasoning with Natural Language Explanations [15.281385727331473]
Explanation constitutes an archetypal feature of human rationality, underpinning learning and generalisation.
An increasing amount of research in Natural Language Inference (NLI) has started reconsidering the role that explanations play in learning and inference.
arXiv Detail & Related papers (2024-10-05T13:15:24Z) - "Is ChatGPT a Better Explainer than My Professor?": Evaluating the Explanation Capabilities of LLMs in Conversation Compared to a Human Baseline [23.81489190082685]
Explanations form the foundation of knowledge sharing and build upon communication principles, social dynamics, and learning theories.
Our research leverages previous work on explanatory acts, a framework for understanding the different strategies that explainers and explainees employ in a conversation to explain, understand, and engage with the other party.
With the rise of generative AI in the past year, we hope to better understand the capabilities of Large Language Models (LLMs) and how they can augment expert explainers' capabilities in conversational settings.
arXiv Detail & Related papers (2024-06-26T17:33:51Z) - An Incomplete Loop: Deductive, Inductive, and Abductive Learning in Large Language Models [99.31449616860291]
Modern language models (LMs) can learn to perform new tasks in different ways.
In instruction following, the target task is described explicitly in natural language; in few-shot prompting, the task is specified implicitly through examples.
In instruction inference, LMs are presented with in-context examples and are then prompted to generate a natural language task description.
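The three prompting modes mentioned in this summary differ only in how the task is conveyed to the model. A small, hypothetical illustration (the translation task and prompt wording are assumptions, not taken from the paper):

```python
# Illustrative prompt formats for the three modes described above
# (hypothetical task; wording is not from the paper).

# Instruction following: the task is stated explicitly in natural language.
instruction_prompt = "Translate the following word to French: cheese ->"

# Few-shot prompting: the task is conveyed implicitly through examples.
few_shot_prompt = (
    "dog -> chien\n"
    "house -> maison\n"
    "cheese ->"
)

# Instruction inference: the model sees examples and is asked to
# articulate the underlying task as a natural language description.
instruction_inference_prompt = (
    "dog -> chien\n"
    "house -> maison\n"
    "Describe, in one sentence, the rule that maps inputs to outputs:"
)
```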
arXiv Detail & Related papers (2024-04-03T19:31:56Z) - Modeling the Quality of Dialogical Explanations [21.429245729478918]
We study explanation dialogues in terms of the interactions between the explainer and explainee.
We analyze the interaction flows, comparing them to those appearing in expert dialogues.
We encode the interaction flows using two language models that can handle long inputs.
arXiv Detail & Related papers (2024-03-01T16:49:55Z) - FaithLM: Towards Faithful Explanations for Large Language Models [67.29893340289779]
Large Language Models (LLMs) have become proficient in addressing complex tasks by leveraging their internal knowledge and reasoning capabilities.
The black-box nature of these models complicates the task of explaining their decision-making processes.
We introduce FaithLM to explain the decisions of LLMs with natural language (NL) explanations.
arXiv Detail & Related papers (2024-02-07T09:09:14Z) - Providing personalized Explanations: a Conversational Approach [0.5156484100374058]
We propose an approach in which an explainer communicates personalized explanations to an explainee over a sequence of consecutive conversations.
We prove that the conversation terminates with the explainee justifying the initial claim, provided there exists an explanation for the initial claim that the explainee understands and the explainer is aware of.
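The termination argument summarized above can be pictured as a loop in which the explainer keeps offering explanations it is aware of until one lets the explainee justify the initial claim. A schematic sketch under assumed, simplified data structures (not the paper's formal model):

```python
# Schematic of the conversational explanation protocol described above
# (hypothetical predicates and data structures; not the paper's formalism).

def explain_until_justified(initial_claim, known_explanations, understands):
    """Offer known explanations in turn; stop once the explainee can justify the claim."""
    for explanation in known_explanations:
        if understands(explanation):
            return explanation  # explainee justifies the claim; the dialogue terminates
    return None  # no shared explanation: the premise of the termination result is not met

# Usage: termination is guaranteed whenever some known explanation is understood.
result = explain_until_justified(
    initial_claim="the claim to be justified",
    known_explanations=["explanation A", "explanation B"],
    understands=lambda e: e == "explanation B",
)
```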
arXiv Detail & Related papers (2023-07-21T09:34:41Z) - Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
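Learning from explanations in prompts, as studied in this paper, amounts to attaching a rationale to each in-context exemplar. A hypothetical example of such a prompt (the arithmetic task and wording are assumptions, not from the paper):

```python
# Hypothetical few-shot prompt where each exemplar carries an explanation,
# as in explanation-based in-context learning (content not from the paper).

prompt = (
    "Q: Anna has 3 apples and buys 2 more. How many apples does she have?\n"
    "Explanation: She starts with 3 and gains 2, so 3 + 2 = 5.\n"
    "A: 5\n\n"
    "Q: A box holds 4 pens and 3 are removed. How many pens remain?\n"
    "Explanation: Removing 3 from 4 leaves 4 - 3 = 1.\n"
    "A: 1\n\n"
    "Q: Tom reads 6 pages on Monday and 4 on Tuesday. How many pages in total?\n"
    "Explanation:"
)
```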
arXiv Detail & Related papers (2022-11-25T04:40:47Z) - Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [159.9645181522436]
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
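The per-layer fusion described here interleaves a language-model stream with a graph-neural-network stream and lets their representations exchange information at each layer. A much-simplified sketch of one such interaction layer (the dimensions and the mixing MLP are assumptions; this is not the authors' implementation):

```python
import torch
import torch.nn as nn

class InteractionLayer(nn.Module):
    """Toy version of one LM-GNN modality-interaction layer (illustrative only)."""

    def __init__(self, lm_dim: int, gnn_dim: int):
        super().__init__()
        # Mix the LM-side interaction state with the GNN-side interaction state.
        self.mixer = nn.Sequential(
            nn.Linear(lm_dim + gnn_dim, lm_dim + gnn_dim),
            nn.GELU(),
        )
        self.to_lm = nn.Linear(lm_dim + gnn_dim, lm_dim)
        self.to_gnn = nn.Linear(lm_dim + gnn_dim, gnn_dim)

    def forward(self, lm_state: torch.Tensor, gnn_state: torch.Tensor):
        # lm_state: (batch, lm_dim), gnn_state: (batch, gnn_dim)
        joint = self.mixer(torch.cat([lm_state, gnn_state], dim=-1))
        return self.to_lm(joint), self.to_gnn(joint)

# Usage: the fused states would be fed back into the next LM and GNN layers.
layer = InteractionLayer(lm_dim=768, gnn_dim=200)
new_lm, new_gnn = layer(torch.randn(2, 768), torch.randn(2, 200))
```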
arXiv Detail & Related papers (2022-01-21T19:00:05Z)