Bridging Information Gaps in Dialogues With Grounded Exchanges Using Knowledge Graphs
- URL: http://arxiv.org/abs/2408.01088v2
- Date: Sun, 11 Aug 2024 17:51:21 GMT
- Title: Bridging Information Gaps in Dialogues With Grounded Exchanges Using Knowledge Graphs
- Authors: Phillip Schneider, Nektarios Machner, Kristiina Jokinen, Florian Matthes
- Abstract summary: We study the potential of large language models for conversational grounding.
Our approach involves annotating human conversations across five knowledge domains to create a new dialogue corpus called BridgeKG.
Our findings offer insights into how these models use in-context learning for conversational grounding tasks, as well as into their common prediction errors.
- Score: 4.449835214520727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge models are fundamental to dialogue systems for enabling conversational interactions, which require handling domain-specific knowledge. Ensuring effective communication in information-providing conversations entails aligning user understanding with the knowledge available to the system. However, dialogue systems often face challenges arising from semantic inconsistencies in how information is expressed in natural language compared to how it is represented within the system's internal knowledge. To address this problem, we study the potential of large language models for conversational grounding, a mechanism to bridge information gaps by establishing shared knowledge between dialogue participants. Our approach involves annotating human conversations across five knowledge domains to create a new dialogue corpus called BridgeKG. Through a series of experiments on this dataset, we empirically evaluate the capabilities of large language models in classifying grounding acts and identifying grounded information items within a knowledge graph structure. Our findings offer insights into how these models use in-context learning for conversational grounding tasks and common prediction errors, which we illustrate with examples from challenging dialogues. We discuss how the models handle knowledge graphs as a semantic layer between unstructured dialogue utterances and structured information items.
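As a rough illustration of this setup, the sketch below prompts an LLM, via in-context learning, to classify a grounding act and point to the grounded knowledge-graph triples. The `llm` callable, the label set, and the JSON output format are illustrative assumptions, not the paper's actual prompts or the BridgeKG annotation scheme.

```python
import json

# Illustrative grounding-act labels; the actual BridgeKG annotation
# scheme is defined in the paper and may differ.
GROUNDING_ACTS = ["explicit_grounding", "implicit_grounding", "clarification", "no_grounding"]

FEW_SHOT_EXAMPLE = '''Utterance: "So the museum opens at 9am, right?"
Act: explicit_grounding
Grounded items: [{"subject": "museum", "predicate": "opening_time", "object": "9am"}]'''

def classify_grounding(utterance, kg_triples, llm):
    """Use in-context learning to classify the grounding act of an utterance
    and to identify which knowledge-graph triples it grounds."""
    prompt = (
        "Classify the grounding act of the utterance and list grounded items.\n"
        f"Possible acts: {', '.join(GROUNDING_ACTS)}\n\n"
        f"Example:\n{FEW_SHOT_EXAMPLE}\n\n"
        f"Knowledge graph triples: {json.dumps(kg_triples)}\n"
        f'Utterance: "{utterance}"\n'
        'Answer as JSON: {"act": "...", "grounded_items": [...]}'
    )
    return json.loads(llm(prompt))  # llm: any text-completion callable
```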
Related papers
- Towards Harnessing Large Language Models for Comprehension of Conversational Grounding [1.8434042562191812]
This study investigates the capabilities of large language models in classifying dialogue turns related to explicit or implicit grounding and predicting grounded knowledge elements.
Our experimental results reveal challenges encountered by large language models in the two tasks.
This line of work aims to develop more effective dialogue systems that are better equipped to handle the intricacies of grounded knowledge in conversations.
arXiv Detail & Related papers (2024-06-03T19:34:39Z)
- PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue Model [79.64376762489164]
PK-Chat is a Pointer network guided generative dialogue model, incorporating a unified pretrained language model and a pointer network over knowledge graphs.
Words generated by PK-Chat in the dialogue are derived both from predictions over a word list and from direct predictions over the external knowledge graph.
Based on PK-Chat, a dialogue system is built for academic scenarios in the geosciences domain.
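The following is a minimal sketch of the general pointer-mixture idea (blending a word-list distribution with a pointer distribution over KG entities). It is not the authors' exact architecture; in particular, the gating function here is a stand-in for a learned gate.

```python
import torch
import torch.nn.functional as F

def pointer_mixture(decoder_state, vocab_logits, kg_entity_embs, kg_vocab_ids):
    """Blend a standard vocabulary distribution with a pointer distribution
    over knowledge-graph entities, gated by the decoder state.

    decoder_state:  (hidden,)            current decoder hidden state
    vocab_logits:   (vocab_size,)        logits over the word list
    kg_entity_embs: (n_entities, hidden) embeddings of KG entities
    kg_vocab_ids:   (n_entities,)        LongTensor: vocab index of each entity
    """
    p_vocab = F.softmax(vocab_logits, dim=-1)
    # Pointer scores: similarity between decoder state and entity embeddings.
    p_kg = F.softmax(kg_entity_embs @ decoder_state, dim=-1)
    # Gate deciding whether to generate from the word list or copy from the KG.
    gate = torch.sigmoid(decoder_state.sum())  # stand-in for a learned gate
    mixed = gate * p_vocab
    mixed = mixed.index_add(0, kg_vocab_ids, (1 - gate) * p_kg)
    return mixed  # final next-word distribution
```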
arXiv Detail & Related papers (2023-04-02T18:23:13Z)
- Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue [54.98184262897166]
We investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses.
We propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input.
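One simple way to realize this idea, sketched below, is to restart the position ids of every knowledge snippet from zero so the model receives no signal about the order of the knowledge set; the paper's exact modification may differ.

```python
import torch

def order_invariant_position_ids(knowledge_lens, dialogue_len):
    """Build position ids where every knowledge snippet restarts from 0,
    removing any order signal across snippets. Dialogue tokens then
    continue with ordinary sequential positions."""
    ids = []
    for n in knowledge_lens:              # each snippet: positions 0..n-1
        ids.extend(range(n))
    offset = max(knowledge_lens) if knowledge_lens else 0
    ids.extend(range(offset, offset + dialogue_len))  # dialogue continues
    return torch.tensor(ids)

# Example: three knowledge snippets of lengths 4, 6, 5 and a 10-token dialogue.
pos = order_invariant_position_ids([4, 6, 5], 10)
```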
arXiv Detail & Related papers (2023-02-12T10:13:00Z)
- Knowledge-grounded Dialog State Tracking [12.585986197627477]
We propose to perform dialog state tracking grounded on knowledge encoded externally.
We query relevant knowledge of various forms based on the dialog context.
We demonstrate superior performance of our proposed method over strong baselines.
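A minimal sketch of the retrieve-then-track pattern described here, assuming TF-IDF retrieval and a hypothetical `state_model` tracker; the paper's retrieval and tracking components are more elaborate.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_knowledge(dialog_context, knowledge_snippets, k=3):
    """Rank external knowledge snippets by similarity to the dialog context
    and return the top-k to ground the state tracker on."""
    vec = TfidfVectorizer().fit(knowledge_snippets + [dialog_context])
    kn = vec.transform(knowledge_snippets)
    ctx = vec.transform([dialog_context])
    scores = cosine_similarity(ctx, kn)[0]
    top = scores.argsort()[::-1][:k]
    return [knowledge_snippets[i] for i in top]

def track_state(dialog_context, knowledge_snippets, state_model):
    """Condition a (placeholder) state model on context plus retrieved knowledge."""
    grounding = retrieve_knowledge(dialog_context, knowledge_snippets)
    return state_model(dialog_context, grounding)  # hypothetical tracker
```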
arXiv Detail & Related papers (2022-10-13T01:34:08Z)
- HybriDialogue: An Information-Seeking Dialogue Dataset Grounded on Tabular and Textual Data [87.67278915655712]
We present a new dialogue dataset, HybriDialogue, which consists of crowdsourced natural conversations grounded on both Wikipedia text and tables.
The conversations are created through the decomposition of complex multihop questions into simple, realistic multiturn dialogue interactions.
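A hypothetical toy example of such a decomposition; the question, answers, and grounding sources are invented for illustration and do not appear in the dataset.

```python
# One complex multihop question decomposed into simple multiturn exchanges,
# in the spirit of HybriDialogue (entirely invented example).
multihop_question = "Which country hosted the Olympics in the year the author of Book X was born?"

decomposed_dialogue = [
    {"user": "When was the author of Book X born?",          # grounded on text
     "system": "The author was born in 1960."},
    {"user": "Which country hosted the Olympics in 1960?",   # grounded on a table
     "system": "Italy hosted the 1960 Summer Olympics in Rome."},
]
```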
arXiv Detail & Related papers (2022-04-28T00:52:16Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
We review previous methods from the perspective of dialogue modeling.
We discuss three typical patterns of dialogue modeling that are widely used in dialogue comprehension tasks.
arXiv Detail & Related papers (2021-03-04T15:50:17Z)
- GraphDialog: Integrating Graph Knowledge into End-to-End Task-Oriented Dialogue Systems [9.560436630775762]
End-to-end task-oriented dialogue systems aim to generate system responses directly from plain text inputs. This poses two challenges: one is how to effectively incorporate external knowledge bases (KBs) into the learning framework; the other is how to accurately capture the semantics of dialogue history.
We address these two challenges by exploiting the graph structural information in the knowledge base and in the dependency parsing tree of the dialogue.
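A minimal sketch of one ingredient of this idea: building a graph from a dependency parse that a graph encoder could consume alongside the KB graph. The head indices below are illustrative, not real parser output.

```python
import numpy as np

def dependency_adjacency(tokens, head_indices):
    """Build a symmetric adjacency matrix from a dependency parse.
    head_indices[i] is the index of token i's syntactic head
    (i itself for the root)."""
    n = len(tokens)
    adj = np.eye(n)  # self-loops
    for i, h in enumerate(head_indices):
        adj[i, h] = adj[h, i] = 1.0
    return adj

# Hypothetical parse of "book a table for two":
tokens = ["book", "a", "table", "for", "two"]
heads  = [0, 2, 0, 2, 3]   # illustrative head indices, not real parser output
adj = dependency_adjacency(tokens, heads)
```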
arXiv Detail & Related papers (2020-10-04T00:04:40Z)
- Structured Attention for Unsupervised Dialogue Structure Induction [110.12561786644122]
We propose to incorporate structured attention layers into a Variational Recurrent Neural Network (VRNN) model with discrete latent states to learn dialogue structure in an unsupervised fashion.
Compared to a vanilla VRNN, structured attention enables a model to focus on different parts of the source sentence embeddings while enforcing a structural inductive bias.
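The sketch below shows plain attention over source sentence embeddings conditioned on a latent-state embedding; it deliberately omits the structural inductive bias that distinguishes the paper's structured attention, so it is only a stand-in for the general mechanism.

```python
import torch.nn.functional as F

def latent_conditioned_attention(sent_embs, state_emb):
    """Attend over source sentence embeddings conditioned on the embedding
    of the current discrete latent state; the paper's structured attention
    additionally enforces a structural bias on the weights.

    sent_embs: (n_sents, hidden), state_emb: (hidden,)"""
    scores = sent_embs @ state_emb            # relevance of each sentence
    weights = F.softmax(scores, dim=-1)       # attention distribution
    return weights @ sent_embs                # context vector for the VRNN step
```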
arXiv Detail & Related papers (2020-09-17T23:07:03Z)
- Zero-Resource Knowledge-Grounded Dialogue Generation [29.357221039484568]
We propose representing both the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables.
We show that our model can achieve comparable performance with state-of-the-art methods that rely on knowledge-grounded dialogues for training.
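A sketch of the general latent-variable training signal, assuming a single Gaussian latent in place of the paper's separate knowledge and expression variables, with placeholder prior, posterior, and decoder networks.

```python
import torch

def elbo_loss(context_emb, response_emb, prior_net, posterior_net, decoder):
    """Negative ELBO for a single Gaussian latent z standing in for the
    bridging knowledge; the paper factors this into knowledge and
    expression variables. All networks are placeholders."""
    mu_p, logvar_p = prior_net(context_emb)                    # p(z | context)
    mu_q, logvar_q = posterior_net(context_emb, response_emb)  # q(z | context, response)
    z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp() # reparameterization
    recon = decoder(context_emb, z, response_emb)              # -log p(response | context, z)
    kl = 0.5 * ((logvar_p - logvar_q)
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                - 1).sum()                                     # KL(q || p)
    return recon + kl                                          # negative ELBO
```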
arXiv Detail & Related papers (2020-08-29T05:48:32Z)
- Knowledge Injection into Dialogue Generation via Language Models [85.65843021510521]
InjK is a two-stage approach to inject knowledge into a dialogue generation model.
First, we train a large-scale language model and query it for textual knowledge.
Second, we frame a dialogue generation model to sequentially generate textual knowledge and a corresponding response.
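A minimal sketch of the two-stage flow at inference time, assuming a generic `llm` text-completion callable and invented prompts; the paper trains dedicated models rather than prompting an off-the-shelf one.

```python
def injk_style_generate(context, llm):
    """Two-stage sketch in the spirit of InjK: first elicit textual knowledge
    from a language model, then generate a response conditioned on both the
    context and that knowledge. llm: any text-completion callable."""
    knowledge = llm(f"Relevant background facts for this dialogue:\n{context}\nFacts:")
    response = llm(f"Dialogue context:\n{context}\nKnowledge:\n{knowledge}\nResponse:")
    return knowledge, response
```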
arXiv Detail & Related papers (2020-04-30T07:31:24Z)