CorefDiffs: Co-referential and Differential Knowledge Flow in Document
Grounded Conversations
- URL: http://arxiv.org/abs/2210.02223v1
- Date: Wed, 5 Oct 2022 13:00:17 GMT
- Title: CorefDiffs: Co-referential and Differential Knowledge Flow in Document
Grounded Conversations
- Authors: Lin Xu, Qixian Zhou, Jinlan Fu, Min-Yen Kan, See-Kiong Ng
- Abstract summary: For document-grounded dialog systems, the inter- and intra-document knowledge relations can be used to model such conversational flows.
We develop a novel Multi-Document Co-Referential Graph (Coref-MDG) to capture the inter-document relationships.
CorefDiffs significantly outperforms the state-of-the-art by 9.5%, 7.4%, and 8.2% on three public benchmarks.
- Score: 31.676679227767917
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge-grounded dialog systems need to incorporate smooth transitions
among knowledge selected for generating responses, to ensure that dialog flows
naturally. For document-grounded dialog systems, the inter- and intra-document
knowledge relations can be used to model such conversational flows. We develop
a novel Multi-Document Co-Referential Graph (Coref-MDG) to effectively capture
the inter-document relationships based on commonsense and similarity and the
intra-document co-referential structures of knowledge segments within the
grounding documents. We propose CorefDiffs, a Co-referential and Differential
flow management method, to linearize the static Coref-MDG into conversational
sequence logic. CorefDiffs performs knowledge selection by accounting for
contextual graph structures and the knowledge difference sequences. CorefDiffs
significantly outperforms the state-of-the-art by 9.5%, 7.4%, and 8.2% on
three public benchmarks. This demonstrates that effective modeling of
co-reference and knowledge differences in dialog flows is critical for
transitions in document-grounded conversations.
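The graph construction described in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: here intra-document "co-referential" edges simply link segments that mention the same entity, and inter-document edges use Jaccard lexical overlap as a stand-in for the paper's commonsense and similarity relations. All names (`build_graph`, the 0.3 threshold) are illustrative assumptions.

```python
import re
from itertools import combinations

def jaccard(a, b):
    """Lexical overlap between the word sets of two text segments."""
    ta = set(re.findall(r"\w+", a.lower()))
    tb = set(re.findall(r"\w+", b.lower()))
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def build_graph(docs, entities, sim_threshold=0.3):
    """Build a toy multi-document knowledge graph.

    docs:     {doc_id: [segment_text, ...]}
    entities: {entity_name: set of (doc_id, segment_index) mentions}
    Returns a list of ((doc, idx), (doc, idx), label) edges, where
    label is 'coref' (intra-document) or 'sim' (inter-document).
    """
    edges = []
    # Intra-document edges: two segments of the same document that
    # mention the same entity are linked as "co-referential".
    for mentions in entities.values():
        for (d1, i1), (d2, i2) in combinations(sorted(mentions), 2):
            if d1 == d2:
                edges.append(((d1, i1), (d2, i2), "coref"))
    # Inter-document edges: segments from different documents with
    # high enough lexical similarity are linked.
    segs = [(d, i, s) for d, ss in docs.items() for i, s in enumerate(ss)]
    for (d1, i1, s1), (d2, i2, s2) in combinations(segs, 2):
        if d1 != d2 and jaccard(s1, s2) >= sim_threshold:
            edges.append(((d1, i1), (d2, i2), "sim"))
    return edges
```

A flow-management method like CorefDiffs would then walk such a graph turn by turn, preferring neighbors of the previously selected knowledge segment; the sketch only covers the static graph side.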
Related papers
- Raw Text is All you Need: Knowledge-intensive Multi-turn Instruction Tuning for Large Language Model [25.459787361454353]
We present a novel framework named R2S that leverages the CoD-Chain of Dialogue logic to guide large language models (LLMs) in generating knowledge-intensive multi-turn dialogues for instruction tuning.
By integrating raw documents from both open-source datasets and domain-specific web-crawled documents into a benchmark K-BENCH, we cover diverse areas such as Wikipedia (English), Science (Chinese), and Artifacts (Chinese).
arXiv Detail & Related papers (2024-07-03T12:04:10Z) - Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general purpose knowledge graphs (KGs) with millions of entities, and thousands of relation-types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z) - FCC: Fusing Conversation History and Candidate Provenance for Contextual
Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z) - Manual-Guided Dialogue for Flexible Conversational Agents [84.46598430403886]
How to build and use dialogue data efficiently, and how to deploy models in different domains at scale can be critical issues in building a task-oriented dialogue system.
We propose a novel manual-guided dialogue scheme, where the agent learns the tasks from both dialogue and manuals.
Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology, and makes them more flexible to adapt to various domains.
arXiv Detail & Related papers (2022-08-16T08:21:12Z) - Enhanced Knowledge Selection for Grounded Dialogues via Document
Semantic Graphs [123.50636090341236]
We propose to automatically convert background knowledge documents into document semantic graphs.
Our document semantic graphs preserve sentence-level information through the use of sentence nodes and provide concept connections between sentences.
Our experiments show that our semantic graph-based knowledge selection improves over sentence selection baselines for both the knowledge selection task and the end-to-end response generation task on HollE.
arXiv Detail & Related papers (2022-06-15T04:51:32Z) - Commonsense and Named Entity Aware Knowledge Grounded Dialogue
Generation [20.283091595536835]
We present a novel open-domain dialogue generation model which effectively utilizes the large-scale commonsense and named entity based knowledge.
Our proposed model utilizes a multi-hop attention layer to preserve the most accurate and critical parts of the dialogue history and the associated knowledge.
Empirical results on two benchmark datasets demonstrate that our model significantly outperforms the state-of-the-art methods in terms of both automatic evaluation metrics and human judgment.
arXiv Detail & Related papers (2022-05-27T12:11:40Z) - Open-domain Dialogue Generation Grounded with Dynamic Multi-form
Knowledge Fusion [9.45662259790057]
This paper presents a new dialogue generation model, the Dynamic Multi-form Knowledge Fusion based Open-domain Chatting Machine (DMKCM).
DMKCM applies an indexed text (a virtual knowledge base) to locate relevant documents as the first hop, and then expands the content of the dialogue and its first-hop results using a commonsense knowledge graph to obtain apposite triples as the second hop.
Experimental results indicate the effectiveness of our method in terms of dialogue coherence and informativeness.
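The two-hop retrieval described above can be sketched in a few lines. This is a hedged toy version, not DMKCM itself: the first hop ranks indexed documents by token overlap with the query, and the second hop pulls in knowledge-graph triples whose head entity appears in the query or the retrieved documents. The function name `two_hop_expand` and the overlap scoring are illustrative assumptions.

```python
def two_hop_expand(query, index, kg, top_k=2):
    """Toy two-hop knowledge expansion.

    index: list of document strings (the "virtual knowledge base").
    kg:    list of (head, relation, tail) commonsense triples.
    Returns (retrieved documents, matching triples).
    """
    q = set(query.lower().split())
    # Hop 1: rank indexed documents by token overlap with the query.
    ranked = sorted(index, key=lambda doc: -len(q & set(doc.lower().split())))
    docs = ranked[:top_k]
    # Hop 2: keep triples whose head entity occurs in the query or
    # in any first-hop document.
    context = q | {w for d in docs for w in d.lower().split()}
    triples = [t for t in kg if t[0] in context]
    return docs, triples
```

A real system would replace the overlap score with a learned retriever and ground response generation on both hops; the sketch only shows the retrieval structure.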
arXiv Detail & Related papers (2022-04-24T10:32:48Z) - DIALKI: Knowledge Identification in Conversational Systems through
Dialogue-Document Contextualization [41.21012318918167]
We introduce a knowledge identification model that leverages the document structure to provide dialogue-contextualized passage encodings.
We demonstrate the effectiveness of our model on two document-grounded conversational datasets.
arXiv Detail & Related papers (2021-09-10T05:40:37Z) - A Compare Aggregate Transformer for Understanding Document-grounded
Dialogue [27.04964963480175]
We propose a Compare Aggregate Transformer (CAT) to jointly denoise the dialogue context and aggregate the document information for response generation.
Experimental results on the CMUDoG dataset show that the proposed CAT model outperforms the state-of-the-art approach and strong baselines.
arXiv Detail & Related papers (2020-10-01T03:44:44Z) - Detecting and Classifying Malevolent Dialogue Responses: Taxonomy, Data
and Methodology [68.8836704199096]
Corpus-based conversational interfaces are able to generate more diverse and natural responses than template-based or retrieval-based agents.
With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses.
Previous studies on the topic of recognizing and classifying inappropriate content are mostly focused on a certain category of malevolence.
arXiv Detail & Related papers (2020-08-21T22:43:27Z) - DCR-Net: A Deep Co-Interactive Relation Network for Joint Dialog Act
Recognition and Sentiment Classification [77.59549450705384]
In dialog systems, dialog act recognition and sentiment classification are two correlated tasks.
Most of the existing systems either treat them as separate tasks or just jointly model the two tasks.
We propose a Deep Co-Interactive Relation Network (DCR-Net) to explicitly consider the cross-impact and model the interaction between the two tasks.
arXiv Detail & Related papers (2020-08-16T14:13:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.