Open-domain Dialogue Generation Grounded with Dynamic Multi-form
Knowledge Fusion
- URL: http://arxiv.org/abs/2204.11239v1
- Date: Sun, 24 Apr 2022 10:32:48 GMT
- Title: Open-domain Dialogue Generation Grounded with Dynamic Multi-form
Knowledge Fusion
- Authors: Feifei Xu, Shanlin Zhou, Xinpeng Wang, Yunpu Ma, Wenkai Zhang, Zhisong
Li
- Abstract summary: This paper presents a new dialogue generation model, Dynamic Multi-form Knowledge Fusion based Open-domain Chatting Machine (DMKCM).
DMKCM applies an indexed text (a virtual Knowledge Base) to locate relevant documents as the 1st hop, then expands the content of the dialogue and its 1st hop using a commonsense knowledge graph to get apposite triples as the 2nd hop.
Experimental results indicate the effectiveness of our method in terms of dialogue coherence and informativeness.
- Score: 9.45662259790057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open-domain multi-turn conversations typically face the challenge of
how to enrich and expand the content of the conversation. Recently, many
approaches based on external knowledge have been proposed to generate
semantically rich and informative conversations. Two types of knowledge have
been studied for knowledge-aware open-domain dialogue generation: structured
triples from knowledge graphs and unstructured texts from documents. To exploit
both the abundant unstructured latent knowledge in documents and the
information expansion capabilities of structured knowledge graphs, this paper
presents a new dialogue generation model, Dynamic Multi-form Knowledge Fusion
based Open-domain Chatting Machine (DMKCM). In particular, DMKCM applies an
indexed text (a virtual Knowledge Base) to locate relevant documents as the 1st
hop and then expands the content of the dialogue and its 1st hop using a
commonsense knowledge graph to get apposite triples as the 2nd hop. To merge
these two forms of knowledge into the dialogue effectively, we design a dynamic
virtual knowledge selector and a controller that help to enrich and expand the
knowledge space. Moreover, DMKCM adopts a novel dynamic knowledge memory module
that effectively uses historical reasoning knowledge to generate better
responses. Experimental results indicate the effectiveness of our method in
terms of dialogue coherence and informativeness.
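As a concrete illustration, below is a minimal sketch of the two-hop retrieval described in the abstract, assuming a toy in-memory index and commonsense graph; the names (`VirtualKB`, `first_hop`, `second_hop`) and the overlap-based scoring are illustrative assumptions, not details from the paper, and the dynamic selector, controller, and memory modules are omitted.

```python
# Illustrative two-hop knowledge retrieval in the spirit of DMKCM:
# 1st hop retrieves documents from an indexed text (virtual KB);
# 2nd hop expands dialogue + 1st-hop content with commonsense triples.
from typing import Dict, List, Set, Tuple

def tokenize(text: str) -> List[str]:
    return text.lower().split()

class VirtualKB:
    """Indexed text collection standing in for the paper's virtual Knowledge Base."""
    def __init__(self, documents: List[str]):
        self.documents = documents

    def first_hop(self, context: str, k: int = 1) -> List[str]:
        # Score each document by simple token overlap with the dialogue context.
        query = set(tokenize(context))
        scored = sorted(
            self.documents,
            key=lambda d: len(query & set(tokenize(d))),
            reverse=True,
        )
        return scored[:k]

def second_hop(
    context: str,
    hop1_docs: List[str],
    kg: Dict[str, List[Tuple[str, str, str]]],
) -> List[Tuple[str, str, str]]:
    # Expand the dialogue and its 1st-hop documents with commonsense triples:
    # every token that appears as a head entity in the graph contributes triples.
    tokens: Set[str] = set(tokenize(context))
    for doc in hop1_docs:
        tokens |= set(tokenize(doc))
    return [triple for tok in tokens for triple in kg.get(tok, [])]

# Toy data: two indexed documents and a tiny commonsense graph.
kb = VirtualKB([
    "coffee is a brewed drink made from roasted beans",
    "tea is an aromatic beverage",
])
kg = {"coffee": [("coffee", "causes", "alertness")],
      "beans": [("beans", "related_to", "plants")]}

context = "do you like coffee in the morning"
hop1 = kb.first_hop(context)
hop2 = second_hop(context, hop1, kg)
print(hop1)  # 1st hop: most relevant document
print(hop2)  # 2nd hop: apposite commonsense triples
```

In DMKCM itself, the 1st-hop documents and 2nd-hop triples would then be fused into generation by the dynamic virtual knowledge selector and controller.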
Related papers
- Generative Multi-Modal Knowledge Retrieval with Large Language Models [75.70313858231833]
We propose an innovative end-to-end generative framework for multi-modal knowledge retrieval.
Our framework takes advantage of the fact that large language models (LLMs) can effectively serve as virtual knowledge bases.
We demonstrate significant improvements ranging from 3.0% to 14.6% across all evaluation metrics when compared to strong baselines.
arXiv Detail & Related papers (2024-01-16T08:44:29Z) - Semi-Structured Chain-of-Thought: Integrating Multiple Sources of Knowledge for Improved Language Model Reasoning [10.839645156881573]
We introduce a novel semi-structured prompting approach that seamlessly integrates the model's parametric memory with unstructured knowledge from text documents and structured knowledge from knowledge graphs.
Experimental results on open-domain multi-hop question answering datasets demonstrate that our prompting method significantly surpasses existing techniques (a prompt-assembly sketch follows this list).
arXiv Detail & Related papers (2023-11-14T19:53:53Z) - Large Language Models as Source Planner for Personalized
Knowledge-grounded Dialogue [72.26474540602517]
SAFARI is a novel framework for planning, understanding, and incorporating knowledge sources under both supervised and unsupervised settings.
We construct a personalized knowledge-grounded dialogue dataset, Knowledge Behind Persona (KBP).
Experimental results on the KBP dataset demonstrate that the SAFARI framework can effectively produce persona-consistent and knowledge-enhanced responses.
arXiv Detail & Related papers (2023-10-13T03:38:38Z) - KPT: Keyword-guided Pre-training for Grounded Dialog Generation [82.68787152707455]
We propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation.
Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords (a keyword-extraction sketch follows this list).
We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages.
arXiv Detail & Related papers (2022-12-04T04:05:01Z) - Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model [63.461030694700014]
We propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD).
The proposed DKMD consists of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.
Experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
arXiv Detail & Related papers (2022-07-16T13:02:54Z) - Commonsense and Named Entity Aware Knowledge Grounded Dialogue
Generation [20.283091595536835]
We present a novel open-domain dialogue generation model which effectively utilizes the large-scale commonsense and named entity based knowledge.
Our proposed model utilizes a multi-hop attention layer to preserve the most accurate and critical parts of the dialogue history and the associated knowledge.
Empirical results on two benchmark datasets demonstrate that our model significantly outperforms the state-of-the-art methods in terms of both automatic evaluation metrics and human judgment.
arXiv Detail & Related papers (2022-05-27T12:11:40Z) - DialoKG: Knowledge-Structure Aware Task-Oriented Dialogue Generation [9.186215038100904]
We propose DialoKG, a novel task-oriented dialogue system that effectively incorporates knowledge into a language model.
Our proposed system views relational knowledge as a knowledge graph and introduces a structure-aware knowledge embedding technique.
An empirical evaluation demonstrates the effectiveness of DialoKG over state-of-the-art methods on several standard benchmark datasets.
arXiv Detail & Related papers (2022-04-19T22:26:18Z) - HyKnow: End-to-End Task-Oriented Dialog Modeling with Hybrid Knowledge
Management [58.82499963373537]
We propose a TOD system with hybrid knowledge management, HyKnow.
It extends the belief state to manage both structured and unstructured knowledge.
It is the first end-to-end model that jointly optimizes dialog modeling grounded on these two kinds of knowledge.
arXiv Detail & Related papers (2021-05-13T01:58:39Z) - Contextualized Knowledge-aware Attentive Neural Network: Enhancing
Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z) - Knowledge-graph based Proactive Dialogue Generation with Improved
Meta-Learning [0.0]
We propose a knowledge graph based proactive dialogue generation model (KgDg) with three components.
We formulate knowledge triplet embedding and selection as a sentence embedding problem to better capture semantic information.
Our improved MAML algorithm is capable of learning general features from a limited number of knowledge graphs.
arXiv Detail & Related papers (2020-04-19T08:41:12Z)