Knowledge-Grounded Dialogue with Reward-Driven Knowledge Selection
- URL: http://arxiv.org/abs/2108.13686v1
- Date: Tue, 31 Aug 2021 08:53:08 GMT
- Title: Knowledge-Grounded Dialogue with Reward-Driven Knowledge Selection
- Authors: Shilei Liu, Xiaofeng Zhao, Bochao Li, Feiliang Ren
- Abstract summary: Knoformer is a dialogue response generation model based on reinforcement learning.
It automatically selects one or more relevant knowledge items from the knowledge pool and needs no knowledge labels during training.
- Score: 1.1633929083694388
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Knowledge-grounded dialogue is a task of generating a fluent and informative
response based on both conversation context and a collection of external
knowledge, in which knowledge selection plays an important role and attracts
growing research interest. However, most existing models either select only
one piece of knowledge or use all available knowledge for response
generation. The former may lose valuable information in the discarded
knowledge, while the latter may introduce substantial noise. At the same
time, many approaches need to train the
knowledge selector with knowledge labels that indicate ground-truth knowledge,
but these labels are difficult to obtain and require a large number of manual
annotations. Motivated by these issues, we propose Knoformer, a dialogue
response generation model based on reinforcement learning, which can
automatically select one or more relevant knowledge items from the knowledge pool and
does not need knowledge labels during training. Knoformer is evaluated on two
knowledge-guided conversation datasets, and achieves state-of-the-art
performance.
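The reward-driven selection described in the abstract can be illustrated with a minimal REINFORCE-style policy over knowledge candidates. This is a toy sketch, not Knoformer's actual architecture: the logit parameterization, reward function, and hyperparameters below are all illustrative assumptions. The key property it demonstrates is that a selector can learn which knowledge item is useful purely from a scalar reward, with no ground-truth knowledge labels.

```python
import math
import random

def softmax(logits):
    """Convert raw selection scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample(probs, rng):
    """Draw one index from a categorical distribution."""
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def reinforce_step(logits, reward_fn, rng, lr=0.1):
    """One REINFORCE update: sample a knowledge index from the softmax
    policy, observe a scalar reward, and nudge the logits using the
    score-function gradient: d/d logit_i log pi(a) = 1[i == a] - p_i."""
    probs = softmax(logits)
    a = sample(probs, rng)
    r = reward_fn(a)
    for i in range(len(logits)):
        logits[i] += lr * r * ((1.0 if i == a else 0.0) - probs[i])
    return a, r

rng = random.Random(0)
logits = [0.0, 0.0, 0.0]          # one score per knowledge candidate
# Toy reward standing in for response quality: candidate 0 is "useful".
reward = lambda a: 1.0 if a == 0 else -0.2
for _ in range(500):
    reinforce_step(logits, reward, rng)
print(softmax(logits))            # probability mass concentrates on index 0
```

In the paper's setting the reward would instead come from the quality of the generated response, and the selector would score knowledge conditioned on the dialogue context rather than hold fixed logits.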
Related papers
- Graph vs. Sequence: An Empirical Study on Knowledge Forms for
Knowledge-Grounded Dialogue [45.36967792307907]
We conduct a thorough empirical study of the task to answer three essential questions.
The questions involve the choice of appropriate knowledge form, the degree of mutual effects between knowledge and the model selection, and the few-shot performance of knowledge.
arXiv Detail & Related papers (2023-12-13T03:16:33Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models
as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model [63.461030694700014]
We propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD).
The proposed DKMD consists of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.
Experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
arXiv Detail & Related papers (2022-07-16T13:02:54Z) - DialoKG: Knowledge-Structure Aware Task-Oriented Dialogue Generation [9.186215038100904]
We propose DialoKG, a novel task-oriented dialogue system that effectively incorporates knowledge into a language model.
Our proposed system views relational knowledge as a knowledge graph and introduces a structure-aware knowledge embedding technique.
An empirical evaluation demonstrates the effectiveness of DialoKG over state-of-the-art methods on several standard benchmark datasets.
arXiv Detail & Related papers (2022-04-19T22:26:18Z) - Knowledge-Grounded Dialogue Generation with a Unified Knowledge
Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to limited topics covered in the training data.
We present PLUG, a language model that homogenizes different knowledge sources into a unified knowledge representation.
It can achieve comparable performance with state-of-the-art methods under a fully-supervised setting.
arXiv Detail & Related papers (2021-12-15T07:11:02Z) - Contextualized Knowledge-aware Attentive Neural Network: Enhancing
Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z) - Difference-aware Knowledge Selection for Knowledge-grounded Conversation
Generation [101.48602006200409]
We propose a difference-aware knowledge selection method for multi-turn knowledge-grounded dialogs.
It first computes the difference between the candidate knowledge sentences provided at the current turn and those chosen in the previous turns.
Then, the differential information is fused with or disentangled from the contextual information to facilitate final knowledge selection.
arXiv Detail & Related papers (2020-09-20T07:47:26Z) - Zero-Resource Knowledge-Grounded Dialogue Generation [29.357221039484568]
We propose representing both the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables.
We show that our model can achieve comparable performance with state-of-the-art methods that rely on knowledge-grounded dialogues for training.
arXiv Detail & Related papers (2020-08-29T05:48:32Z) - Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to this matter.
The model named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge.
arXiv Detail & Related papers (2020-02-18T11:59:59Z)
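Several of the selection strategies surveyed above, most directly the difference-aware method, reduce to scoring knowledge candidates against knowledge already used in earlier turns. A minimal sketch of that idea follows, using Jaccard token overlap as a stand-in for the paper's learned similarity; the function names, data, and the choice of similarity measure are all illustrative assumptions, not the paper's implementation.

```python
def tokens(sentence):
    """Crude tokenization: lowercase, whitespace-split, deduplicated."""
    return set(sentence.lower().split())

def difference_score(candidate, previously_chosen):
    """Score a candidate knowledge sentence by how *different* it is from
    knowledge chosen in previous turns (1.0 = fully novel, 0.0 = identical),
    using maximum Jaccard overlap as a toy similarity measure."""
    cand = tokens(candidate)
    if not previously_chosen:
        return 1.0
    overlaps = [
        len(cand & tokens(p)) / len(cand | tokens(p))
        for p in previously_chosen
    ]
    return 1.0 - max(overlaps)

history = ["the eiffel tower is in paris"]
candidates = [
    "the eiffel tower is in paris",     # already used in a previous turn
    "the tower was completed in 1889",  # introduces new information
]
scores = [difference_score(c, history) for c in candidates]
```

A difference-aware selector would prefer the second candidate here, since repeating already-used knowledge adds nothing to the next response; the cited paper additionally fuses or disentangles this differential signal with the dialogue context before the final selection.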
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.