Knowledge-Grounded Dialogue Generation with a Unified Knowledge
Representation
- URL: http://arxiv.org/abs/2112.07924v1
- Date: Wed, 15 Dec 2021 07:11:02 GMT
- Title: Knowledge-Grounded Dialogue Generation with a Unified Knowledge
Representation
- Authors: Yu Li, Baolin Peng, Yelong Shen, Yi Mao, Lars Liden, Zhou Yu, Jianfeng
Gao
- Abstract summary: Existing systems perform poorly on unseen topics due to limited topics covered in the training data.
We present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation.
It can achieve comparable performance with state-of-the-art methods under a fully-supervised setting.
- Score: 78.85622982191522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge-grounded dialogue systems are challenging to build due to the lack
of training data and heterogeneous knowledge sources. Existing systems perform
poorly on unseen topics due to limited topics covered in the training data. In
addition, heterogeneous knowledge sources make it challenging for systems to
generalize to other tasks because knowledge sources in different knowledge
representations require different knowledge encoders. To address these
challenges, we present PLUG, a language model that homogenizes different
knowledge sources to a unified knowledge representation for knowledge-grounded
dialogue generation tasks. PLUG is pre-trained on a dialogue generation task
conditioned on a unified essential knowledge representation. It can generalize
to different downstream knowledge-grounded dialogue generation tasks with a few
training examples. The empirical evaluation on two benchmarks shows that our
model generalizes well across different knowledge-grounded tasks. It can
achieve comparable performance with state-of-the-art methods under a
fully-supervised setting and significantly outperforms other methods in
zero-shot and few-shot settings.
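The core idea above, homogenizing heterogeneous knowledge sources (graph triples, tables, documents) into a single text-based representation that conditions generation, can be sketched as follows. This is an illustrative assumption of one possible serialization scheme; the function names and the exact "essential knowledge" format are not taken from the paper.

```python
# Hypothetical sketch: flattening heterogeneous knowledge sources into one
# unified text representation that a language model can condition on,
# in the spirit of PLUG. The serialization format is an assumption.

def serialize_triple(subj, rel, obj):
    """Flatten a knowledge-graph triple into plain text."""
    return f"{subj} {rel.replace('_', ' ')} {obj}"

def serialize_table_row(header, row):
    """Flatten one table row into attribute-value text."""
    return "; ".join(f"{h}: {v}" for h, v in zip(header, row))

def unify_knowledge(snippets):
    """Join already-textual knowledge pieces into one grounding string."""
    return " | ".join(snippets)

def build_model_input(dialogue_history, knowledge_text):
    """Condition generation on unified knowledge plus dialogue context."""
    return f"knowledge: {knowledge_text} context: {' '.join(dialogue_history)}"

kg = serialize_triple("PLUG", "is_a", "language model")
tbl = serialize_table_row(["task", "setting"], ["dialogue generation", "few-shot"])
doc = "PLUG generalizes across knowledge-grounded tasks."

knowledge = unify_knowledge([kg, tbl, doc])
prompt = build_model_input(["Tell me about PLUG."], knowledge)
print(prompt)
```

Because every source is reduced to the same textual form, a single encoder suffices and no source-specific knowledge encoders are needed, which is what enables transfer to unseen downstream tasks.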
Related papers
- Large Language Models as Source Planner for Personalized
Knowledge-grounded Dialogue [72.26474540602517]
SAFARI is a novel framework for planning, understanding, and incorporating knowledge sources under both supervised and unsupervised settings.
We construct a personalized knowledge-grounded dialogue dataset, Knowledge Behind Persona (KBP).
Experimental results on the KBP dataset demonstrate that the SAFARI framework can effectively produce persona-consistent and knowledge-enhanced responses.
arXiv Detail & Related papers (2023-10-13T03:38:38Z)
- Position Matters! Empirical Study of Order Effect in Knowledge-grounded
Dialogue [54.98184262897166]
We investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses.
We propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input.
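One simple way to make position embeddings order-invariant across knowledge snippets, sketched below, is to restart position ids at the beginning of each snippet so that every snippet occupies the same positional range. This is an illustrative assumption, not necessarily the exact modification proposed in the paper.

```python
# Illustrative sketch: reduce order sensitivity by restarting position ids
# at the start of each knowledge segment, so each segment's tokens occupy
# the same positional range regardless of where it appears in the input.

def order_invariant_position_ids(segment_lengths):
    """Return position ids that restart at 0 for each knowledge segment."""
    position_ids = []
    for length in segment_lengths:
        position_ids.extend(range(length))
    return position_ids

# Three knowledge snippets of 4, 2, and 3 tokens:
print(order_invariant_position_ids([4, 2, 3]))
```

With shared positional ranges, swapping the order of the knowledge snippets leaves each snippet's position embeddings unchanged, which removes one source of order bias in autoregressive models.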
arXiv Detail & Related papers (2023-02-12T10:13:00Z)
- Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model [63.461030694700014]
We propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD).
The proposed DKMD consists of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.
Experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
arXiv Detail & Related papers (2022-07-16T13:02:54Z)
- DialoKG: Knowledge-Structure Aware Task-Oriented Dialogue Generation [9.186215038100904]
We propose DialoKG, a novel task-oriented dialogue system that effectively incorporates knowledge into a language model.
Our proposed system views relational knowledge as a knowledge graph and introduces a structure-aware knowledge embedding technique.
An empirical evaluation demonstrates the effectiveness of DialoKG over state-of-the-art methods on several standard benchmark datasets.
arXiv Detail & Related papers (2022-04-19T22:26:18Z)
- TegTok: Augmenting Text Generation via Task-specific and Open-world
Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively.
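Dense retrieval over two knowledge sources, as described above, amounts to scoring each entry's embedding against a query embedding and keeping the top results from each source. The toy scoring function and two-vector embeddings below are assumptions for illustration; they are not TegTok's retriever.

```python
# Hedged sketch of dense retrieval over two knowledge sources, loosely in
# the spirit of TegTok; the embeddings and injection points are assumptions.

def dot(u, v):
    """Inner-product relevance score between query and entry embeddings."""
    return sum(a * b for a, b in zip(u, v))

def top_k(query_vec, entries, k=1):
    """entries: list of (text, embedding). Return the k highest-scoring texts."""
    scored = sorted(entries, key=lambda e: dot(query_vec, e[1]), reverse=True)
    return [text for text, _ in scored[:k]]

# Two separate pools: task-specific knowledge and open-world knowledge.
task_knowledge = [("task fact A", [1.0, 0.0]), ("task fact B", [0.0, 1.0])]
open_knowledge = [("world fact C", [0.9, 0.1]), ("world fact D", [0.1, 0.9])]
query = [1.0, 0.2]

# Select the best entry from each source; the selected texts would then be
# injected into the input encoding and output decoding stages respectively.
selected = top_k(query, task_knowledge) + top_k(query, open_knowledge)
print(selected)
```

Keeping the two pools separate lets each retrieved entry be routed to a different stage of the model, which is the "unified framework" aspect of the approach.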
arXiv Detail & Related papers (2022-03-16T10:37:59Z)
- Knowledge-Grounded Dialogue with Reward-Driven Knowledge Selection [1.1633929083694388]
Knoformer is a dialogue response generation model based on reinforcement learning.
It can automatically select one or more relevant knowledge entries from the knowledge pool and does not need knowledge labels during training.
arXiv Detail & Related papers (2021-08-31T08:53:08Z)
- Understanding Few-Shot Commonsense Knowledge Models [39.31365020474205]
We investigate training commonsense knowledge models in a few-shot setting.
We find that knowledge produced by a few-shot trained system can achieve human quality ratings within 6% of knowledge produced by fully supervised systems.
arXiv Detail & Related papers (2021-01-01T19:01:09Z)
- Zero-Resource Knowledge-Grounded Dialogue Generation [29.357221039484568]
We propose representing the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables.
We show that our model can achieve comparable performance with state-of-the-art methods that rely on knowledge-grounded dialogues for training.
arXiv Detail & Related papers (2020-08-29T05:48:32Z)
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to this matter.
The model named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge.
arXiv Detail & Related papers (2020-02-18T11:59:59Z)
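Tracking prior and posterior distributions over candidate knowledge, as SKT does, can be illustrated with a minimal numeric sketch: the prior scores knowledge from the dialogue context alone, the posterior additionally sees the response, and training pulls the two together. The scores and the KL objective below are illustrative assumptions, not SKT's architecture.

```python
# Minimal sketch (illustrative, not SKT's model): prior and posterior
# categorical distributions over three candidate knowledge sentences.
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Prior scores come from the dialogue context alone; posterior scores also
# condition on the response, which is only available during training.
prior = softmax([0.5, 1.5, 0.2])      # p(knowledge | context)
posterior = softmax([0.1, 3.0, 0.0])  # q(knowledge | context, response)

# Training would minimize KL(posterior || prior), so the prior learns to
# anticipate which knowledge the actual response used.
kl = sum(q * math.log(q / p) for q, p in zip(posterior, prior))
print(round(kl, 4))
```

At inference time only the prior is available, so a well-trained prior that mimics the posterior is what allows the model to pick plausible knowledge before generating the response.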
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.