Zero-Resource Knowledge-Grounded Dialogue Generation
- URL: http://arxiv.org/abs/2008.12918v2
- Date: Fri, 14 May 2021 17:13:10 GMT
- Title: Zero-Resource Knowledge-Grounded Dialogue Generation
- Authors: Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, Chongyang Tao
- Abstract summary: We propose representing both the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables.
We show that our model achieves performance comparable to state-of-the-art methods that rely on knowledge-grounded dialogues for training.
- Score: 29.357221039484568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While neural conversation models have shown great potential for
generating informative and engaging responses by introducing external
knowledge, learning such a model often requires knowledge-grounded dialogues
that are difficult to obtain. To overcome this data challenge and reduce the
cost of building a knowledge-grounded dialogue system, we explore the problem
under a zero-resource setting, assuming that no context-knowledge-response
triples are needed for training. To this end, we propose representing both the
knowledge that bridges a context and a response, and the way that knowledge is
expressed, as latent variables, and devise a variational approach that can
effectively estimate a generation model from a dialogue corpus and a knowledge
corpus that are independent of each other. Evaluation results on three
benchmarks of knowledge-grounded dialogue generation indicate that our model
achieves performance comparable to state-of-the-art methods that rely on
knowledge-grounded dialogues for training, and exhibits good generalization
across different topics and datasets.
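As a concrete illustration of the recipe the abstract describes, below is a minimal sketch of a variational objective with a latent variable standing in for the bridging knowledge: a prior network conditioned on the context alone, a posterior network that also sees the response, and an ELBO that trades off reconstruction against a KL term. All class names, dimensions, and the Gaussian parameterization are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, NOT the authors' code: the bridging knowledge is a
# continuous latent variable z with a prior p(z | context) and a posterior
# q(z | context, response); training maximizes an ELBO. Names, sizes, and
# the Gaussian parameterization are illustrative assumptions.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence


class LatentKnowledgeDialogue(nn.Module):
    def __init__(self, hidden=256, latent=64):
        super().__init__()
        self.prior_net = nn.Linear(hidden, 2 * latent)          # p(z | c)
        self.posterior_net = nn.Linear(2 * hidden, 2 * latent)  # q(z | c, r)
        self.decoder = nn.Linear(hidden + latent, hidden)       # stands in for p(r | c, z)

    @staticmethod
    def _gaussian(params):
        mu, log_var = params.chunk(2, dim=-1)
        return Normal(mu, (0.5 * log_var).exp())

    def elbo(self, ctx, resp):
        prior = self._gaussian(self.prior_net(ctx))
        posterior = self._gaussian(self.posterior_net(torch.cat([ctx, resp], -1)))
        z = posterior.rsample()                                 # reparameterized sample
        recon = self.decoder(torch.cat([ctx, z], -1))
        rec_term = ((recon - resp) ** 2).mean()                 # placeholder for -log p(r | c, z)
        kl_term = kl_divergence(posterior, prior).mean()
        return -(rec_term + kl_term)                            # ELBO to be maximized


if __name__ == "__main__":
    model = LatentKnowledgeDialogue()
    ctx, resp = torch.randn(4, 256), torch.randn(4, 256)
    print(model.elbo(ctx, resp).item())                         # one ELBO estimate
```

The point the sketch illustrates is the KL term tying a context-only prior to a context-plus-response posterior; this is what allows a generation model to be estimated without explicit knowledge labels on the dialogue side.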
Related papers
- Bridging Information Gaps in Dialogues With Grounded Exchanges Using Knowledge Graphs [4.449835214520727]
We study the potential of large language models for conversational grounding.
Our approach involves annotating human conversations across five knowledge domains to create a new dialogue corpus called BridgeKG.
Our findings offer insights into how these models use in-context learning for conversational grounding tasks, and into their common prediction errors.
arXiv Detail & Related papers (2024-08-02T08:07:15Z) - Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive
Learning [71.8876256714229]
We propose an entity-based contrastive learning framework for improving the robustness of knowledge-grounded dialogue systems.
Our method achieves new state-of-the-art performance in terms of automatic evaluation scores.
arXiv Detail & Related papers (2024-01-09T05:16:52Z) - Contextual Knowledge Learning For Dialogue Generation [13.671946960656467]
We present a novel approach to context and knowledge weighting as an integral part of model training.
We guide the model training through a Contextual Knowledge Learning process which involves Latent Vectors for context and knowledge.
arXiv Detail & Related papers (2023-05-29T16:54:10Z) - Knowledge-Grounded Dialogue Generation with a Unified Knowledge
Representation [78.85622982191522]
Existing systems perform poorly on unseen topics because of the limited range of topics covered in their training data.
We present PLUG, a language model that homogenizes different knowledge sources into a unified knowledge representation.
It achieves performance comparable to state-of-the-art methods under a fully-supervised setting.
arXiv Detail & Related papers (2021-12-15T07:11:02Z) - A Three-Stage Learning Framework for Low-Resource Knowledge-Grounded
Dialogue Generation [0.9926500244448218]
We propose a novel three-stage learning framework based on weakly supervised learning that benefits from large-scale ungrounded dialogues and an unstructured knowledge base.
Our approach outperforms other state-of-the-art methods with less training data, and it still performs well even in the zero-resource scenario.
arXiv Detail & Related papers (2021-09-09T08:32:02Z) - Knowledge-Grounded Dialogue Generation with Pre-trained Language Models [74.09352261943911]
We study knowledge-grounded dialogue generation with pre-trained language models.
We propose equipping response generation defined by a pre-trained language model with a knowledge selection module.
arXiv Detail & Related papers (2020-10-17T16:49:43Z) - Knowledge Injection into Dialogue Generation via Language Models [85.65843021510521]
InjK is a two-stage approach for injecting knowledge into a dialogue generation model.
First, we train a large-scale language model and query it for textual knowledge.
Second, we frame a dialogue generation model to sequentially generate textual knowledge and a corresponding response (a hedged sketch of this two-stage recipe follows this list).
arXiv Detail & Related papers (2020-04-30T07:31:24Z) - Low-Resource Knowledge-Grounded Dialogue Generation [74.09352261943913]
We consider knowledge-grounded dialogue generation under the natural assumption that only limited training examples are available.
We devise a disentangled response decoder in order to isolate the parameters that depend on knowledge-grounded dialogues from the rest of the generation model.
With only 1/8 of the training data, our model achieves state-of-the-art performance and generalizes well to out-of-domain knowledge.
arXiv Detail & Related papers (2020-02-24T16:20:32Z) - Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to sequential knowledge selection.
The model, named sequential knowledge transformer (SKT), keeps track of the prior and posterior distributions over knowledge (see the second sketch after this list).
arXiv Detail & Related papers (2020-02-18T11:59:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.