Diverse and Faithful Knowledge-Grounded Dialogue Generation via
Sequential Posterior Inference
- URL: http://arxiv.org/abs/2306.01153v2
- Date: Sat, 5 Aug 2023 05:48:41 GMT
- Title: Diverse and Faithful Knowledge-Grounded Dialogue Generation via
Sequential Posterior Inference
- Authors: Yan Xu, Deqian Kong, Dehong Xu, Ziwei Ji, Bo Pang, Pascale Fung, Ying
Nian Wu
- Abstract summary: We present an end-to-end learning framework, termed Sequential Posterior Inference (SPI), capable of selecting knowledge and generating dialogues.
Unlike other methods, SPI does not require the inference network or assume a simple geometry of the posterior distribution.
- Score: 82.28542500317445
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The capability to generate responses with diversity and faithfulness using
factual knowledge is paramount for creating a human-like, trustworthy dialogue
system. Common strategies either adopt a two-step paradigm, which optimizes
knowledge selection and response generation separately and may overlook the
inherent correlation between the two tasks, or leverage a conditional
variational method to jointly optimize knowledge selection and response
generation by employing an inference network. In this paper, we present an
end-to-end learning framework, termed Sequential Posterior Inference (SPI),
capable of selecting knowledge and generating dialogues by approximately
sampling from the posterior distribution. Unlike other methods, SPI does not
require the inference network or assume a simple geometry of the posterior
distribution. This straightforward and intuitive inference procedure of SPI
directly queries the response generation model, allowing for accurate knowledge
selection and generation of faithful responses. In addition to modeling
contributions, our experimental results on two common dialogue datasets (Wizard
of Wikipedia and Holl-E) demonstrate that SPI outperforms previous strong
baselines according to both automatic and human evaluation metrics.
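Per the abstract, SPI selects knowledge by approximately sampling from the posterior distribution, directly querying the response generation model instead of a separate inference network. As a rough, non-authoritative sketch of that idea (not the paper's actual algorithm), one can score each knowledge candidate k by the generator's log-likelihood of the response, log p(r|k,c), plus a log-prior, then normalize with a softmax to approximate p(k|c,r). The `log_likelihood` word-overlap stand-in and all function names below are hypothetical illustrations, not the authors' implementation:

```python
import math

def log_likelihood(response, knowledge, context):
    # Toy stand-in for the generator's log p(response | knowledge, context):
    # here we simply count word overlap between knowledge and response.
    return float(len(set(response.split()) & set(knowledge.split())))

def posterior_knowledge_selection(context, response, candidates, log_prior=None):
    """Score each candidate by log p(r|k,c) + log p(k|c), then softmax-normalize
    to approximate the posterior p(k|c,r); return the argmax candidate."""
    if log_prior is None:
        log_prior = [0.0] * len(candidates)  # uniform prior over candidates
    scores = [log_likelihood(response, k, context) + lp
              for k, lp in zip(candidates, log_prior)]
    m = max(scores)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    best = max(range(len(candidates)), key=lambda i: probs[i])
    return candidates[best], probs

candidates = [
    "The Eiffel Tower is in Paris.",
    "Mount Fuji is the tallest peak in Japan.",
]
context = "Tell me about landmarks in France."
response = "The Eiffel Tower stands in Paris."
selected, probs = posterior_knowledge_selection(context, response, candidates)
print(selected)  # the candidate best supported by the response
```

In an actual system the overlap heuristic would be replaced by the trained generator's likelihood; the key point the abstract emphasizes is that the generator itself, not an auxiliary inference network, supplies the selection signal.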
Related papers
- Multi-turn Response Selection with Commonsense-enhanced Language Models [32.921901489497714]
We design a Siamese network in which a pre-trained language model is merged with a graph neural network (SinLG).
SinLG takes advantage of pre-trained language models (PLMs) to capture word correlations in the context and response candidates.
The GNN assists the PLM during fine-tuning by arousing its related memories to attain better performance.
arXiv Detail & Related papers (2024-07-26T03:13:47Z)
- UniMS-RAG: A Unified Multi-source Retrieval-Augmented Generation for Personalized Dialogue Systems [44.893215129952395]
Large Language Models (LLMs) have shown exceptional capabilities in many natural language understanding and generation tasks.
We decompose the use of multiple sources in generating personalized response into three sub-tasks: Knowledge Source Selection, Knowledge Retrieval, and Response Generation.
We propose a novel Unified Multi-Source Retrieval-Augmented Generation system (UniMS-RAG)
arXiv Detail & Related papers (2024-01-24T06:50:20Z)
- Learning to Express in Knowledge-Grounded Conversation [62.338124154016825]
We consider two aspects of knowledge expression, namely the structure of the response and style of the content in each part.
We propose a segmentation-based generation model and optimize the model by a variational approach to discover the underlying pattern of knowledge expression in a response.
arXiv Detail & Related papers (2022-04-12T13:43:47Z)
- A Template-guided Hybrid Pointer Network for Knowledge-based Task-oriented Dialogue Systems [15.654119998970499]
We propose a template-guided hybrid pointer network for the knowledge-based task-oriented dialogue system.
We design a memory pointer network model with a gating mechanism to fully exploit the semantic correlation between the retrieved answers and the ground-truth response.
arXiv Detail & Related papers (2021-06-10T15:49:26Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Diversifying Task-oriented Dialogue Response Generation with Prototype Guided Paraphrasing [52.71007876803418]
Existing methods for Dialogue Response Generation (DRG) in Task-oriented Dialogue Systems (TDSs) can be grouped into two categories: template-based and corpus-based.
We propose a prototype-based paraphrasing neural network, called P2-Net, which aims to enhance the quality of responses in terms of both precision and diversity.
arXiv Detail & Related papers (2020-08-07T22:25:36Z)
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to this matter.
The model, named sequential knowledge transformer (SKT), can keep track of the prior and posterior distributions over knowledge.
arXiv Detail & Related papers (2020-02-18T11:59:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.