Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue
- URL: http://arxiv.org/abs/2002.07510v2
- Date: Tue, 16 Jun 2020 02:04:57 GMT
- Title: Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue
- Authors: Byeongchang Kim, Jaewoo Ahn, Gunhee Kim
- Abstract summary: We propose a sequential latent variable model as the first approach to knowledge selection in multi-turn knowledge-grounded dialogue.
The model, named sequential knowledge transformer (SKT), keeps track of the prior and posterior distributions over knowledge.
- Score: 51.513276162736844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge-grounded dialogue is the task of generating an informative response
based on both the discourse context and external knowledge. Focusing on better
modeling of knowledge selection in multi-turn knowledge-grounded dialogue,
we propose a sequential latent variable model as the first approach to this
problem. The model, named sequential knowledge transformer (SKT), keeps track
of the prior and posterior distributions over knowledge; as a result, it can not
only reduce the ambiguity caused by the diversity of knowledge selection in
conversation but also better leverage the response information to choose
knowledge properly. Our experimental results show that the proposed model
improves knowledge selection accuracy and, in turn, the quality of utterance
generation. We achieve new state-of-the-art performance on Wizard of
Wikipedia (Dinan et al., 2019), one of the largest and most challenging
benchmarks. We further validate the effectiveness of our model over existing
conversational methods on Holl-E (Moghe et al., 2018), another
knowledge-grounded dialogue dataset.
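As a rough illustration of the abstract above, the sketch below shows sequential latent knowledge selection with separate prior and posterior networks over pre-computed sentence embeddings. All names (`SequentialKnowledgeSelector`, `prior_query`, `post_query`) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of sequential latent knowledge selection (illustrative,
# not the SKT implementation): a recurrent state summarizes the dialogue,
# a prior network scores knowledge from that state alone, and a posterior
# network also sees the response; KL(posterior || prior) teaches the prior
# to imitate the response-aware posterior it must replace at test time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequentialKnowledgeSelector(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.state_rnn = nn.GRUCell(dim, dim)       # tracks dialogue history
        self.prior_query = nn.Linear(dim, dim)      # state -> query over knowledge
        self.post_query = nn.Linear(2 * dim, dim)   # [state; response] -> query

    def step(self, state, utterance, knowledge, response=None):
        """One turn. state/utterance/response: (dim,); knowledge: (K, dim)."""
        state = self.state_rnn(utterance.unsqueeze(0), state.unsqueeze(0)).squeeze(0)
        prior_logits = knowledge @ self.prior_query(state)          # (K,)
        if response is None:                        # inference: prior only
            return state, knowledge[prior_logits.argmax()], torch.tensor(0.0)
        post_logits = knowledge @ self.post_query(torch.cat([state, response]))
        kl = F.kl_div(F.log_softmax(prior_logits, -1),              # KL(post||prior)
                      F.log_softmax(post_logits, -1),
                      log_target=True, reduction="sum")
        idx = torch.distributions.Categorical(logits=post_logits).sample()
        return state, knowledge[idx], kl

sel = SequentialKnowledgeSelector()
state = torch.zeros(128)
state, picked, kl = sel.step(state, torch.randn(128), torch.randn(12, 128),
                             response=torch.randn(128))
```

At training time the response-aware posterior selects the knowledge and the KL term regularizes the prior toward it; at inference only the prior is available, which is why carrying the sequential state across turns matters.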
Related papers
- CET2: Modelling Topic Transitions for Coherent and Engaging
Knowledge-Grounded Conversations [44.32118148085158]
Knowledge-grounded dialogue systems aim to generate coherent and engaging responses based on the dialogue contexts and selected external knowledge.
Previous knowledge selection methods tend to rely too heavily on the dialogue contexts or over-emphasize the new information in the selected knowledge.
We introduce a Coherent and Engaging Topic Transition (CET2) framework that models topic transitions to select knowledge coherent with the conversation context.
arXiv Detail & Related papers (2024-03-04T08:55:34Z)
- Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue [54.98184262897166]
We investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses.
We propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of the knowledge input (see the sketch after this entry).
arXiv Detail & Related papers (2023-02-12T10:13:00Z)
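One plausible way to realize the position-embedding modification is to restart position ids for every knowledge snippet, so reordering snippets leaves each snippet's positional signal unchanged; the helper below is a hypothetical illustration under that assumption, not necessarily the paper's exact scheme.

```python
# Hedged sketch: order-invariant position ids for concatenated knowledge
# snippets. Each snippet restarts at the same offset, so the model cannot
# tell "first" from "last" snippet by position alone.
import torch

def shared_position_ids(snippet_lengths, offset=0):
    """snippet_lengths: token counts per knowledge snippet."""
    return torch.cat([torch.arange(offset, offset + n) for n in snippet_lengths])

print(shared_position_ids([4, 2, 3]))
# tensor([0, 1, 2, 3, 0, 1, 0, 1, 2])
```

The resulting ids would be fed to the encoder through its `position_ids` input (supported, for example, by Hugging Face transformer models).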
- There Is No Standard Answer: Knowledge-Grounded Dialogue Generation with Adversarial Activated Multi-Reference Learning [29.093220439736527]
Knowledge-grounded conversation (KGC) shows excellent potential to deliver an engaging and informative response.
Existing approaches emphasize selecting a single gold knowledge sentence for a given dialogue context, overlooking the one-to-many phenomenon in dialogue.
We propose a series of metrics to systematically assess the one-to-many efficacy of existing KGC models.
arXiv Detail & Related papers (2022-10-22T14:43:33Z)
- KAT: A Knowledge Augmented Transformer for Vision-and-Language [56.716531169609915]
We propose a novel model, the Knowledge Augmented Transformer (KAT), which achieves a strong state-of-the-art result on the open-domain multimodal task of OK-VQA.
Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, jointly reasoning over both knowledge sources during answer generation (see the sketch after this entry).
Our analysis further shows that explicit knowledge integration improves the interpretability of model predictions.
arXiv Detail & Related papers (2021-12-16T04:37:10Z)
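A minimal sketch of the joint-reasoning idea, assuming the two knowledge sources arrive as token sequences: each is encoded separately and the decoder cross-attends over the concatenated memories. Module names and sizes are illustrative, not KAT's actual architecture.

```python
# Illustrative fusion of explicit (retrieved) and implicit (LM-generated)
# knowledge in one encoder-decoder; positional encodings omitted for brevity.
import torch
import torch.nn as nn

class JointKnowledgeSeq2Seq(nn.Module):
    def __init__(self, vocab=1000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, 4, batch_first=True)
        self.explicit_enc = nn.TransformerEncoder(layer, 2)  # retrieved facts
        self.implicit_enc = nn.TransformerEncoder(layer, 2)  # LM-generated hints
        dec = nn.TransformerDecoderLayer(dim, 4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, 2)
        self.out = nn.Linear(dim, vocab)

    def forward(self, explicit_ids, implicit_ids, answer_ids):
        mem = torch.cat([self.explicit_enc(self.embed(explicit_ids)),
                         self.implicit_enc(self.embed(implicit_ids))], dim=1)
        # Cross-attention over the joint memory reasons over both sources.
        return self.out(self.decoder(self.embed(answer_ids), mem))

model = JointKnowledgeSeq2Seq()
logits = model(torch.randint(0, 1000, (2, 16)),   # explicit knowledge tokens
               torch.randint(0, 1000, (2, 16)),   # implicit knowledge tokens
               torch.randint(0, 1000, (2, 8)))    # answer prefix
print(logits.shape)                               # torch.Size([2, 8, 1000])
```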
- GKS: Graph-based Knowledge Selector for Task-oriented Dialog System [0.688204255655161]
The Graph-Knowledge Selector (GKS) outperforms the SOTA models on knowledge selection for the 9th Dialog System Technology Challenge (DSTC9) dataset.
GKS makes knowledge selection decisions by jointly considering every knowledge embedding generated from the language model, without relying on sequential features (sketched after this entry).
arXiv Detail & Related papers (2021-12-07T14:16:26Z)
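Since GKS scores all knowledge embeddings jointly rather than sequentially, one way to picture it is attention over a fully connected graph of candidates followed by a shared scoring head; the sketch below is a loose analogy, not the paper's model.

```python
# Hedged sketch: joint, order-free knowledge selection. Self-attention over
# the candidate set acts like message passing on a complete graph.
import torch
import torch.nn as nn

class JointKnowledgeScorer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.interact = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, cand_emb):                    # (batch, K, dim)
        mixed, _ = self.interact(cand_emb, cand_emb, cand_emb)
        return self.score(mixed).squeeze(-1)        # (batch, K) logits

scorer = JointKnowledgeScorer()
print(scorer(torch.randn(2, 10, 128)).argmax(dim=-1))  # selected candidate ids
```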
- Knowledge-Grounded Dialogue with Reward-Driven Knowledge Selection [1.1633929083694388]
Knoformer is a dialogue response generation model trained with reinforcement learning.
It automatically selects one or more pieces of relevant knowledge from the knowledge pool and needs no knowledge labels during training (see the sketch after this entry).
arXiv Detail & Related papers (2021-08-31T08:53:08Z)
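Reward-driven selection without knowledge labels can be pictured as a REINFORCE loop: a policy samples a knowledge subset, the generator's response quality serves as the reward, and the policy gradient reinforces helpful selections. The toy below only illustrates that loop; the reward function and policy here are stand-ins, not Knoformer's.

```python
# Hedged REINFORCE sketch for label-free knowledge selection.
import torch
import torch.nn as nn

policy = nn.Linear(128, 1)                      # scores each knowledge item
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
knowledge_emb = torch.randn(10, 128)            # 10 candidate knowledge items

def reward_fn(mask):
    # Stand-in for response quality with the selected knowledge; in practice
    # this would come from the generator (e.g., gold-response likelihood).
    return mask[:2].sum()                       # pretend the first 2 items help

for step in range(200):
    probs = torch.sigmoid(policy(knowledge_emb)).squeeze(-1)
    dist = torch.distributions.Bernoulli(probs) # sample a knowledge subset
    mask = dist.sample()
    loss = -(reward_fn(mask).detach() * dist.log_prob(mask).sum())  # REINFORCE
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.sigmoid(policy(knowledge_emb)).squeeze(-1).round())
```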
- Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters [52.725200145600624]
We propose KnowExpert, which bypasses the retrieval process by injecting prior knowledge into pre-trained language models with lightweight adapters (sketched after this entry).
Experimental results show that KnowExpert performs comparably with the retrieval-based baselines.
arXiv Detail & Related papers (2021-05-13T12:33:23Z)
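The "lightweight adapter" here is most likely the standard residual bottleneck inserted into a frozen pre-trained LM; the sketch below shows that generic building block with assumed sizes, not KnowExpert's exact configuration.

```python
# Minimal bottleneck adapter: only these few parameters are trained, so
# topic knowledge can be "stored" without retrieval at inference time.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, hidden):
        # Residual: hidden states pass through with a small learned correction.
        return hidden + self.up(torch.relu(self.down(hidden)))

adapter = Adapter()
h = torch.randn(2, 16, 768)        # hidden states from a frozen LM layer
print(adapter(h).shape)            # torch.Size([2, 16, 768])
```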
- Difference-aware Knowledge Selection for Knowledge-grounded Conversation Generation [101.48602006200409]
We propose a difference-aware knowledge selection method for multi-turn knowledge-grounded dialogs.
It first computes the difference between the candidate knowledge sentences provided at the current turn and those chosen in the previous turns.
Then, the differential information is fused with or disentangled from the contextual information to facilitate the final knowledge selection (see the sketch after this entry).
arXiv Detail & Related papers (2020-09-20T07:47:26Z)
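A minimal sketch of the difference-then-fuse step described above, assuming mean-pooled embeddings of previously selected knowledge; the names and fusion layer are illustrative, not the paper's.

```python
# Hedged sketch: score current candidates by what they add relative to the
# knowledge already used in earlier turns, fused with the dialogue context.
import torch
import torch.nn as nn

class DifferenceAwareSelector(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.fuse = nn.Linear(3 * dim, dim)   # [candidate; difference; context]
        self.score = nn.Linear(dim, 1)

    def forward(self, candidates, prev_selected, context):
        """candidates: (K, dim); prev_selected: (T, dim); context: (dim,)"""
        diff = candidates - prev_selected.mean(dim=0)    # what's new per candidate
        ctx = context.expand(candidates.size(0), -1)
        fused = torch.tanh(self.fuse(torch.cat([candidates, diff, ctx], dim=-1)))
        return self.score(fused).squeeze(-1)             # (K,) selection logits

sel = DifferenceAwareSelector()
print(sel(torch.randn(10, 128), torch.randn(3, 128), torch.randn(128)).argmax())
```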
- Zero-Resource Knowledge-Grounded Dialogue Generation [29.357221039484568]
We propose to represent both the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables.
We show that our model can achieve comparable performance with state-of-the-art methods that rely on knowledge-grounded dialogues for training.
arXiv Detail & Related papers (2020-08-29T05:48:32Z)
- Knowledge Injection into Dialogue Generation via Language Models [85.65843021510521]
InjK is a two-stage approach for injecting knowledge into a dialogue generation model.
First, we train a large-scale language model and query it for textual knowledge.
Second, we train the dialogue generation model to sequentially generate the textual knowledge and a corresponding response (see the sketch after this entry).
arXiv Detail & Related papers (2020-04-30T07:31:24Z)
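The two-stage recipe can be mimicked with a single causal LM that first decodes a knowledge segment and then a response conditioned on it. The sketch below uses off-the-shelf GPT-2 and ad-hoc `Knowledge:`/`Response:` tags purely for illustration; InjK trains dedicated models rather than prompting a vanilla LM like this.

```python
# Hedged two-stage sketch: generate textual knowledge first, then condition
# the response on it. A vanilla GPT-2 will not follow this format well; a
# model fine-tuned for the task would be used in practice.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

context = "User: Who wrote The Old Man and the Sea?"

# Stage 1: query the language model itself for textual knowledge.
stage1 = context + "\nKnowledge:"
ids = tok(stage1, return_tensors="pt").input_ids
out = lm.generate(ids, max_new_tokens=20, do_sample=False,
                  pad_token_id=tok.eos_token_id)
knowledge = tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

# Stage 2: generate the response conditioned on context + generated knowledge.
stage2 = stage1 + knowledge + "\nResponse:"
ids = tok(stage2, return_tensors="pt").input_ids
out = lm.generate(ids, max_new_tokens=30, do_sample=False,
                  pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```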
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.