There Is No Standard Answer: Knowledge-Grounded Dialogue Generation with
Adversarial Activated Multi-Reference Learning
- URL: http://arxiv.org/abs/2210.12459v1
- Date: Sat, 22 Oct 2022 14:43:33 GMT
- Title: There Is No Standard Answer: Knowledge-Grounded Dialogue Generation with
Adversarial Activated Multi-Reference Learning
- Authors: Xueliang Zhao, Tingchen Fu, Chongyang Tao and Rui Yan
- Abstract summary: Knowledge-grounded conversation (KGC) shows excellent potential to deliver an engaging and informative response.
Existing approaches emphasize selecting one golden knowledge given a particular dialogue context, overlooking the one-to-many phenomenon in dialogue.
We propose a series of metrics to systematically assess the one-to-many efficacy of existing KGC models.
- Score: 29.093220439736527
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge-grounded conversation (KGC) shows excellent potential to deliver an
engaging and informative response. However, existing approaches emphasize
selecting one golden knowledge given a particular dialogue context, overlooking
the one-to-many phenomenon in dialogue. As a result, the existing paradigm
limits the diversity of knowledge selection and generation. To this end, we
establish a multi-reference KGC dataset and propose a series of metrics to
systematically assess the one-to-many efficacy of existing KGC models.
Furthermore, to extend the hypothesis space of knowledge selection to enhance
the mapping relationship between multiple knowledge and multiple responses, we
devise a span-based variational model and optimize the model in a wake-sleep
style with an ameliorated evidence lower bound objective to learn the
one-to-many generalization. Both automatic and human evaluations demonstrate
the efficacy of our approach.
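The paper's training objective is a variational evidence lower bound (ELBO) over latent knowledge selection. As an illustration only (the actual span-based model and ameliorated ELBO are not reproduced here), the sketch below computes a standard ELBO for a discrete latent knowledge variable: the expected log-likelihood of the response under the posterior over knowledge candidates, minus the KL divergence from the prior. All function and variable names are hypothetical.

```python
import math

def elbo(q, prior, log_lik):
    """Evidence lower bound for a discrete latent knowledge variable.

    q       -- posterior probabilities q(k | context, response) over candidates
    prior   -- prior probabilities p(k | context) over the same candidates
    log_lik -- log p(response | k) for each knowledge candidate k

    ELBO = E_q[log p(response | k)] - KL(q || prior)
    """
    expected_ll = sum(qk * lk for qk, lk in zip(q, log_lik))
    kl = sum(qk * math.log(qk / pk) for qk, pk in zip(q, prior) if qk > 0)
    return expected_ll - kl

# Two knowledge candidates: the posterior favors candidate 0,
# which also explains the response better (higher log-likelihood).
value = elbo(q=[0.7, 0.3], prior=[0.5, 0.5], log_lik=[-1.0, -2.0])
```

Because the KL term is non-negative, the ELBO never exceeds the expected log-likelihood; maximizing it simultaneously improves response likelihood and keeps the posterior close to the prior, which is the mechanism the abstract appeals to for learning the one-to-many mapping.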
Related papers
- UniMS-RAG: A Unified Multi-source Retrieval-Augmented Generation for Personalized Dialogue Systems [44.893215129952395]
Large Language Models (LLMs) have shown exceptional capabilities in many natural language understanding and generation tasks.
We decompose the use of multiple sources in generating personalized response into three sub-tasks: Knowledge Source Selection, Knowledge Retrieval, and Response Generation.
We propose a novel Unified Multi-Source Retrieval-Augmented Generation system (UniMS-RAG)
arXiv Detail & Related papers (2024-01-24T06:50:20Z)
- DialCLIP: Empowering CLIP as Multi-Modal Dialog Retriever [83.33209603041013]
We propose a parameter-efficient prompt-tuning method named DialCLIP for multi-modal dialog retrieval.
Our approach introduces a multi-modal context generator to learn context features which are distilled into prompts within the pre-trained vision-language model CLIP.
To facilitate various types of retrieval, we also design multiple experts to learn mappings from CLIP outputs to multi-modal representation space.
arXiv Detail & Related papers (2024-01-02T07:40:12Z)
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z)
- 'What are you referring to?' Evaluating the Ability of Multi-Modal Dialogue Models to Process Clarificational Exchanges [65.03196674816772]
Referential ambiguities arise in dialogue when a referring expression does not uniquely identify the intended referent for the addressee.
Addressees usually detect such ambiguities immediately and work with the speaker to repair them through meta-communicative Clarification Exchanges (CEs): a Clarification Request (CR) and a response to it.
Here, we argue that the ability to generate and respond to CRs imposes specific constraints on the architecture and objective functions of multi-modal, visually grounded dialogue models.
arXiv Detail & Related papers (2023-07-28T13:44:33Z)
- Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection [37.15893335147598]
A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses.
We propose a post-hoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model.
We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step.
arXiv Detail & Related papers (2022-03-22T00:42:27Z)
- Dialogue Response Selection with Hierarchical Curriculum Learning [52.3318584971562]
We study the learning of a matching model for dialogue response selection.
Motivated by the recent finding that random negatives are often too trivial to train a reliable model, we propose a hierarchical curriculum learning framework.
arXiv Detail & Related papers (2020-12-29T14:06:41Z)
- Enhancing Dialogue Generation via Multi-Level Contrastive Learning [57.005432249952406]
We propose a multi-level contrastive learning paradigm to model the fine-grained quality of the responses with respect to the query.
A Rank-aware (RC) network is designed to construct the multi-level contrastive optimization objectives.
We build a Knowledge Inference (KI) component to capture the keyword knowledge from the reference during training and exploit such information to encourage the generation of informative words.
arXiv Detail & Related papers (2020-09-19T02:41:04Z)
- Zero-Resource Knowledge-Grounded Dialogue Generation [29.357221039484568]
We propose representing the knowledge that bridges a context and a response and the way that the knowledge is expressed as latent variables.
We show that our model can achieve comparable performance with state-of-the-art methods that rely on knowledge-grounded dialogues for training.
arXiv Detail & Related papers (2020-08-29T05:48:32Z)
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to this matter.
The model named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge.
arXiv Detail & Related papers (2020-02-18T11:59:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.