Increasing Faithfulness in Knowledge-Grounded Dialogue with Controllable Features
- URL: http://arxiv.org/abs/2107.06963v1
- Date: Wed, 14 Jul 2021 19:52:12 GMT
- Title: Increasing Faithfulness in Knowledge-Grounded Dialogue with Controllable Features
- Authors: Hannah Rashkin, David Reitter, Gaurav Singh Tomar, Dipanjan Das
- Abstract summary: We discuss the challenges of training a generative neural dialogue model for such systems that is controlled to stay faithful to the evidence.
Existing datasets contain a mix of conversational responses that are faithful to selected evidence as well as more subjective or chit-chat style responses.
We propose different evaluation measures to disentangle these different styles of responses by quantifying their informativeness and objectivity.
- Score: 16.676172815172166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge-grounded dialogue systems are intended to convey information that
is based on evidence provided in a given source text. We discuss the challenges
of training a generative neural dialogue model for such systems that is
controlled to stay faithful to the evidence. Existing datasets contain a mix of
conversational responses that are faithful to selected evidence as well as more
subjective or chit-chat style responses. We propose different evaluation
measures to disentangle these different styles of responses by quantifying their
informativeness and objectivity. At training time, additional inputs based on
these evaluation measures are given to the dialogue model. At generation time,
these additional inputs act as stylistic controls that encourage the model to
generate responses that are faithful to the provided evidence. We also
investigate the usage of additional controls at decoding time using resampling
techniques. In addition to automatic metrics, we perform a human evaluation
study where raters judge the output of these controlled generation models to be
generally more objective and faithful to the evidence compared to baseline
dialogue systems.
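To make the control-token mechanism concrete, here is a minimal sketch in which lexical overlap with the evidence and first-person-pronoun rate stand in for the paper's informativeness and objectivity measures; the token strings, thresholds, and input format are illustrative assumptions, not the authors' exact setup.
```python
# Rough sketch of the control-feature recipe: proxy measures are bucketed
# into discrete control tokens prepended to the model input at training
# time; at generation time the tokens are fixed to the desired style.
# Thresholds and token strings below are assumptions.

def lexical_precision(response: str, evidence: str) -> float:
    """Fraction of response tokens that also appear in the evidence
    (a crude stand-in for an informativeness/faithfulness measure)."""
    tokens = response.lower().split()
    evidence_vocab = set(evidence.lower().split())
    return sum(t in evidence_vocab for t in tokens) / max(len(tokens), 1)

def first_person_rate(response: str) -> float:
    """Rate of first-person pronouns (a crude stand-in for subjectivity)."""
    pronouns = {"i", "me", "my", "mine", "we", "our"}
    tokens = response.lower().split()
    return sum(t in pronouns for t in tokens) / max(len(tokens), 1)

def control_prefix(response: str, evidence: str) -> str:
    """Discretize the measures into control tokens (thresholds made up)."""
    objective = first_person_rate(response) < 0.05
    precise = lexical_precision(response, evidence) > 0.7
    return ("<objective>" if objective else "<personal>") + " " + \
           ("<high-precision>" if precise else "<low-precision>")

def training_input(history: str, evidence: str, gold_response: str) -> str:
    """Training-time input: controls are derived from the gold response."""
    return f"{control_prefix(gold_response, evidence)} evidence: {evidence} history: {history}"
```
At generation time one would prepend the desired controls (e.g. "<objective> <high-precision>") rather than measuring them; the resampling variant mentioned in the abstract would instead draw several responses and keep only those whose measured style matches the requested controls.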
Related papers
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z)
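A minimal sketch of the generation re-scoring pattern that PICK instantiates, with a simple token-overlap scorer and made-up weights standing in for PICK's actual faithfulness and relevance scoring:
```python
# Minimal re-scoring sketch: sample several candidates from any dialogue
# model, score each against the knowledge and the history, keep the best.
# The overlap scorer and the 0.7/0.3 weights are illustrative stand-ins.

def overlap(a: str, b: str) -> float:
    """Fraction of a's unique tokens that also occur in b."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta), 1)

def pick_response(candidates: list[str], knowledge: str, history: str) -> str:
    """Return the candidate that best balances faithfulness and relevance."""
    return max(candidates, key=lambda c: 0.7 * overlap(c, knowledge)
                                       + 0.3 * overlap(c, history))
```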
- Controllable Mixed-Initiative Dialogue Generation through Prompting [50.03458333265885]
Mixed-initiative dialogue tasks involve repeated exchanges of information and conversational control.
Agents gain control by generating responses that follow particular dialogue intents or strategies, prescribed by a policy planner.
The standard approach has been to fine-tune pre-trained language models to perform generation conditioned on these intents.
We instead prompt large language models as a drop-in replacement for fine-tuning on conditional generation.
arXiv Detail & Related papers (2023-05-06T23:11:25Z)
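The prompting approach might be sketched as below: the planner's chosen intent is verbalized into an instruction for an off-the-shelf LLM, in place of an intent-conditioned fine-tuned generator. Intent names and template wording are assumptions for illustration.
```python
# Sketch: verbalize a planner-chosen dialogue intent into the prompt of an
# instruction-following LLM, replacing intent-conditioned fine-tuning.
# Intent names and template wording are assumptions.

INTENT_INSTRUCTIONS = {
    "ask_clarifying_question": "Ask one short clarifying question.",
    "provide_information": "Answer using only the background knowledge.",
    "redirect": "Politely steer the conversation back to the task.",
}

def build_prompt(intent: str, history: list[str], knowledge: str = "") -> str:
    """Render a prompt that any chat LLM can complete."""
    return (
        f"Background knowledge: {knowledge}\n"
        "Conversation so far:\n" + "\n".join(history) + "\n"
        f"Instruction: {INTENT_INSTRUCTIONS[intent]}\nResponse:"
    )
```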
- Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue [54.98184262897166]
We investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses.
We propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input.
arXiv Detail & Related papers (2023-02-12T10:13:00Z)
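One plausible realization of the position-embedding modification, assuming the fix is to give every knowledge snippet the same starting position so that snippet order carries no signal (the paper's exact scheme may differ):
```python
# Sketch: assign parallel position ids so each knowledge snippet restarts
# at the same index, removing the order signal from position embeddings.
# Whether this matches the paper's exact modification is an assumption.

def parallel_position_ids(snippet_lengths: list[int], start: int = 0) -> list[int]:
    """Position ids for concatenated snippets, each restarting at `start`."""
    ids: list[int] = []
    for length in snippet_lengths:
        ids.extend(range(start, start + length))
    return ids

# Two snippets of lengths 3 and 4 share positions: [0, 1, 2, 0, 1, 2, 3]
assert parallel_position_ids([3, 4]) == [0, 1, 2, 0, 1, 2, 3]
```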
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection [37.15893335147598]
Current neural dialog models tend to suffer from a lack of specificity and informativeness in their generated responses.
We propose a post-hoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model.
We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step.
arXiv Detail & Related papers (2022-03-22T00:42:27Z)
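The pipeline shape described above (retrieve, inject per snippet, rank unsupervised) can be skeletonized as below; each component is reduced to a simple stand-in, and in particular the paper's injection step is gradient-based decoding, not the string concatenation used here:
```python
# Skeleton of post-hoc knowledge injection with stand-in components.

def retrieve_snippets(history: str, initial: str, corpus: list[str], k: int = 3) -> list[str]:
    """Stand-in retriever: rank corpus snippets by token overlap with the query."""
    query = set((history + " " + initial).lower().split())
    return sorted(corpus, key=lambda s: -len(query & set(s.lower().split())))[:k]

def inject(initial: str, snippet: str) -> str:
    """Stand-in for the paper's gradient-based decoding step, which edits the
    response under gradients from the snippet; here we merely append it."""
    return f"{initial} {snippet}"

def respond(history: str, initial: str, corpus: list[str]) -> str:
    candidates = [inject(initial, s) for s in retrieve_snippets(history, initial, corpus)]
    hist = set(history.lower().split())
    # Stand-in unsupervised ranker: prefer overlap with the dialog history.
    return max(candidates, key=lambda c: len(hist & set(c.lower().split())))
```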
- Evaluating Groundedness in Dialogue Systems: The BEGIN Benchmark [29.722504033424382]
Knowledge-grounded dialogue agents are systems designed to conduct a conversation based on externally provided background information, such as a Wikipedia page.
We introduce the Benchmark for Evaluation of Grounded INteraction (BEGIN).
BEGIN consists of 8113 dialogue turns generated by language-model-based dialogue systems, accompanied by human annotations specifying the relationship between the system's response and the background information.
arXiv Detail & Related papers (2021-04-30T20:17:52Z)
- Zero-Resource Knowledge-Grounded Dialogue Generation [29.357221039484568]
We propose representing both the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables.
We show that our model can achieve comparable performance with state-of-the-art methods that rely on knowledge-grounded dialogues for training.
arXiv Detail & Related papers (2020-08-29T05:48:32Z)
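In generic form, such a model marginalizes over an unobserved knowledge variable and an unobserved expression variable; a plausible factorization (an assumption for illustration, not necessarily the paper's exact one) is:
```latex
P(r \mid c) = \sum_{z_k,\, z_e} P(z_k \mid c)\, P(z_e \mid c, z_k)\, P(r \mid c, z_k, z_e)
```
where $c$ is the dialogue context, $r$ the response, $z_k$ the latent knowledge, and $z_e$ the latent way of expressing it. Since neither latent is observed, training does not require dialogues paired with ground-truth knowledge, which is presumably what makes the zero-resource setting feasible.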
- A Controllable Model of Grounded Response Generation [122.7121624884747]
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process.
We propose a framework that we call controllable grounded response generation (CGRG).
We show that using this framework, a transformer-based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
arXiv Detail & Related papers (2020-05-01T21:22:08Z)
- Low-Resource Knowledge-Grounded Dialogue Generation [74.09352261943913]
We consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available.
We devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model.
With only 1/8 of the training data, our model achieves state-of-the-art performance and generalizes well to out-of-domain knowledge.
arXiv Detail & Related papers (2020-02-24T16:20:32Z)
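The disentangling idea could look roughly like this in PyTorch: decoder parameters are split into a language-modeling group, pre-trainable on plentiful ungrounded data, and a knowledge-dependent group that is the only part fitted on the scarce grounded examples. Module names and sizes are hypothetical, and the forward pass is omitted.
```python
import torch.nn as nn

# Rough sketch of a disentangled decoder (forward pass omitted). The point
# is that only the knowledge-dependent parameters are trained on the small
# knowledge-grounded dataset; everything else stays frozen.

class DisentangledDecoder(nn.Module):
    def __init__(self, d_model: int = 512, vocab_size: int = 32000):
        super().__init__()
        # Language-modeling path: pre-trained on plentiful ungrounded text.
        self.language_model = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        # Knowledge path: the isolated, knowledge-dependent parameters.
        self.knowledge_attention = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.vocab_projection = nn.Linear(d_model, vocab_size)

def grounded_training_parameters(model: DisentangledDecoder):
    """Optimizer group for the limited knowledge-grounded examples; the
    language-modeling parameters stay frozen."""
    return model.knowledge_attention.parameters()
```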