Controlled Hallucinations: Learning to Generate Faithfully from Noisy Data
- URL: http://arxiv.org/abs/2010.05873v1
- Date: Mon, 12 Oct 2020 17:25:02 GMT
- Title: Controlled Hallucinations: Learning to Generate Faithfully from Noisy Data
- Authors: Katja Filippova
- Abstract summary: We present a technique to treat such hallucinations as a controllable aspect of the generated text.
On the WikiBio corpus, a particularly noisy dataset, we demonstrate the efficacy of the technique both in an automatic and in a human evaluation.
- Score: 1.0914300987810126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural text generation (data- or text-to-text) demonstrates remarkable
performance when training data is abundant, which for many applications is not
the case. To collect a large corpus of parallel data, heuristic rules are often
used but they inevitably let noise into the data, such as phrases in the output
which cannot be explained by the input. Consequently, models pick up on the
noise and may hallucinate--generate fluent but unsupported text. Our
contribution is a simple but powerful technique to treat such hallucinations as
a controllable aspect of the generated text, without dismissing any input and
without modifying the model architecture. On the WikiBio corpus (Lebret et al.,
2016), a particularly noisy dataset, we demonstrate the efficacy of the
technique both in an automatic and in a human evaluation.
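The abstract leaves the mechanism itself implicit. One way to realise "hallucinations as a controllable aspect of the generated text" without discarding any training example or touching the architecture is to attach a control value to each training input that reflects how noisy its reference text is, and then fix that value to the cleanest setting at inference time. The Python sketch below illustrates only this idea; the token-overlap heuristic, the bucketing, and the <hal=k> prefix are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: hallucination control via a control prefix on the input.
# ASSUMPTIONS: the overlap heuristic, the bucketing, and the <hal=k> token
# scheme are illustrative stand-ins, not the procedure described in the paper.

def estimate_unsupported_fraction(source_fields, target_text):
    """Crude noise estimate: share of target tokens absent from the input."""
    source_tokens = {tok.lower() for value in source_fields.values()
                     for tok in value.split()}
    target_tokens = target_text.lower().split()
    unsupported = [tok for tok in target_tokens if tok not in source_tokens]
    return len(unsupported) / max(len(target_tokens), 1)

def bucket(fraction, num_buckets=4):
    """Map the noise estimate to a small set of discrete control values."""
    return min(int(fraction * num_buckets), num_buckets - 1)

def build_training_pair(source_fields, target_text):
    """Prepend a control token so the model learns to tie it to noise level."""
    level = bucket(estimate_unsupported_fraction(source_fields, target_text))
    source_str = " | ".join(f"{k}: {v}" for k, v in source_fields.items())
    return f"<hal={level}> {source_str}", target_text

def build_inference_input(source_fields):
    """At test time, always request the most faithful setting."""
    source_str = " | ".join(f"{k}: {v}" for k, v in source_fields.items())
    return f"<hal=0> {source_str}"
```

Because every example is kept and the model itself is unchanged, the sketch respects the two constraints the abstract names; the part it only approximates is how the per-example noise level is actually estimated.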
Related papers
- Pre-Training Multimodal Hallucination Detectors with Corrupted Grounding Data [4.636499986218049]
Multimodal language models can exhibit hallucinations in their outputs, which limits their reliability.
We propose an approach to improve the sample efficiency of these models by creating corrupted grounding data.
arXiv Detail & Related papers (2024-08-30T20:11:00Z)
- Text2Data: Low-Resource Data Generation with Textual Control [104.38011760992637]
Natural language serves as a common and straightforward control signal for humans to interact seamlessly with machines.
We propose Text2Data, a novel approach that utilizes unlabeled data to understand the underlying data distribution through an unsupervised diffusion model.
It undergoes controllable finetuning via a novel constraint optimization-based learning objective that ensures controllability and effectively counteracts catastrophic forgetting.
arXiv Detail & Related papers (2024-02-08T03:41:39Z)
- Generating Enhanced Negatives for Training Language-Based Object Detectors [86.1914216335631]
We propose to leverage the vast knowledge built into modern generative models to automatically build negatives that are more relevant to the original data.
Specifically, we use large language models to generate negative text descriptions, and text-to-image diffusion models to generate corresponding negative images.
Our experimental analysis confirms the relevance of the generated negative data, and its use in language-based detectors improves performance on two complex benchmarks.
arXiv Detail & Related papers (2023-12-29T23:04:00Z)
- Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation [5.304395026626743]
Hallucination of text ungrounded in the input is a well-known problem in neural data-to-text generation.
We propose a new way to mitigate hallucinations by combining the probabilistic output of a generator language model with the output of a special "text critic" (see the sketch after this list).
Our method does not need any changes to the underlying LM's architecture or training procedure.
arXiv Detail & Related papers (2023-10-25T20:05:07Z)
- Reducing Hallucinations in Neural Machine Translation with Feature Attribution [54.46113444757899]
We present a case study focusing on model understanding and regularisation to reduce hallucinations in NMT.
We first use feature attribution methods to study the behaviour of an NMT model that produces hallucinations.
We then leverage these methods to propose a novel loss function that substantially helps reduce hallucinations and does not require retraining the model from scratch.
arXiv Detail & Related papers (2022-11-17T20:33:56Z)
- A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation [50.55448707570669]
We propose a novel token-level, reference-free hallucination detection task and an associated annotated dataset named HaDes.
To create this dataset, we first perturb a large number of text segments extracted from English language Wikipedia, and then verify these with crowd-sourced annotations.
arXiv Detail & Related papers (2021-04-18T04:09:48Z)
- Controlling Hallucinations at Word Level in Data-to-Text Generation [10.59137381324694]
State-of-the-art neural models include misleading statements in their outputs.
We propose a Multi-Branch Decoder which is able to leverage word-level labels to learn the relevant parts of each training instance.
Our model is able to reduce and control hallucinations, while keeping fluency and coherence in generated texts.
arXiv Detail & Related papers (2021-02-04T18:58:28Z)
- Detecting Hallucinated Content in Conditional Neural Sequence Generation [165.68948078624499]
We propose a task to predict whether each token in the output sequence is hallucinated (not contained in the input).
We also introduce a method for learning to detect hallucinations using pretrained language models fine-tuned on synthetic data.
arXiv Detail & Related papers (2020-11-05T00:18:53Z)
- Unsupervised Opinion Summarization with Noising and Denoising [85.49169453434554]
We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof.
At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise.
arXiv Detail & Related papers (2020-04-21T16:54:57Z)
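The Critic-Driven Decoding entry above describes combining the probabilistic output of a generator language model with a "text critic" at decoding time. Below is a minimal greedy-decoding sketch of that kind of rescoring, assuming a critic that returns a probability that a candidate continuation stays grounded in the input; generator_step, critic_grounded_prob, and the log-linear weighting are illustrative assumptions, not the cited paper's exact formulation.

```python
import math

# Hedged sketch of critic-weighted decoding: the generator's next-token
# probabilities are rescored by a critic that estimates whether extending
# the prefix with each candidate token stays grounded in the input.
# ASSUMPTION: generator_step and critic_grounded_prob are placeholders
# for whatever generator and critic models are actually used.

def rescore_step(generator_step, critic_grounded_prob, source, prefix,
                 vocab, critic_weight=1.0):
    """Pick the next token greedily from the combined generator/critic score."""
    best_token, best_score = None, -math.inf
    probs = generator_step(source, prefix)          # dict: token -> p_LM(token)
    for token in vocab:
        p_lm = probs.get(token, 1e-12)
        p_ok = critic_grounded_prob(source, prefix + [token])  # value in (0, 1)
        score = math.log(p_lm) + critic_weight * math.log(max(p_ok, 1e-12))
        if score > best_score:
            best_token, best_score = token, score
    return best_token
```

In practice one would restrict the loop to the generator's top-k candidates so the critic is only queried a handful of times per step; that is the usual concession when the critic is itself a neural model.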