Do Androids Know They're Only Dreaming of Electric Sheep?
- URL: http://arxiv.org/abs/2312.17249v2
- Date: Sat, 8 Jun 2024 05:15:57 GMT
- Title: Do Androids Know They're Only Dreaming of Electric Sheep?
- Authors: Sky CH-Wang, Benjamin Van Durme, Jason Eisner, Chris Kedzie
- Abstract summary: We design probes trained on the internal representations of a transformer language model to predict its hallucinatory behavior.
Our probes are narrowly trained and we find that they are sensitive to their training domain.
We find that probing is a feasible and efficient alternative to language model hallucination evaluation when model states are available.
- Score: 45.513432353811474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We design probes trained on the internal representations of a transformer language model to predict its hallucinatory behavior on three grounded generation tasks. To train the probes, we annotate for span-level hallucination on both sampled (organic) and manually edited (synthetic) reference outputs. Our probes are narrowly trained and we find that they are sensitive to their training domain: they generalize poorly from one task to another or from synthetic to organic hallucinations. However, on in-domain data, they can reliably detect hallucinations at many transformer layers, achieving 95% of their peak performance as early as layer 4. Here, probing proves accurate for evaluating hallucination, outperforming several contemporary baselines and even surpassing an expert human annotator in response-level detection F1. Similarly, on span-level labeling, probes are on par or better than the expert annotator on two out of three generation tasks. Overall, we find that probing is a feasible and efficient alternative to language model hallucination evaluation when model states are available.
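The abstract describes probes trained on a transformer's internal representations to flag hallucinated spans, with near-peak accuracy available as early as layer 4. As a rough sketch only, the code below trains a hypothetical token-level linear probe on hidden states from an off-the-shelf causal LM; the model name, probe layer, optimizer, and label pipeline are illustrative assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative choices only; the paper's model, probe architecture, and
# annotation pipeline are not reproduced here.
MODEL_NAME = "gpt2"   # placeholder causal LM
PROBE_LAYER = 4       # abstract reports ~95% of peak performance by layer 4

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Linear probe: hidden state -> P(token lies inside a hallucinated span).
probe = nn.Linear(model.config.hidden_size, 1)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def hidden_states_at_layer(text: str, layer: int) -> torch.Tensor:
    """Return (seq_len, hidden_size) activations for one transformer layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer].squeeze(0)

def train_step(text: str, token_labels: torch.Tensor) -> float:
    """token_labels: float tensor of per-token 0/1 span annotations
    (1 = hallucinated), assumed already aligned to the tokenizer."""
    feats = hidden_states_at_layer(text, PROBE_LAYER)
    logits = probe(feats).squeeze(-1)
    loss = loss_fn(logits, token_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the hidden states would come from the same model whose hallucinations are being predicted, with the span labels aligned to its tokenizer, and the probe could be swept across layers to mirror the layer-wise comparison described in the abstract.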
Related papers
- HalluLens: LLM Hallucination Benchmark [49.170128733508335]
Large language models (LLMs) often generate responses that deviate from user input or training data, a phenomenon known as "hallucination".
This paper introduces a comprehensive hallucination benchmark, incorporating both new extrinsic and existing intrinsic evaluation tasks.
arXiv Detail & Related papers (2025-04-24T13:40:27Z) - Pre-Training Multimodal Hallucination Detectors with Corrupted Grounding Data [4.636499986218049]
Multimodal language models can exhibit hallucinations in their outputs, which limits their reliability.
We propose an approach to improve the sample efficiency of these models by creating corrupted grounding data.
arXiv Detail & Related papers (2024-08-30T20:11:00Z) - Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability [83.0884072598828]
Hallucinations come in many forms, and there is no universally accepted definition.
We focus on studying only those hallucinations where a correct answer appears verbatim in the training set.
We find that for a fixed dataset, larger and longer-trained LMs hallucinate less.
While detector size improves performance on a fixed LM's outputs, we find an inverse relationship between the scale of the LM and the detectability of its hallucinations.
arXiv Detail & Related papers (2024-08-14T23:34:28Z) - ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models [65.12177400764506]
Large language models (LLMs) exhibit hallucinations in long-form question-answering tasks across various domains and wide applications.
Current hallucination detection and mitigation datasets are limited in domain coverage and size.
This paper introduces an iterative self-training framework that simultaneously and progressively scales up the hallucination annotation dataset.
arXiv Detail & Related papers (2024-07-05T17:56:38Z) - Mitigating Large Language Model Hallucination with Faithful Finetuning [46.33663932554782]
Large language models (LLMs) have demonstrated remarkable performance on various natural language processing tasks.
They are prone to generating fluent yet untruthful responses, known as "hallucinations".
arXiv Detail & Related papers (2024-06-17T07:16:07Z) - AutoHallusion: Automatic Generation of Hallucination Benchmarks for Vision-Language Models [91.78328878860003]
Large vision-language models (LVLMs) are prone to hallucinations.
Existing benchmarks often rely on hand-crafted corner cases whose failure patterns may not generalize well.
We develop AutoHallusion, the first automated benchmark generation approach.
arXiv Detail & Related papers (2024-06-16T11:44:43Z) - Fine-grained Hallucination Detection and Editing for Language Models [109.56911670376932]
Large language models (LMs) are prone to generate factual errors, which are often called hallucinations.
We introduce a comprehensive taxonomy of hallucinations and argue that hallucinations manifest in diverse forms.
We propose a novel task of automatic fine-grained hallucination detection and construct a new evaluation benchmark, FavaBench.
arXiv Detail & Related papers (2024-01-12T19:02:48Z) - On Early Detection of Hallucinations in Factual Question Answering [4.76359068115052]
Hallucinations remain a major impediment to gaining user trust.
In this work, we explore whether the artifacts associated with the model's generations can provide hints that the generation will contain hallucinations.
Our results show that the distributions of these artifacts tend to differ between hallucinated and non-hallucinated generations (a rough illustration of such artifacts appears after this list).
arXiv Detail & Related papers (2023-12-19T14:35:04Z) - Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training [66.0036211069513]
Large-scale vision-language pre-trained models are prone to hallucinate non-existent visual objects when generating text.
We show that models achieving better scores on standard metrics could hallucinate objects more frequently.
Surprisingly, we find that patch-based features perform the best and smaller patch resolution yields a non-trivial reduction in object hallucination.
arXiv Detail & Related papers (2022-10-14T10:27:22Z) - Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation [17.102338932907294]
We set foundations for the study of NMT hallucinations.
We propose DeHallucinator, a simple method for alleviating hallucinations at test time.
arXiv Detail & Related papers (2022-08-10T12:44:13Z)
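For the early-detection entry above, the sketch below gives one possible reading of "generation artifacts": per-token log-probabilities and predictive entropies of an answer under a placeholder causal LM. The model name and the specific artifacts are assumptions for illustration; the cited paper's actual features and detector are not reproduced here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any causal LM works for this sketch

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def generation_artifacts(prompt: str, answer: str):
    """Score `answer` as a continuation of `prompt` and return
    per-token log-probabilities and predictive entropies."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits          # (1, seq_len, vocab)
    # Logits at position t predict the token at position t + 1,
    # so the answer region starts one step before the answer tokens.
    start = prompt_ids.shape[1] - 1
    answer_logits = logits[0, start:start + answer_ids.shape[1]]
    log_probs = torch.log_softmax(answer_logits, dim=-1)
    token_logps = log_probs.gather(-1, answer_ids[0].unsqueeze(-1)).squeeze(-1)
    entropies = -(log_probs.exp() * log_probs).sum(-1)
    return token_logps, entropies

# Example: compare artifact statistics across candidate answers.
logps, ents = generation_artifacts("Q: Who wrote Hamlet? A:", " William Shakespeare")
print(logps.mean().item(), ents.mean().item())
```

Simple statistics over these values (e.g., mean token log-probability) could then feed a downstream detector, in line with the observation that the artifact distributions differ between hallucinated and non-hallucinated generations.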