Sources of Hallucination by Large Language Models on Inference Tasks
- URL: http://arxiv.org/abs/2305.14552v2
- Date: Sun, 22 Oct 2023 21:22:29 GMT
- Title: Sources of Hallucination by Large Language Models on Inference Tasks
- Authors: Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark
Johnson, Mark Steedman
- Abstract summary: Large Language Models (LLMs) are claimed to be capable of Natural Language Inference (NLI).
We present a series of behavioral studies on several LLM families which probe their behavior using controlled experiments.
- Score: 16.644096408742325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) are claimed to be capable of Natural Language
Inference (NLI), necessary for applied tasks like question answering and
summarization. We present a series of behavioral studies on several LLM
families (LLaMA, GPT-3.5, and PaLM) which probe their behavior using controlled
experiments. We establish two biases originating from pretraining which predict
much of their behavior, and show that these are major sources of hallucination
in generative LLMs. First, memorization at the level of sentences: we show
that, regardless of the premise, models falsely label NLI test samples as
entailing when the hypothesis is attested in training data, and that entities
are used as "indices" to access the memorized data. Second, statistical
patterns of usage learned at the level of corpora: we further show a similar
effect when the premise predicate is less frequent than that of the hypothesis
in the training data, a bias following from previous studies. We demonstrate
that LLMs perform significantly worse on NLI test samples which do not conform
to these biases than those which do, and we offer these as valuable controls
for future LLM evaluation.
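The first bias lends itself to a small controlled probe. Below is a minimal sketch, not the authors' released code: `query_llm`, the prompt template, and the example sentences are illustrative placeholders. It pairs a hypothesis that is likely attested in pretraining data with both an entailing and an unrelated premise; a model that answers "yes" in both cases is behaving as the memorization account predicts, since only the premise differs.

```python
# Minimal sketch of an attestation-controlled NLI probe, in the spirit of the
# paper's first experiment.  `query_llm` is a hypothetical placeholder for a
# real completion API (LLaMA, GPT-3.5, PaLM, ...); the sentences are examples.

def query_llm(prompt: str) -> str:
    """Placeholder: replace with a call to the model under test."""
    return "yes"  # dummy answer so the sketch runs end to end

def nli_prompt(premise: str, hypothesis: str) -> str:
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Given only the premise, is the hypothesis true? Answer yes or no."
    )

# The hypothesis is a widely attested fact, so it is plausibly memorized
# (verbatim or near-verbatim) during pretraining.
hypothesis = "Barack Obama was born in Hawaii."
premises = {
    "entailing": "Barack Obama was born in Honolulu, Hawaii.",
    "unrelated": "Barack Obama served two terms as US president.",
}

for condition, premise in premises.items():
    answer = query_llm(nli_prompt(premise, hypothesis)).strip().lower()
    print(f"{condition:>10} premise -> model answers {answer!r}")
    # A "yes" under the unrelated premise suggests the attested hypothesis,
    # not the premise, is driving the prediction (sentence-level memorization).
```

Running the same template over pairs whose hypothesis is versus is not attested, and over pairs where the premise predicate is more versus less frequent than the hypothesis predicate, yields the two control conditions the abstract offers for future LLM evaluation.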
Related papers
- Hypothesis-only Biases in Large Language Model-Elicited Natural Language Inference [3.0804372027733202]
We recreate a portion of the Stanford NLI corpus using GPT-4, Llama-2 and Mistral 7b.
We train hypothesis-only classifiers to determine whether LLM-elicited hypotheses contain annotation artifacts.
Our analysis provides empirical evidence that well-attested biases in NLI can persist in LLM-generated data.
arXiv Detail & Related papers (2024-10-11T17:09:22Z)
- What Do Language Models Learn in Context? The Structured Task Hypothesis [89.65045443150889]
Large language models (LLMs) learn a novel task from in-context examples presented in a demonstration, termed in-context learning (ICL).
One popular hypothesis explains ICL by task selection.
Another popular hypothesis is that ICL is a form of meta-learning, i.e., the models learn a learning algorithm at pre-training time and apply it to the demonstration.
arXiv Detail & Related papers (2024-06-06T16:15:34Z)
- Analyzing LLM Behavior in Dialogue Summarization: Unveiling Circumstantial Hallucination Trends [38.86240794422485]
We evaluate the faithfulness of large language models for dialogue summarization.
Our evaluation reveals subtleties as to what constitutes a hallucination.
We introduce two prompt-based approaches for fine-grained error detection that outperform existing metrics.
arXiv Detail & Related papers (2024-06-05T17:49:47Z)
- Detecting Hallucinations in Large Language Model Generation: A Token Probability Approach [0.0]
Large Language Models (LLMs) produce inaccurate outputs, also known as hallucinations.
This paper introduces a supervised learning approach employing only four numerical features derived from token and vocabulary probabilities obtained from other evaluators.
The method yields promising results, surpassing state-of-the-art outcomes on multiple tasks across three different benchmarks (a hedged sketch of this kind of probability-feature classifier appears after the related-papers list).
arXiv Detail & Related papers (2024-05-30T03:00:47Z)
- Understanding Privacy Risks of Embeddings Induced by Large Language Models [75.96257812857554]
Large language models show early signs of artificial general intelligence but struggle with hallucinations.
One promising solution is to store external knowledge as embeddings, aiding LLMs in retrieval-augmented generation.
Recent studies experimentally showed that the original text can be partially reconstructed from text embeddings by pre-trained language models.
arXiv Detail & Related papers (2024-04-25T13:10:48Z)
- Exploring Value Biases: How LLMs Deviate Towards the Ideal [57.99044181599786]
Large Language Models (LLMs) are deployed in a wide range of applications, and their responses have an increasing social impact.
We show that value bias is strong in LLMs across different categories, similar to the results found in human studies.
arXiv Detail & Related papers (2024-02-16T18:28:43Z)
- Are Large Language Models Reliable Judges? A Study on the Factuality Evaluation Capabilities of LLMs [8.526956860672698]
Large Language Models (LLMs) have gained immense attention due to their notable emergent capabilities.
This study investigates the potential of LLMs as reliable assessors of factual consistency in summaries generated by text-generation models.
arXiv Detail & Related papers (2023-11-01T17:42:45Z)
- ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks [91.55895047448249]
This paper presents ReEval, an LLM-based framework using prompt chaining to perturb the original evidence for generating new test cases.
We implement ReEval using ChatGPT and evaluate the resulting variants of two popular open-domain QA datasets.
Our generated data is human-readable and useful to trigger hallucination in large language models.
arXiv Detail & Related papers (2023-10-19T06:37:32Z)
- Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning [104.58874584354787]
In recent years, pre-trained large language models (LLMs) have demonstrated remarkable efficiency in achieving an inference-time few-shot learning capability known as in-context learning.
This study aims to examine the in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs as latent variable models.
arXiv Detail & Related papers (2023-01-27T18:59:01Z)
- Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence [45.9949173746044]
We show that large-size pre-trained language models (PLMs) do not satisfy the logical negation property (LNP).
We propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence.
We find that the task enables PLMs to learn lexical semantic information.
arXiv Detail & Related papers (2022-05-08T08:37:36Z)
- Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little [74.49773960145681]
A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent in NLP pipelines.
In this paper, we propose a different explanation: pre-trained models succeed on downstream tasks almost entirely due to their ability to model higher-order word co-occurrence statistics.
Our results show that purely distributional information largely explains the success of pre-training, and underscore the importance of curating challenging evaluation datasets that require deeper linguistic knowledge.
arXiv Detail & Related papers (2021-04-14T06:30:36Z)
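As a concrete illustration of the token-probability hallucination detector summarized in the list above (Detecting Hallucinations in Large Language Model Generation: A Token Probability Approach), here is a minimal sketch. The paper's exact four features are not named in the summary, so the statistics below (mean and minimum token log-probability, maximum token probability, mean vocabulary entropy), the synthetic data, and the labels are assumptions, not the authors' method.

```python
# Illustrative sketch only: a small supervised classifier over four numerical
# features computed from token/vocabulary probabilities.  The feature choices
# and the synthetic data are assumptions for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression

def probability_features(token_logprobs: np.ndarray, vocab_probs: np.ndarray) -> np.ndarray:
    """Four summary statistics of one generation's token/vocabulary probabilities."""
    mean_entropy = -(vocab_probs * np.log(vocab_probs + 1e-12)).sum(axis=-1).mean()
    return np.array([
        token_logprobs.mean(),         # average confidence across the generation
        token_logprobs.min(),          # the single least-confident token
        np.exp(token_logprobs).max(),  # most-confident token probability
        mean_entropy,                  # average vocabulary-level uncertainty
    ])

# Synthetic stand-in data: 200 "generations" of 30 tokens over a 50-word vocabulary.
rng = np.random.default_rng(0)
X = np.stack([
    probability_features(rng.normal(-2.0, 1.0, size=30),
                         rng.dirichlet(np.ones(50), size=30))
    for _ in range(200)
])
y = rng.integers(0, 2, size=200)  # 1 = hallucinated, 0 = faithful (synthetic labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("accuracy on the synthetic training set:", clf.score(X, y))
```

Because the features only summarize the generator's own confidence signals, a detector of this shape is cheap to run on top of any model that exposes token probabilities.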
This list is automatically generated from the titles and abstracts of the papers on this site.