Hallucination is Inevitable: An Innate Limitation of Large Language Models
- URL: http://arxiv.org/abs/2401.11817v2
- Date: Thu, 13 Feb 2025 08:11:25 GMT
- Title: Hallucination is Inevitable: An Innate Limitation of Large Language Models
- Authors: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli
- Abstract summary: We show that it is impossible to eliminate hallucination in large language models.
Since the formal world is part of the real world, which is much more complicated, hallucinations are also inevitable for real-world LLMs.
- Score: 3.4444349898613957
- Abstract: Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question of whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world in which hallucination is characterized as inconsistencies between a computable LLM and a computable ground-truth function. By employing results from learning theory, we show that LLMs cannot learn all the computable functions and will therefore inevitably hallucinate if used as general problem solvers. Since the formal world is a part of the real world, which is much more complicated, hallucinations are also inevitable for real-world LLMs. Furthermore, for real-world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications for the safe deployment of LLMs.
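The impossibility argument is, at its core, a diagonalization from learning theory: for any computable enumeration of LLMs, one can construct a computable ground-truth function that disagrees with every enumerated LLM on at least one input, so every LLM in the enumeration hallucinates somewhere. The following is a minimal toy sketch of that construction; the enumeration `llms` and the function `ground_truth` are hypothetical illustrations, not code or definitions taken from the paper.

```python
from typing import Callable, List

# A toy enumeration of candidate "LLMs": each maps an input string to an
# output string (stand-ins for computable language models).
llms: List[Callable[[str], str]] = [
    lambda s: s,          # echoes the input
    lambda s: s[::-1],    # reverses the input
    lambda s: s.upper(),  # uppercases the input
]

def ground_truth(s: str) -> str:
    """Diagonal ground truth: on the input naming index i, answer something
    the i-th LLM does not say (appending "#" guarantees a mismatch)."""
    i = int(s)
    return llms[i](s) + "#"

# Every enumerated LLM is wrong (i.e., hallucinates) on at least one input:
# the one naming its own index.
for i, llm in enumerate(llms):
    s = str(i)
    assert llm(s) != ground_truth(s)
    print(f"LLM {i} on {s!r}: said {llm(s)!r}, truth is {ground_truth(s)!r}")
```

Here the diagonal ground truth is only defined on inputs that name an index; the paper's formal construction covers all inputs, but the disagreement-by-construction idea is the same.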
Related papers
- LLMs Will Always Hallucinate, and We Need to Live With This [1.3810901729134184]
This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems.
It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms.
arXiv Detail & Related papers (2024-09-09T16:01:58Z)
- WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries [64.239202960816]
We introduce WildHallucinations, a benchmark that evaluates factuality.
It does so by prompting large language models to generate information about entities mined from user-chatbot conversations in the wild.
We evaluate 118,785 generations from 15 LLMs on 7,919 entities.
arXiv Detail & Related papers (2024-07-24T17:59:05Z)
- Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? [53.89380284760555]
Large vision-language models (LVLMs) produce captions that mention concepts that cannot be found in the image.
These hallucinations erode the trustworthiness of LVLMs and are arguably among the main obstacles to their ubiquitous adoption.
Recent work suggests that the addition of grounding objectives -- those that explicitly align image regions or objects to text spans -- reduces the amount of LVLM hallucination.
arXiv Detail & Related papers (2024-06-20T16:56:11Z)
- Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States [19.343629282494774]
Large Language Models (LLMs) can make up answers that are not factual, a phenomenon known as hallucination.
This research aims to see if, how, and to what extent LLMs are aware of hallucination.
arXiv Detail & Related papers (2024-02-15T06:14:55Z)
- Hallucination Detection and Hallucination Mitigation: An Investigation [13.941799495842776]
Large language models (LLMs) have achieved remarkable successes over the last two years in a range of different applications.
This report aims to present a comprehensive review of the current literature on both hallucination detection and hallucination mitigation.
arXiv Detail & Related papers (2024-01-16T13:36:07Z)
- The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models [134.6697160940223]
Hallucination poses a great challenge to the trustworthy and reliable deployment of large language models.
Three key questions should be well studied: how to detect hallucinations (detection), why LLMs hallucinate (source), and what can be done to mitigate them (mitigation).
This work presents a systematic empirical study on LLM hallucination, focused on the three aspects of hallucination detection, source, and mitigation.
arXiv Detail & Related papers (2024-01-06T12:40:45Z)
- A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions [40.79317187623401]
The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP).
LLMs are prone to hallucination, generating plausible yet nonfactual content.
This phenomenon raises significant concerns over the reliability of LLMs in real-world information retrieval systems.
arXiv Detail & Related papers (2023-11-09T09:25:37Z)
- Analyzing and Mitigating Object Hallucination in Large Vision-Language Models [110.12460299261531]
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages.
LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images.
We propose a powerful algorithm, LVLM Hallucination Revisor (LURE), to rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions (a generic sketch of such a post-hoc revisor appears after this list).
arXiv Detail & Related papers (2023-10-01T18:10:53Z)
- Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models [116.01843550398183]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks.
LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
arXiv Detail & Related papers (2023-09-03T16:56:48Z)
- Evaluation and Analysis of Hallucination in Large Vision-Language Models [49.19829480199372]
Large Vision-Language Models (LVLMs) have recently achieved remarkable success.
LVLMs are still plagued by the hallucination problem.
Hallucination refers to information in LVLMs' responses that does not exist in the visual input.
arXiv Detail & Related papers (2023-08-29T08:51:24Z)
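As referenced in the LURE entry above, post-hoc revision of captions is one generic way to reduce object hallucination. The sketch below is a hypothetical illustration of that general idea (drop caption content that mentions objects not grounded in the image), not the LURE algorithm itself; the function `revise_caption` and its inputs are assumptions made for the example, with the set of grounded objects presumed to come from an external detector or annotation.

```python
import re
from typing import Set

def revise_caption(caption: str, grounded_objects: Set[str],
                   vocabulary: Set[str]) -> str:
    """Keep only the caption sentences whose object mentions are all grounded.

    `vocabulary` is the set of object nouns to check; a sentence mentioning a
    vocabulary noun that is absent from `grounded_objects` is treated as
    hallucinated and dropped.
    """
    kept = []
    for sentence in re.split(r"(?<=[.!?])\s+", caption.strip()):
        words = {w.lower().strip(".,!?") for w in sentence.split()}
        if (words & vocabulary) <= grounded_objects:  # no ungrounded mention
            kept.append(sentence)
    return " ".join(kept)

# Usage: "frisbee" is not among the grounded objects, so its sentence is dropped.
caption = "A dog runs on the grass. It is chasing a frisbee."
print(revise_caption(caption,
                     grounded_objects={"dog", "grass"},
                     vocabulary={"dog", "grass", "frisbee", "cat"}))
# -> "A dog runs on the grass."
```

Sentence-level filtering is the bluntest possible revisor; a learned revisor would instead rewrite the description, but the grounding check it relies on is the same.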