The Dawn After the Dark: An Empirical Study on Factuality Hallucination
in Large Language Models
- URL: http://arxiv.org/abs/2401.03205v1
- Date: Sat, 6 Jan 2024 12:40:45 GMT
- Title: The Dawn After the Dark: An Empirical Study on Factuality Hallucination
in Large Language Models
- Authors: Junyi Li, Jie Chen, Ruiyang Ren, Xiaoxue Cheng, Wayne Xin Zhao,
Jian-Yun Nie and Ji-Rong Wen
- Abstract summary: Hallucination poses a great challenge to the trustworthy and reliable deployment of large language models.
Three key questions should be studied: how to detect hallucinations (detection), why LLMs hallucinate (source), and what can be done to mitigate them (mitigation).
This work presents a systematic empirical study on LLM hallucination, focused on the three aspects of hallucination detection, source, and mitigation.
- Score: 134.6697160940223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the era of large language models (LLMs), hallucination (i.e., the tendency
to generate factually incorrect content) poses a great challenge to the trustworthy
and reliable deployment of LLMs in real-world applications. To tackle LLM
hallucination, three key questions should be studied: how to detect
hallucinations (detection), why LLMs hallucinate (source), and what can be
done to mitigate them (mitigation). To address these challenges, this work
presents a systematic empirical study on LLM hallucination, focused on the
three aspects of hallucination detection, source, and mitigation. Specifically, we
construct a new hallucination benchmark, HaluEval 2.0, and design a simple yet
effective detection method for LLM hallucination. Furthermore, we zoom into the
different training and utilization stages of LLMs and extensively analyze the
potential factors that lead to LLM hallucination. Finally, we implement and
examine a series of widely used techniques to mitigate hallucinations in
LLMs. Our work yields several important findings for understanding the origins of
hallucination and mitigating hallucinations in LLMs. Our code and data
can be accessed at https://github.com/RUCAIBox/HaluEval-2.0.
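The abstract does not spell out the detection method itself; as a rough illustration of the kind of prompt-based pipeline this line of work typically uses, the Python sketch below extracts atomic claims from a response and asks an LLM judge to verify each one. The `query_llm` stub, the prompt templates, and the extract-then-judge split are illustrative assumptions, not the paper's exact HaluEval 2.0 procedure.

```python
# Minimal sketch of a two-step, prompt-based hallucination detector (assumed design):
# (1) extract atomic factual claims from a model response,
# (2) ask an LLM judge to label each claim as supported or hallucinated.
from typing import List

def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an instruction-tuned LLM and return its text reply."""
    raise NotImplementedError("plug in your LLM client here")

def extract_claims(question: str, response: str) -> List[str]:
    """Ask the LLM to break a response into atomic factual claims, one per line."""
    prompt = (
        "List the atomic factual claims made in the answer below, one per line.\n"
        f"Question: {question}\nAnswer: {response}\nClaims:"
    )
    return [line.strip("- ").strip() for line in query_llm(prompt).splitlines() if line.strip()]

def is_hallucinated(claim: str) -> bool:
    """Ask the LLM judge whether a single claim is factually correct; True means hallucinated."""
    prompt = (
        "Is the following claim factually correct? Reply with exactly 'yes' or 'no'.\n"
        f"Claim: {claim}\nVerdict:"
    )
    return query_llm(prompt).strip().lower().startswith("no")

def hallucination_rate(question: str, response: str) -> float:
    """Fraction of extracted claims judged hallucinated (0.0 if no claims are found)."""
    claims = extract_claims(question, response)
    return sum(is_hallucinated(c) for c in claims) / len(claims) if claims else 0.0
```

Averaging per-claim verdicts into a rate makes responses of different lengths comparable, which is convenient when scoring a benchmark of varied questions.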
Related papers
- MedHalu: Hallucinations in Responses to Healthcare Queries by Large Language Models [26.464489158584463]
We conduct a pioneering study of hallucinations in LLM-generated responses to real-world healthcare queries from patients.
We propose MedHalu, a carefully crafted first-of-its-kind medical hallucination dataset with a diverse range of health-related topics.
We also introduce the MedHaluDetect framework to evaluate the capabilities of various LLMs in detecting hallucinations.
arXiv Detail & Related papers (2024-09-29T00:09:01Z)
- Look Within, Why LLMs Hallucinate: A Causal Perspective [16.874588396996764]
Large language models (LLMs) are a milestone in generative artificial intelligence, achieving significant success in text comprehension and generation tasks.
LLMs suffer from severe hallucination problems, posing significant challenges to their practical applications.
We propose a method to intervene in LLMs' self-attention layers while keeping their structures and sizes intact.
arXiv Detail & Related papers (2024-07-14T10:47:44Z)
- Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States [19.343629282494774]
Large Language Models (LLMs) can make up answers that are not real, a phenomenon known as hallucination.
This research examines whether, how, and to what extent LLMs are aware of hallucination.
arXiv Detail & Related papers (2024-02-15T06:14:55Z)
- Hallucination Detection and Hallucination Mitigation: An Investigation [13.941799495842776]
Large language models (LLMs) have achieved remarkable successes over the last two years in a range of different applications.
This report aims to present a comprehensive review of the current literature on both hallucination detection and hallucination mitigation.
arXiv Detail & Related papers (2024-01-16T13:36:07Z)
- A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions [40.79317187623401]
The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP).
LLMs are prone to hallucination, generating plausible yet nonfactual content.
This phenomenon raises significant concerns over the reliability of LLMs in real-world information retrieval systems.
arXiv Detail & Related papers (2023-11-09T09:25:37Z)
- Analyzing and Mitigating Object Hallucination in Large Vision-Language Models [110.12460299261531]
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages.
LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images.
We propose a powerful algorithm, LVLM Hallucination Revisor (LURE), to rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions.
arXiv Detail & Related papers (2023-10-01T18:10:53Z)
- Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models [116.01843550398183]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks.
LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
arXiv Detail & Related papers (2023-09-03T16:56:48Z)
- Evaluation and Analysis of Hallucination in Large Vision-Language Models [49.19829480199372]
Large Vision-Language Models (LVLMs) have recently achieved remarkable success.
LVLMs are still plagued by the hallucination problem.
Hallucination refers to information in LVLMs' responses that does not exist in the visual input.
arXiv Detail & Related papers (2023-08-29T08:51:24Z)
- HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models [146.87696738011712]
Large language models (LLMs) are prone to generating hallucinations, i.e., content that conflicts with the source or cannot be verified against factual knowledge.
To understand what types of content, and to what extent, LLMs are apt to hallucinate, we introduce the Hallucination Evaluation benchmark for Large Language Models (HaluEval).
arXiv Detail & Related papers (2023-05-19T15:36:27Z)
- Evaluating Object Hallucination in Large Vision-Language Models [122.40337582958453]
This work presents the first systematic study on object hallucination of large vision-language models (LVLMs).
We find that LVLMs tend to generate objects that are inconsistent with the target images in the descriptions.
We propose a polling-based query method called POPE to evaluate object hallucination.
arXiv Detail & Related papers (2023-05-17T16:34:01Z)
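As a concrete reading of the polling idea described in the last entry above, the sketch below asks an LVLM yes/no questions about objects that are and are not annotated for an image and scores the answers. The `ask_lvlm` helper, the question template, and the (accuracy, yes-rate) metrics are assumptions for illustration, not POPE's official protocol.

```python
# Minimal sketch of polling-style object-hallucination evaluation (assumed protocol):
# query the LVLM with yes/no questions about present and absent objects, then score.
from typing import Iterable, Tuple

def ask_lvlm(image_path: str, question: str) -> str:
    """Placeholder: return the LVLM's free-text answer to a question about the image."""
    raise NotImplementedError("plug in your LVLM client here")

def poll_objects(image_path: str,
                 present: Iterable[str],
                 absent: Iterable[str]) -> Tuple[float, float]:
    """Poll yes/no questions about objects; return (accuracy, yes_rate)."""
    correct, yes_count, total = 0, 0, 0
    # Present objects should be answered "yes"; sampled absent objects should be "no".
    for obj, label in [(o, True) for o in present] + [(o, False) for o in absent]:
        answer = ask_lvlm(image_path, f"Is there a {obj} in the image? Answer yes or no.")
        said_yes = answer.strip().lower().startswith("yes")
        correct += int(said_yes == label)
        yes_count += int(said_yes)
        total += 1
    if total == 0:
        return 0.0, 0.0
    return correct / total, yes_count / total
```

Reporting the yes-rate alongside accuracy helps distinguish genuine visual grounding from a model's bias toward answering "yes" to every question.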