Cognitive Mirage: A Review of Hallucinations in Large Language Models
- URL: http://arxiv.org/abs/2309.06794v1
- Date: Wed, 13 Sep 2023 08:33:09 GMT
- Title: Cognitive Mirage: A Review of Hallucinations in Large Language Models
- Authors: Hongbin Ye, Tong Liu, Aijia Zhang, Wei Hua, Weiqiang Jia
- Abstract summary: We present a novel taxonomy of hallucinations from various text generation tasks.
We provide theoretical insights, detection methods and improvement approaches.
As hallucinations garner significant attention, we will maintain updates on relevant research progress.
- Score: 10.86850565303067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As large language models continue to develop in the field of AI, text
generation systems are susceptible to a worrisome phenomenon known as
hallucination. In this study, we summarize recent compelling insights into
hallucinations in LLMs. We present a novel taxonomy of hallucinations from
various text generation tasks, and we provide theoretical insights, detection
methods, and improvement approaches. Based on this, future research directions
are proposed. Our contributions are threefold: (1) We provide a detailed and
complete taxonomy for hallucinations appearing in text generation tasks; (2) We
provide theoretical analyses of hallucinations in LLMs and provide existing
detection and improvement methods; (3) We propose several research directions
that can be developed in the future. As hallucinations garner significant
attention from the community, we will maintain updates on relevant research
progress.
Related papers
- VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models [59.05674402770661]
This work introduces VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs).
VideoHallucer categorizes hallucinations into two main types: intrinsic and extrinsic, offering further subcategories for detailed analysis.
arXiv Detail & Related papers (2024-06-24T06:21:59Z)
- Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback [48.065569871444275]
We propose detecting and mitigating hallucinations in Large Vision Language Models (LVLMs) via fine-grained AI feedback.
We generate a small-scale hallucination annotation dataset using proprietary models.
Then, we propose a detect-then-rewrite pipeline to automatically construct a preference dataset for training a hallucination-mitigating model (a hedged sketch of such a pipeline follows below).
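A minimal sketch of how a detect-then-rewrite pipeline of this kind could assemble preference pairs. The `detect` and `rewrite` callables here are hypothetical placeholders for the detector and rewriting models; they are not APIs from the paper.

```python
# Hedged sketch: build preference pairs by detecting hallucinated spans in a
# response and rewriting them, keeping (rewritten, original) as (chosen, rejected).
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # rewritten response with hallucinated spans corrected
    rejected: str  # original response containing hallucinations

def build_preference_dataset(
    samples: List[Tuple[str, str]],                 # (prompt, model response)
    detect: Callable[[str, str], List[str]],        # -> hallucinated spans, possibly empty
    rewrite: Callable[[str, str, List[str]], str],  # -> corrected response
) -> List[PreferencePair]:
    pairs: List[PreferencePair] = []
    for prompt, response in samples:
        spans = detect(prompt, response)
        if not spans:
            continue  # nothing detected, so no preference signal from this sample
        fixed = rewrite(prompt, response, spans)
        pairs.append(PreferencePair(prompt=prompt, chosen=fixed, rejected=response))
    return pairs
```

Keeping the original response as the rejected sample and the rewritten one as the chosen sample matches the usual input format for preference-based fine-tuning.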
arXiv Detail & Related papers (2024-04-22T14:46:10Z)
- Can We Catch the Elephant? A Survey of the Evolvement of Hallucination Evaluation on Natural Language Generation [15.67906403625006]
The evaluation system for hallucination is complex and diverse, lacking clear organization.
This survey aims to help researchers identify current limitations in hallucination evaluation and highlight future research directions.
arXiv Detail & Related papers (2024-04-18T09:52:18Z)
- A Survey on Hallucination in Large Vision-Language Models [18.540878498840435]
Large Vision-Language Models (LVLMs) have attracted growing attention within the AI landscape for their practical implementation potential.
However, hallucination, or more specifically, the misalignment between factual visual content and the corresponding textual generation, poses a significant challenge to utilizing LVLMs.
We dissect LVLM-related hallucinations in an attempt to establish an overview and facilitate future mitigation.
arXiv Detail & Related papers (2024-02-01T00:33:21Z)
- Fine-grained Hallucination Detection and Editing for Language Models [109.56911670376932]
Large language models (LMs) are prone to generating factual errors, which are often called hallucinations.
We introduce a comprehensive taxonomy of hallucinations and argue that hallucinations manifest in diverse forms.
We propose a novel task of automatic fine-grained hallucination detection and construct a new evaluation benchmark, FavaBench.
arXiv Detail & Related papers (2024-01-12T19:02:48Z)
- Alleviating Hallucinations of Large Language Models through Induced Hallucinations [67.35512483340837]
Large language models (LLMs) have been observed to generate responses that include inaccurate or fabricated information.
We propose a simple Induce-then-Contrast Decoding (ICD) strategy to alleviate hallucinations (a minimal decoding sketch is given below).
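A minimal sketch of a contrastive decoding step in the spirit of ICD, assuming a factual base model and a hallucination-prone "induced" model are both available as callables that return next-token logits; the penalty weight and plausibility cutoff are illustrative choices, not the paper's settings.

```python
# Hedged sketch: score candidate tokens by the base model's log-probabilities
# minus a weighted penalty from the hallucination-prone "induced" model,
# restricted to tokens the base model itself finds plausible.
from typing import Callable
import torch

NextTokenLogits = Callable[[torch.Tensor], torch.Tensor]  # input_ids -> logits over the vocabulary

def contrast_step(base: NextTokenLogits,
                  induced: NextTokenLogits,
                  input_ids: torch.Tensor,
                  alpha: float = 1.0,
                  plausibility: float = 0.1) -> int:
    """Choose the next token by contrasting base and induced log-probabilities."""
    base_logp = torch.log_softmax(base(input_ids), dim=-1)
    induced_logp = torch.log_softmax(induced(input_ids), dim=-1)
    # Keep only tokens to which the base model gives at least `plausibility`
    # times the probability of its top token.
    cutoff = base_logp.max() + torch.log(torch.tensor(plausibility))
    scores = base_logp - alpha * induced_logp
    scores = scores.masked_fill(base_logp < cutoff, float("-inf"))
    return int(scores.argmax())
```

The plausibility mask keeps the contrast from promoting tokens the base model itself considers unlikely, a common safeguard in contrastive decoding.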
arXiv Detail & Related papers (2023-12-25T12:32:49Z)
- Hallucination Augmented Contrastive Learning for Multimodal Large Language Model [53.65682783591723]
Multi-modal large language models (MLLMs) have been shown to efficiently integrate natural language with visual information to handle multi-modal tasks.
However, MLLMs still face a fundamental limitation of hallucinations, where they tend to generate erroneous or fabricated information.
In this paper, we address hallucinations in MLLMs from a novel perspective of representation learning.
arXiv Detail & Related papers (2023-12-12T04:05:15Z)
- A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions [42.007305423982515]
Large language models (LLMs) produce hallucinations, resulting in content inconsistent with real-world facts or user inputs.
This survey aims to provide a thorough and in-depth overview of recent advances in the field of LLM hallucinations.
arXiv Detail & Related papers (2023-11-09T09:25:37Z)
- Survey of Hallucination in Natural Language Generation [69.9926849848132]
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies.
Deep learning based generation is prone to hallucinate unintended text, which degrades the system performance.
This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
arXiv Detail & Related papers (2022-02-08T03:55:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.