Redefining "Hallucination" in LLMs: Towards a psychology-informed
framework for mitigating misinformation
- URL: http://arxiv.org/abs/2402.01769v1
- Date: Thu, 1 Feb 2024 03:01:11 GMT
- Title: Redefining "Hallucination" in LLMs: Towards a psychology-informed
framework for mitigating misinformation
- Authors: Elijah Berberette, Jack Hutchins, Amir Sadovnik
- Abstract summary: We propose a psychological taxonomy based on cognitive biases and other psychological phenomena.
By leveraging insights from how humans internally resolve similar challenges, we aim to develop strategies to mitigate hallucinations.
- Score: 0.7826806223782052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, large language models (LLMs) have become incredibly popular,
with ChatGPT for example being used by over a billion users. While these models
exhibit remarkable language understanding and logical prowess, a notable
challenge surfaces in the form of "hallucinations." This phenomenon results in
LLMs outputting misinformation in a confident manner, which can lead to
devastating consequences with such a large user base. However, we question the
appropriateness of the term "hallucination" in LLMs, proposing a psychological
taxonomy based on cognitive biases and other psychological phenomena. Our
approach offers a more fine-grained understanding of this phenomenon, allowing
for targeted solutions. By leveraging insights from how humans internally
resolve similar challenges, we aim to develop strategies to mitigate LLM
hallucinations. This interdisciplinary approach seeks to move beyond
conventional terminology, providing a nuanced understanding and actionable
pathways for improvement in LLM reliability.
Related papers
- Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models [0.0]
Hallucinations in large language models (LLMs) present a growing challenge across real-world applications.
We propose a prompt-based framework to systematically trigger and quantify hallucination (a minimal probe in this spirit is sketched after this list).
arXiv Detail & Related papers (2025-05-01T14:33:47Z)
- Delusions of Large Language Models [62.43923767408462]
Large Language Models often generate factually incorrect but plausible outputs, known as hallucinations.
We identify a more insidious phenomenon, LLM delusion, defined as high-belief hallucinations: incorrect outputs held with abnormally high confidence, which makes them harder to detect and mitigate (a confidence-based flagging sketch appears after this list).
arXiv Detail & Related papers (2025-03-09T17:59:16Z)
- Combating Multimodal LLM Hallucination via Bottom-Up Holistic Reasoning [151.4060202671114]
Multimodal large language models (MLLMs) have shown unprecedented capabilities in advancing vision-language tasks.
This paper introduces a novel bottom-up reasoning framework to address hallucinations in MLLMs.
Our framework systematically addresses potential issues in both visual and textual inputs by verifying and integrating perception-level information with cognition-level commonsense knowledge.
arXiv Detail & Related papers (2024-12-15T09:10:46Z)
- A Survey of Hallucination in Large Visual Language Models [48.794850395309076]
The existence of hallucinations has limited the potential and practical effectiveness of LVLMs in various fields.
The structure of LVLMs and the main causes of hallucination generation are introduced.
The available hallucination evaluation benchmarks for LVLMs are presented.
arXiv Detail & Related papers (2024-10-20T10:58:58Z)
- Look Within, Why LLMs Hallucinate: A Causal Perspective [16.874588396996764]
Large language models (LLMs) are a milestone in generative artificial intelligence, achieving significant success in text comprehension and generation tasks.
LLMs suffer from severe hallucination problems, posing significant challenges to their practical applications.
We propose a method to intervene in LLMs' self-attention layers while keeping their structures and sizes intact (a generic hook-based intervention is sketched after this list).
arXiv Detail & Related papers (2024-07-14T10:47:44Z)
- The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models [134.6697160940223]
Hallucination poses a great challenge to the trustworthy and reliable deployment of large language models.
Three key questions should be well studied: how to detect hallucinations (detection), why LLMs hallucinate (source), and what can be done to mitigate them (mitigation).
This work presents a systematic empirical study on LLM hallucination, focused on the three aspects of hallucination detection, source, and mitigation.
arXiv Detail & Related papers (2024-01-06T12:40:45Z)
- A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models [7.705767540805267]
Large Language Models (LLMs) continue to advance in their ability to write human-like text.
A key challenge remains their tendency to hallucinate: generating content that appears factual but is ungrounded.
This paper presents a survey of over 32 techniques developed to mitigate hallucination in LLMs.
arXiv Detail & Related papers (2024-01-02T17:56:30Z)
- A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions [40.79317187623401]
The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP).
LLMs are prone to hallucination, generating plausible yet nonfactual content.
This phenomenon raises significant concerns over the reliability of LLMs in real-world information retrieval systems.
arXiv Detail & Related papers (2023-11-09T09:25:37Z)
- Towards Mitigating Hallucination in Large Language Models via Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets (a generic self-reflection loop, in the spirit of the title, is sketched after this list).
arXiv Detail & Related papers (2023-10-10T03:05:44Z)
- Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models [116.01843550398183]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks.
LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
arXiv Detail & Related papers (2023-09-03T16:56:48Z)
- Evaluation and Analysis of Hallucination in Large Vision-Language Models [49.19829480199372]
Large Vision-Language Models (LVLMs) have recently achieved remarkable success.
LVLMs are still plagued by the hallucination problem.
Hallucination refers to information in LVLMs' responses that does not exist in the visual input.
arXiv Detail & Related papers (2023-08-29T08:51:24Z)
- Evaluating Object Hallucination in Large Vision-Language Models [122.40337582958453]
This work presents the first systematic study on object hallucination of large vision-language models (LVLMs).
We find that LVLMs tend to generate descriptions containing objects that are inconsistent with the target images.
We propose a polling-based query method called POPE to evaluate object hallucination (a simplified polling sketch appears after this list).
arXiv Detail & Related papers (2023-05-17T16:34:01Z)
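The sketches below illustrate ideas named in the entries above; they are minimal approximations, not the cited papers' actual implementations.

The first sketch relates to "Triggering Hallucinations in LLMs": a minimal prompt-based probe that asks a model about fabricated entities and counts confident, non-refusing answers. The `query_model` stand-in, the refusal markers, and the fabricated prompts are assumptions for illustration, not the paper's protocol.

```python
# Minimal sketch of a prompt-based hallucination probe (illustrative only).
# `query_model` is a stand-in for any chat-completion call; it is assumed here.

from typing import Callable, List

REFUSAL_MARKERS = ["i don't know", "i am not aware", "no information", "does not exist"]

def hallucination_rate(query_model: Callable[[str], str], prompts: List[str]) -> float:
    """Fraction of prompts about fabricated entities that the model answers
    confidently instead of refusing -- a crude proxy for induced hallucination."""
    hallucinated = 0
    for prompt in prompts:
        answer = query_model(prompt).lower()
        if not any(marker in answer for marker in REFUSAL_MARKERS):
            hallucinated += 1
    return hallucinated / len(prompts) if prompts else 0.0

if __name__ == "__main__":
    # These prompts reference entities invented for this sketch; a grounded
    # model should refuse or express uncertainty rather than describe them.
    fabricated_prompts = [
        "Summarize the 1987 novel 'The Glass Cartographer' by Miriam Voss.",
        "Explain the main result of the Henley-Okafor theorem in graph theory.",
    ]
    rate = hallucination_rate(lambda p: "I don't know of that work.", fabricated_prompts)
    print(f"Induced hallucination rate: {rate:.2f}")
```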
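"Delusions of Large Language Models" characterizes delusions as incorrect outputs produced with abnormally high confidence. One common way to operationalize "confidence" is the mean token log-probability of the generated sequence; the sketch below uses that proxy together with an assumed external fact-checker, which is not the paper's detection pipeline.

```python
# Sketch: flag "delusion-like" outputs -- wrong answers generated with high
# model confidence. Confidence is the geometric-mean token probability, and
# `is_factually_correct` is an assumed external checker (both are proxies).

import math
from dataclasses import dataclass
from typing import List

@dataclass
class Generation:
    text: str
    token_logprobs: List[float]  # log-probability of each generated token

def mean_confidence(gen: Generation) -> float:
    """Geometric-mean token probability, i.e. exp(mean log-prob)."""
    if not gen.token_logprobs:
        return 0.0
    return math.exp(sum(gen.token_logprobs) / len(gen.token_logprobs))

def is_delusion(gen: Generation, is_factually_correct, threshold: float = 0.9) -> bool:
    """High confidence AND factually wrong -> delusion-like output."""
    return mean_confidence(gen) >= threshold and not is_factually_correct(gen.text)

if __name__ == "__main__":
    confident_but_wrong = Generation(
        text="The Eiffel Tower is in Berlin.",
        token_logprobs=[-0.02, -0.05, -0.01, -0.03, -0.04],
    )
    print(is_delusion(confident_but_wrong, is_factually_correct=lambda t: False))  # True
```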
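"Look Within, Why LLMs Hallucinate" intervenes in the self-attention layers while leaving the model's structure and size unchanged. The exact intervention is not described in the summary above; the sketch below shows one generic, non-invasive way to do such an intervention, a PyTorch forward hook that rescales attention-module outputs, assuming a Hugging Face-style causal LM. It is a placeholder for, not a reproduction of, the paper's causal method.

```python
# Sketch: intervene on self-attention outputs via forward hooks, leaving the
# model architecture untouched. The rescaling intervention and the class-name
# filter ("attention") are illustrative assumptions, not the paper's method.

import torch
from torch import nn

def scale_attention_outputs(model: nn.Module, scale: float = 0.9):
    """Register hooks that rescale the output of every attention submodule."""
    def hook(_module, _inputs, output):
        # Attention modules often return a tuple (hidden_states, ...); rescale
        # only the hidden states and leave auxiliary outputs untouched.
        if isinstance(output, tuple):
            return (output[0] * scale,) + output[1:]
        return output * scale

    handles = []
    for _name, module in model.named_modules():
        if "attention" in type(module).__name__.lower():
            handles.append(module.register_forward_hook(hook))
    return handles  # call h.remove() on each handle to undo the intervention

if __name__ == "__main__":
    # Example with a small Hugging Face model (assumes `transformers` is installed).
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    handles = scale_attention_outputs(model, scale=0.9)
    out = model.generate(**tok("The capital of France is", return_tensors="pt"),
                         max_new_tokens=5)
    print(tok.decode(out[0]))
    for h in handles:
        h.remove()
```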
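The entry "Towards Mitigating Hallucination in Large Language Models via Self-Reflection" points to self-reflection as the mitigation strategy. The loop below is a generic draft-critique-revise sketch of that idea, with a hypothetical `chat` function standing in for any LLM call and illustrative prompts; it is not the paper's specific methodology.

```python
# Generic self-reflection loop: draft an answer, ask the model to critique it
# for unsupported claims, then revise. `chat` is an assumed stand-in for any
# chat-completion API; the prompts are illustrative, not taken from the paper.

from typing import Callable

def self_reflect(chat: Callable[[str], str], question: str, rounds: int = 2) -> str:
    answer = chat(f"Answer the question concisely:\n{question}")
    for _ in range(rounds):
        critique = chat(
            "List any claims in the answer below that may be unsupported or "
            f"incorrect, or reply 'OK' if none.\nQuestion: {question}\nAnswer: {answer}"
        )
        if critique.strip().lower().startswith("ok"):
            break
        answer = chat(
            "Revise the answer to fix the issues in the critique, avoiding "
            f"anything you are unsure of.\nQuestion: {question}\n"
            f"Answer: {answer}\nCritique: {critique}"
        )
    return answer

if __name__ == "__main__":
    # Canned responses simulate one draft, one critique, one revision, then "OK".
    canned = iter(["Drug X cures condition Y.",
                   "The claim of a cure is unsupported.",
                   "Drug X has shown some benefit for condition Y in small trials.",
                   "OK"])
    print(self_reflect(lambda prompt: next(canned), "Does drug X treat condition Y?"))
```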
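POPE, as summarized above, evaluates object hallucination by polling the model with yes/no questions about whether specific objects appear in an image. The sketch below implements that polling-and-scoring idea in simplified form; the question template, the answer parsing, and the reported metrics (accuracy and yes-rate) are assumptions, and POPE's random/popular/adversarial negative-object sampling is omitted.

```python
# Simplified POPE-style polling: ask yes/no existence questions about objects
# and score the answers against ground truth. The prompt template and parsing
# are illustrative; negative-object sampling strategies are not implemented.

from typing import Callable, Dict, List

def pope_poll(ask: Callable[[str, str], str],
              image_id: str,
              present: List[str],
              absent: List[str]) -> Dict[str, float]:
    """`ask(image_id, question)` should return the model's free-form answer."""
    correct, yes_count, total = 0, 0, 0
    for obj, truth in [(o, True) for o in present] + [(o, False) for o in absent]:
        answer = ask(image_id, f"Is there a {obj} in the image? Answer yes or no.")
        said_yes = answer.strip().lower().startswith("yes")
        yes_count += said_yes
        correct += (said_yes == truth)
        total += 1
    return {"accuracy": correct / total, "yes_rate": yes_count / total}

if __name__ == "__main__":
    # Toy model that claims every object is present (a maximal hallucinator).
    scores = pope_poll(lambda img, q: "Yes, it is.", image_id="img_001",
                       present=["dog", "ball"], absent=["giraffe", "piano"])
    print(scores)  # {'accuracy': 0.5, 'yes_rate': 1.0}
```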
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences of its use.