A Comprehensive Survey of Hallucination Mitigation Techniques in Large
Language Models
- URL: http://arxiv.org/abs/2401.01313v3
- Date: Mon, 8 Jan 2024 16:19:17 GMT
- Title: A Comprehensive Survey of Hallucination Mitigation Techniques in Large
Language Models
- Authors: S.M Towhidul Islam Tonmoy, S M Mehedi Zaman, Vinija Jain, Anku Rani,
Vipula Rawte, Aman Chadha, Amitava Das
- Abstract summary: Large Language Models (LLMs) continue to advance in their ability to write human-like text.
A key challenge remains their tendency to hallucinate: generating content that appears factual but is ungrounded.
This paper presents a survey of over 32 techniques developed to mitigate hallucination in LLMs.
- Score: 7.705767540805267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As Large Language Models (LLMs) continue to advance in their ability to write
human-like text, a key challenge remains their tendency to hallucinate:
generating content that appears factual but is ungrounded. This issue of
hallucination is arguably the biggest hindrance to safely deploying these
powerful LLMs into real-world production systems that impact people's lives.
The journey toward widespread adoption of LLMs in practical settings heavily
relies on addressing and mitigating hallucinations. Unlike traditional AI
systems focused on limited tasks, LLMs have been exposed to vast amounts of
online text data during training. While this allows them to display impressive
language fluency, it also means they are capable of extrapolating information
from the biases in training data, misinterpreting ambiguous prompts, or
modifying the information to align superficially with the input. This becomes
hugely alarming when we rely on language generation capabilities for sensitive
applications, such as summarizing medical records, financial analysis reports,
etc. This paper presents a comprehensive survey of over 32 techniques developed
to mitigate hallucination in LLMs. Notable among these are Retrieval Augmented
Generation (Lewis et al., 2021), Knowledge Retrieval (Varshney et al., 2023),
CoNLI (Lei et al., 2023), and CoVe (Dhuliawala et al., 2023). Furthermore, we
introduce a detailed taxonomy categorizing these methods based on various
parameters, such as dataset utilization, common tasks, feedback mechanisms, and
retriever types. This classification helps distinguish the diverse approaches
specifically designed to tackle hallucination issues in LLMs. Additionally, we
analyze the challenges and limitations inherent in these techniques, providing
a solid foundation for future research in addressing hallucinations and related
phenomena within the realm of LLMs.
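To make the first family of mitigation techniques concrete, the sketch below illustrates the general Retrieval Augmented Generation pattern named in the abstract: evidence passages are retrieved for a query and prepended to the prompt so the answer is grounded in them rather than in the model's parametric memory alone. The tiny in-memory corpus, the keyword-overlap retriever, and the `generate` placeholder are illustrative assumptions for this sketch, not the systems evaluated in the survey.

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) pattern.
# The in-memory corpus, the overlap-based retriever, and the `generate`
# placeholder are illustrative assumptions, not the surveyed systems.

CORPUS = [
    "The Eiffel Tower was completed in 1889 and stands in Paris, France.",
    "Aspirin is a nonsteroidal anti-inflammatory drug used to relieve pain and fever.",
    "Large language models are trained on vast corpora of text collected from the web.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query (stand-in for a real retriever)."""
    query_tokens = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(query_tokens & set(p.lower().split())))
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would send the prompt to a generator model."""
    return f"<model answer conditioned on a {len(prompt)}-character grounded prompt>"

def rag_answer(query: str) -> str:
    passages = retrieve(query, CORPUS)
    # Grounding the prompt in retrieved evidence is what discourages ungrounded generation.
    prompt = "Answer the question using only the evidence below.\n"
    prompt += "\n".join(f"Evidence: {p}" for p in passages)
    prompt += f"\nQuestion: {query}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("When was the Eiffel Tower completed?"))
```

A production variant would swap the overlap scorer for a dense or BM25 retriever and `generate` for an actual model call, but the grounding pattern itself is the common core of the retrieval-based techniques surveyed.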
Related papers
- Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective [5.769786334333616]
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) based applications including automated text generation, question answering, and others.
They face a significant challenge: hallucinations, where models produce plausible-sounding but factually incorrect responses.
This paper discusses these open challenges covering state-of-the-art datasets and benchmarks as well as methods for knowledge integration and evaluating hallucinations.
arXiv Detail & Related papers (2024-11-21T16:09:05Z) - Investigating the Role of Prompting and External Tools in Hallucination Rates of Large Language Models [0.0]
Large Language Models (LLMs) are powerful computational models trained on extensive corpora of human-readable text, enabling them to perform general-purpose language understanding and generation.
Despite these successes, LLMs often produce inaccuracies, commonly referred to as hallucinations.
This paper provides an empirical evaluation of different prompting strategies and frameworks aimed at reducing hallucinations in LLMs.
arXiv Detail & Related papers (2024-10-25T08:34:53Z) - Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models [70.19081534515371]
Large Language Models (LLMs) have gained widespread adoption in various natural language processing tasks.
However, they can generate unfaithful or inconsistent content that deviates from the input source, leading to severe consequences.
We propose a robust discriminator named RelD to effectively detect hallucination in LLMs' generated answers.
arXiv Detail & Related papers (2024-07-04T18:47:42Z) - Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models [68.91592125175787]
Hallucinations pose a significant challenge for the practical implementation of large language models (LLMs).
We present Rowen, a novel approach that enhances LLMs with a selective retrieval augmentation process tailored to address hallucinations (a minimal sketch of this retrieve-only-when-uncertain idea appears after this list).
arXiv Detail & Related papers (2024-02-16T11:55:40Z) - Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus [99.33091772494751]
Large Language Models (LLMs) have gained significant popularity for their impressive performance across diverse fields.
LLMs are prone to hallucinate untruthful or nonsensical outputs that fail to meet user expectations.
We propose a novel reference-free, uncertainty-based method for detecting hallucinations in LLMs.
arXiv Detail & Related papers (2023-11-22T08:39:17Z) - Insights into Classifying and Mitigating LLMs' Hallucinations [48.04565928175536]
This paper delves into the underlying causes of AI hallucination and elucidates its significance in artificial intelligence.
We explore potential strategies to mitigate hallucinations, aiming to enhance the overall reliability of large language models.
arXiv Detail & Related papers (2023-11-14T12:30:28Z) - A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions [40.79317187623401]
The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP).
LLMs are prone to hallucination, generating plausible yet nonfactual content.
This phenomenon raises significant concerns over the reliability of LLMs in real-world information retrieval systems.
arXiv Detail & Related papers (2023-11-09T09:25:37Z) - Towards Mitigating Hallucination in Large Language Models via
Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
arXiv Detail & Related papers (2023-10-10T03:05:44Z) - Zero-Resource Hallucination Prevention for Large Language Models [45.4155729393135]
"Hallucination" refers to instances where large language models (LLMs) generate factually inaccurate or ungrounded information.
We introduce a novel self-evaluation technique, referred to as SELF-FAMILIARITY, which evaluates the model's familiarity with the concepts present in the input instruction before generating a response.
We validate SELF-FAMILIARITY across four different large language models, demonstrating consistently superior performance compared to existing techniques.
arXiv Detail & Related papers (2023-09-06T01:57:36Z) - Siren's Song in the AI Ocean: A Survey on Hallucination in Large
Language Models [116.01843550398183]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks.
LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
arXiv Detail & Related papers (2023-09-03T16:56:48Z)
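As a complement to the entries above on selective retrieval augmentation (Rowen) and reference-free, uncertainty-based detection, the sketch below shows one way such uncertainty-gated retrieval can be wired together: a draft answer is scored by the mean negative log-probability of its tokens, and external evidence is retrieved only when that score crosses a threshold. The threshold value and the retrieval/regeneration helpers are hypothetical placeholders for this illustration, not the methods proposed in those papers.

```python
# Minimal sketch of uncertainty-gated ("retrieve only when it needs") mitigation.
# The threshold and helper functions are hypothetical placeholders, not any paper's API.

def mean_negative_log_prob(token_logprobs: list[float]) -> float:
    """Average negative log-probability of the answer tokens; higher means less confident."""
    return -sum(token_logprobs) / max(len(token_logprobs), 1)

def needs_retrieval(token_logprobs: list[float], threshold: float = 1.5) -> bool:
    """Flag a draft answer as potentially hallucinated when its uncertainty exceeds the threshold."""
    return mean_negative_log_prob(token_logprobs) > threshold

def retrieve_evidence(query: str) -> list[str]:
    """Placeholder retriever; a real system would query a search index or knowledge base."""
    return [f"<evidence passage relevant to: {query}>"]

def regenerate_with_evidence(query: str, evidence: list[str]) -> str:
    """Placeholder for re-prompting the model with the retrieved evidence prepended."""
    return f"<grounded answer to '{query}' using {len(evidence)} evidence passage(s)>"

def answer_with_gating(query: str, draft_answer: str, token_logprobs: list[float]) -> str:
    if not needs_retrieval(token_logprobs):
        return draft_answer  # confident drafts skip the retrieval round-trip entirely
    return regenerate_with_evidence(query, retrieve_evidence(query))

# A confident draft (small negative log-probs) is kept; an uncertain one triggers retrieval.
print(answer_with_gating("Who wrote Hamlet?", "William Shakespeare wrote Hamlet.", [-0.1, -0.2, -0.3, -0.1]))
print(answer_with_gating("Who wrote Hamlet?", "Christopher Marlowe wrote Hamlet.", [-2.6, -3.0, -1.9, -2.2]))
```

The gating is what keeps retrieval cost down: confident answers are served directly, and only uncertain ones pay for retrieval and grounded regeneration.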