Towards Mitigating Hallucination in Large Language Models via
Self-Reflection
- URL: http://arxiv.org/abs/2310.06271v1
- Date: Tue, 10 Oct 2023 03:05:44 GMT
- Title: Towards Mitigating Hallucination in Large Language Models via
Self-Reflection
- Authors: Ziwei Ji, Tiezheng Yu, Yan Xu, Nayeon Lee, Etsuko Ishii, Pascale Fung
- Abstract summary: Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
- Score: 63.2543947174318
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have shown promise for generative and
knowledge-intensive tasks including question-answering (QA) tasks. However, their
practical deployment still faces challenges, notably the issue of
"hallucination", where models generate plausible-sounding but unfaithful or
nonsensical information. This issue becomes particularly critical in the
medical domain due to the uncommon professional concepts and potential social
risks involved. This paper analyses the phenomenon of hallucination in medical
generative QA systems using widely adopted LLMs and datasets. Our investigation
centers on the identification and comprehension of common problematic answers,
with a specific emphasis on hallucination. To tackle this challenge, we present
an interactive self-reflection methodology that incorporates knowledge
acquisition and answer generation. Through this feedback process, our approach
steadily enhances the factuality, consistency, and entailment of the generated
answers. Consequently, we harness the interactivity and multitasking ability of
LLMs and produce progressively more precise and accurate answers. Experimental
results on both automatic and human evaluation demonstrate the superiority of
our approach in hallucination reduction compared to baselines.
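The abstract describes the self-reflection loop only at a high level. The sketch below is a minimal, hypothetical rendering of such a generate-then-reflect loop, assuming a generic `llm(prompt)` completion call; the prompts and the two-stage structure (knowledge acquisition, then answer generation) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an interactive self-reflection loop for generative QA.
# `llm(prompt)` stands in for any chat/completion API; the prompts and the
# two-stage structure follow the abstract only loosely and are not the
# authors' implementation.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your preferred LLM API here")


def self_reflective_answer(question: str, max_rounds: int = 3) -> str:
    # Stage 1: knowledge acquisition -- generate background knowledge, then ask
    # the model to critique its factuality and refine it until it passes.
    knowledge = llm(f"Provide the background knowledge needed to answer:\n{question}")
    for _ in range(max_rounds):
        critique = llm(
            f"Question: {question}\nKnowledge: {knowledge}\n"
            "Is this knowledge factual and relevant? Reply OK or list the problems."
        )
        if critique.strip().upper().startswith("OK"):
            break
        knowledge = llm(
            f"Question: {question}\nKnowledge: {knowledge}\nProblems: {critique}\n"
            "Rewrite the knowledge so that these problems are fixed."
        )

    # Stage 2: answer generation -- draft an answer grounded in the refined
    # knowledge, then check consistency/entailment and revise as needed.
    answer = llm(f"Knowledge: {knowledge}\nAnswer the question: {question}")
    for _ in range(max_rounds):
        verdict = llm(
            f"Question: {question}\nKnowledge: {knowledge}\nAnswer: {answer}\n"
            "Is the answer entailed by the knowledge and consistent with the "
            "question? Reply OK or explain the inconsistency."
        )
        if verdict.strip().upper().startswith("OK"):
            break
        answer = llm(
            f"Question: {question}\nKnowledge: {knowledge}\nAnswer: {answer}\n"
            f"Issue: {verdict}\nRevise the answer to resolve the issue."
        )
    return answer
```

Each critique-and-revise round corresponds to the feedback process the abstract credits with steadily improving factuality, consistency, and entailment.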
Related papers
- Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models [70.19081534515371]
Large Language Models (LLMs) have gained widespread adoption in various natural language processing tasks.
However, they can generate unfaithful or inconsistent content that deviates from the input source, which can lead to severe consequences.
We propose a robust discriminator named RelD to effectively detect hallucination in LLMs' generated answers.
arXiv Detail & Related papers (2024-07-04T18:47:42Z)
- Hallucination of Multimodal Large Language Models: A Survey [40.73148186369018]
Multimodal large language models (MLLMs) have demonstrated significant advancements and remarkable abilities in multimodal tasks.
Despite these promising developments, MLLMs often generate outputs that are inconsistent with the visual content.
This survey aims to deepen the understanding of hallucinations in MLLMs and inspire further advancements in the field.
arXiv Detail & Related papers (2024-04-29T17:59:41Z)
- A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation [51.53917938874146]
We propose a possible solution for alleviating hallucination in knowledge-grounded dialogue (KGD) generation by exploiting the dialogue-knowledge interaction.
Experimental results of our example implementation show that this method can reduce hallucination without disrupting other dialogue performance.
arXiv Detail & Related papers (2024-04-04T14:45:26Z)
- Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models [68.91592125175787]
Hallucinations pose a significant challenge for the practical implementation of large language models (LLMs).
We present Rowen, a novel approach that enhances LLMs with a selective retrieval augmentation process tailored to address hallucinations.
arXiv Detail & Related papers (2024-02-16T11:55:40Z)
- Insights into Classifying and Mitigating LLMs' Hallucinations [48.04565928175536]
This paper delves into the underlying causes of AI hallucination and elucidates its significance in artificial intelligence.
We explore potential strategies to mitigate hallucinations, aiming to enhance the overall reliability of large language models.
arXiv Detail & Related papers (2023-11-14T12:30:28Z)
- A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions [42.007305423982515]
Large language models (LLMs) can produce hallucinations, i.e., content that is inconsistent with real-world facts or user inputs.
This survey aims to provide a thorough and in-depth overview of recent advances in the field of LLM hallucinations.
arXiv Detail & Related papers (2023-11-09T09:25:37Z)
- AutoHall: Automated Hallucination Dataset Generation for Large Language Models [56.92068213969036]
This paper introduces AutoHall, a method for automatically constructing model-specific hallucination datasets from existing fact-checking datasets.
We also propose a zero-resource and black-box hallucination detection method based on self-contradiction (a generic sketch of this idea is given after this list).
arXiv Detail & Related papers (2023-09-30T05:20:02Z)
- Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models [116.01843550398183]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks.
LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
arXiv Detail & Related papers (2023-09-03T16:56:48Z)
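The AutoHall entry above mentions zero-resource, black-box hallucination detection based on self-contradiction. The sketch below illustrates only the generic idea (sample several answers to the same question and check whether they contradict one another), assuming placeholder `llm_sample` and `contradicts` helpers; it is not the AutoHall method itself.

```python
# Hypothetical illustration of zero-resource, black-box hallucination detection
# via self-contradiction: sample several answers to the same question and flag
# the original answer if the samples contradict each other. This shows only the
# generic idea, not the AutoHall pipeline; `llm_sample` and `contradicts` are
# placeholders for an LLM sampling call and an NLI / LLM-judge check.

from itertools import combinations


def llm_sample(prompt: str, n: int = 5) -> list[str]:
    raise NotImplementedError("sample n completions from your LLM of choice")


def contradicts(a: str, b: str) -> bool:
    raise NotImplementedError("plug in an NLI model or an LLM-as-judge call")


def likely_hallucinated(question: str, answer: str, threshold: float = 0.3) -> bool:
    samples = llm_sample(f"Answer concisely: {question}")
    statements = [answer] + samples
    pairs = list(combinations(statements, 2))
    contradiction_rate = sum(contradicts(a, b) for a, b in pairs) / len(pairs)
    return contradiction_rate >= threshold
```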