Halo: Estimation and Reduction of Hallucinations in Open-Source Weak
Large Language Models
- URL: http://arxiv.org/abs/2308.11764v4
- Date: Wed, 13 Sep 2023 18:01:36 GMT
- Title: Halo: Estimation and Reduction of Hallucinations in Open-Source Weak
Large Language Models
- Authors: Mohamed Elaraby, Mengyin Lu, Jacob Dunn, Xueying Zhang, Yu Wang,
Shizhu Liu, Pingchuan Tian, Yuping Wang, Yuxuan Wang
- Abstract summary: Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP).
Open-source LLMs with fewer parameters often suffer from severe hallucinations compared to their larger counterparts.
This paper focuses on measuring and reducing hallucinations in BLOOM 7B, a representative of such weaker open-source LLMs.
- Score: 11.497989461290793
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Large Language Models (LLMs) have revolutionized Natural Language Processing
(NLP). Although convenient for research and practical applications, open-source
LLMs with fewer parameters often suffer from severe hallucinations compared to
their larger counterparts. This paper focuses on measuring and reducing
hallucinations in BLOOM 7B, a representative of such weaker open-source LLMs
that are publicly available for research and commercial applications. We
introduce HaloCheck, a lightweight BlackBox knowledge-free framework designed
to quantify the severity of hallucinations in LLMs. Additionally, we explore
techniques like knowledge injection and teacher-student approaches to alleviate
hallucinations in low-parameter LLMs. Our experiments effectively demonstrate
the reduction of hallucinations in challenging domains for these LLMs.
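The abstract does not spell out how HaloCheck scores hallucination severity, so the following is only a minimal sketch of a black-box, knowledge-free consistency check in the same spirit: sample several completions for one prompt and read low cross-sample agreement as a warning sign. The lexical_agreement scorer and the sample_completions hook are illustrative assumptions, not the paper's actual components.

```python
from itertools import combinations
from typing import Callable, List

def lexical_agreement(a: str, b: str) -> float:
    """Jaccard word overlap; a crude stand-in for an entailment-based scorer."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def consistency_score(samples: List[str],
                      agreement: Callable[[str, str], float] = lexical_agreement) -> float:
    """Average pairwise agreement over k sampled completions.
    Low scores mean the model disagrees with itself, which a
    HaloCheck-style metric would read as likely hallucination."""
    pairs = list(combinations(samples, 2))
    if not pairs:
        return 1.0
    return sum(agreement(a, b) for a, b in pairs) / len(pairs)

# Usage (sample_completions(prompt, k) is a hypothetical hook into the LLM under test):
# samples = sample_completions("Describe BLOOM 7B's training data.", k=5)
# print(consistency_score(samples))  # closer to 0 -> more likely hallucinated
```

An entailment or NLI model could replace the lexical scorer to approximate sentence-level consistency more faithfully.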
Related papers
- A Novel Approach to Eliminating Hallucinations in Large Language Model-Assisted Causal Discovery [21.2023350773338]
We show that hallucinations exist when using large language models (LLMs) in causal discovery.
We propose using Retrieval Augmented Generation (RAG) to reduce hallucinations when quality data is available (a minimal RAG sketch appears after this list).
arXiv Detail & Related papers (2024-11-16T03:06:39Z) - A Survey of Hallucination in Large Visual Language Models [48.794850395309076]
The existence of hallucinations has limited the potential and practical effectiveness of LVLMs in various fields.
The structure of LVLMs and the main causes of hallucination are introduced.
The available hallucination evaluation benchmarks for LVLMs are presented.
arXiv Detail & Related papers (2024-10-20T10:58:58Z) - SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection [10.54378596443678]
Large language models (LLMs) are highly capable but face latency challenges in real-time applications.
This study optimizes real-time interpretable hallucination detection by introducing effective prompting techniques.
arXiv Detail & Related papers (2024-08-22T22:13:13Z) - Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models [70.19081534515371]
Large Language Models (LLMs) have gained widespread adoption in various natural language processing tasks.
They can generate unfaithful or inconsistent content that deviates from the input source, which can lead to severe consequences.
We propose a robust discriminator named RelD to effectively detect hallucination in LLMs' generated answers.
arXiv Detail & Related papers (2024-07-04T18:47:42Z) - Exploring and Evaluating Hallucinations in LLM-Powered Code Generation [14.438161741833687]
Large Language Models (LLMs) can produce outputs that deviate from users' intent, exhibit internal inconsistencies, or misalign with factual knowledge.
Existing work mainly focuses on investigating hallucination in the domain of natural language generation.
We conduct a thematic analysis of the LLM-generated code to summarize and categorize the hallucinations present in it.
We propose HalluCode, a benchmark for evaluating the performance of code LLMs in recognizing hallucinations.
arXiv Detail & Related papers (2024-04-01T07:31:45Z) - Unsupervised Real-Time Hallucination Detection based on the Internal States of Large Language Models [12.27217471495276]
When large language models (LLMs) hallucinate, they produce responses that are coherent but factually inaccurate.
We present MIND, an unsupervised training framework that leverages the internal states of LLMs for real-time hallucination detection.
We also present HELM, a new benchmark for evaluating hallucination detection across multiple LLMs.
arXiv Detail & Related papers (2024-03-11T05:51:03Z) - The Dawn After the Dark: An Empirical Study on Factuality Hallucination
in Large Language Models [134.6697160940223]
Hallucination poses a great challenge to the trustworthy and reliable deployment of large language models.
Three key questions should be well studied: how to detect hallucinations (detection), why LLMs hallucinate (source), and what can be done to mitigate them (mitigation).
This work presents a systematic empirical study on LLM hallucination, focused on the three aspects of detection, source, and mitigation.
arXiv Detail & Related papers (2024-01-06T12:40:45Z) - A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions [40.79317187623401]
The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP).
LLMs are prone to hallucination, generating plausible yet nonfactual content.
This phenomenon raises significant concerns over the reliability of LLMs in real-world information retrieval systems.
arXiv Detail & Related papers (2023-11-09T09:25:37Z) - Siren's Song in the AI Ocean: A Survey on Hallucination in Large
Language Models [116.01843550398183]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks.
LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
arXiv Detail & Related papers (2023-09-03T16:56:48Z) - Evaluation and Analysis of Hallucination in Large Vision-Language Models [49.19829480199372]
Large Vision-Language Models (LVLMs) have recently achieved remarkable success.
LVLMs are still plagued by the hallucination problem.
Hallucination refers to information in LVLMs' responses that does not exist in the visual input.
arXiv Detail & Related papers (2023-08-29T08:51:24Z) - Contrastive Learning Reduces Hallucination in Conversations [76.55116206021346]
We propose a contrastive learning scheme, named MixCL.
A novel mixed contrastive objective is proposed to explicitly optimize the implicit knowledge elicitation process of LMs.
We show that MixCL achieves comparable performance to state-of-the-art KB-based approaches.
arXiv Detail & Related papers (2022-12-20T16:26:18Z)
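The RAG entry above only notes that retrieval can curb hallucination when quality data is available; the sketch referenced from that item shows the general retrieve-then-prompt pattern rather than that paper's pipeline. The TF-IDF retriever and the generate() call are assumptions for illustration, not APIs from the cited work.

```python
from typing import List
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(query: str, corpus: List[str], k: int = 3) -> List[str]:
    """Rank corpus passages by TF-IDF cosine similarity to the query."""
    vec = TfidfVectorizer().fit(corpus + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def rag_prompt(question: str, corpus: List[str]) -> str:
    """Prepend retrieved passages so the model answers from evidence, not memory."""
    context = "\n".join(f"- {p}" for p in retrieve(question, corpus))
    return ("Answer using only the context below. "
            "If the context is insufficient, say so.\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

# answer = generate(rag_prompt(question, corpus))  # generate() is a hypothetical LLM call
```

Grounding the prompt in retrieved passages is one common way to reduce hallucination in low-parameter models, complementary to the knowledge-injection and teacher-student techniques discussed in the Halo paper.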