LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples
- URL: http://arxiv.org/abs/2310.01469v2
- Date: Wed, 4 Oct 2023 17:53:49 GMT
- Title: LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples
- Authors: Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Li Yuan
- Abstract summary: We show that nonsensical prompts composed of random tokens can also elicit hallucinated responses from LLMs.
This phenomenon suggests that hallucination may be viewed as another form of adversarial example.
We formalize an automatic hallucination-triggering method, the hallucination attack, in an adversarial manner.
- Score: 15.528923770249774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs), including GPT-3.5, LLaMA, and PaLM,
seem to be knowledgeable and able to adapt to many tasks. However, we still
cannot completely trust their answers, since LLMs suffer from
hallucination: fabricating non-existent facts without the user's awareness.
The reasons for the existence and pervasiveness of hallucinations remain
unclear. In this paper, we demonstrate that nonsensical prompts composed of
random tokens can also elicit hallucinated responses from LLMs. This
phenomenon suggests that hallucination may be viewed as another form of
adversarial example, one that shares similar features with conventional
adversarial examples as a basic property of LLMs. We therefore formalize an
automatic hallucination-triggering method, the hallucination attack, in an
adversarial manner. Finally, we examine the basic features of the attacked
adversarial prompts and propose a simple yet effective defense strategy. Our
code is released on GitHub.
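As a rough illustration of what an automatic hallucination-triggering search could look like, the sketch below performs a greedy token-substitution search starting from a nonsense prompt. Note that `score_fn` is a hypothetical stand-in: the paper's actual attack optimizes against the target LLM (e.g. its likelihood of emitting a chosen hallucinated answer), which is mocked here with a toy overlap score.

```python
import random

def hallucination_attack(prompt, vocab, score_fn, iters=200, seed=0):
    """Greedy token-substitution search: start from a (possibly
    nonsensical) prompt and repeatedly swap single tokens, keeping a
    swap whenever it raises the attacker's objective."""
    rng = random.Random(seed)
    best_prompt, best = list(prompt), score_fn(prompt)
    for _ in range(iters):
        cand = list(best_prompt)
        cand[rng.randrange(len(cand))] = rng.choice(vocab)
        s = score_fn(cand)
        if s > best:  # keep the swap only if it improves the objective
            best_prompt, best = cand, s
    return best_prompt, best

# Toy demo: the "target LLM likelihood" is faked as token overlap with a
# fixed target sequence; a real attack would query the model itself.
vocab = [f"tok{i}" for i in range(50)]
target = ["tok7", "tok3", "tok42", "tok0", "tok19", "tok5", "tok11", "tok2"]
toy_score = lambda p: sum(a == b for a, b in zip(p, target))
init = [random.Random(1).choice(vocab) for _ in range(8)]
adv, score = hallucination_attack(init, vocab, toy_score)
```

By construction the returned score never falls below the starting prompt's score; swapping the toy scorer for a model-based objective is what turns this skeleton into an actual attack.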
Related papers
- WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries [64.239202960816]
We introduce WildHallucinations, a benchmark that evaluates factuality.
It does so by prompting large language models to generate information about entities mined from user-chatbot conversations in the wild.
We evaluate 118,785 generations from 15 LLMs on 7,919 entities.
arXiv Detail & Related papers (2024-07-24T17:59:05Z)
- Look Within, Why LLMs Hallucinate: A Causal Perspective [16.874588396996764]
Large language models (LLMs) are a milestone in generative artificial intelligence, achieving significant success in text comprehension and generation tasks.
LLMs suffer from severe hallucination problems, posing significant challenges to the practical applications of LLMs.
We propose a method that intervenes in LLMs' self-attention layers while keeping their structures and sizes intact.
arXiv Detail & Related papers (2024-07-14T10:47:44Z) - Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? [53.89380284760555]
Large vision-language models (LVLMs) produce captions that mention concepts that cannot be found in the image.
These hallucinations erode the trustworthiness of LVLMs and are arguably among the main obstacles to their ubiquitous adoption.
Recent work suggests that addition of grounding objectives -- those that explicitly align image regions or objects to text spans -- reduces the amount of LVLM hallucination.
arXiv Detail & Related papers (2024-06-20T16:56:11Z) - Do LLMs Know about Hallucination? An Empirical Investigation of LLM's
Hidden States [19.343629282494774]
Large Language Models (LLMs) can fabricate answers that are not real, a behavior known as hallucination.
This research investigates whether, how, and to what extent LLMs are aware of hallucination.
arXiv Detail & Related papers (2024-02-15T06:14:55Z) - Hallucination is Inevitable: An Innate Limitation of Large Language
Models [3.8711997449980844]
We show that it is impossible to eliminate hallucination in large language models.
Since the formal world is only a part of the far more complicated real world, hallucinations are also inevitable for real-world LLMs.
arXiv Detail & Related papers (2024-01-22T10:26:14Z) - The Dawn After the Dark: An Empirical Study on Factuality Hallucination
in Large Language Models [134.6697160940223]
Hallucination poses a great challenge to the trustworthy and reliable deployment of large language models.
Three key questions should be well studied: how to detect hallucinations (detection), why LLMs hallucinate (source), and what can be done to mitigate them (mitigation).
This work presents a systematic empirical study on LLM hallucination, focused on the three aspects of detection, source, and mitigation.
arXiv Detail & Related papers (2024-01-06T12:40:45Z) - Alleviating Hallucinations of Large Language Models through Induced
Hallucinations [67.35512483340837]
Large language models (LLMs) have been observed to generate responses that include inaccurate or fabricated information.
We propose a simple Induce-then-Contrast Decoding (ICD) strategy to alleviate hallucinations.
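The core of an induce-then-contrast style decoder can be sketched as follows, assuming we already have next-token logits from the original model and from a deliberately induced, hallucination-prone variant; the penalty weight `alpha` and the toy logits are illustrative, not the paper's actual settings.

```python
import math

def contrast_decode(orig_logits, induced_logits, alpha=1.0):
    """Contrastive scoring (sketch): boost tokens the original model
    prefers and penalize tokens the induced, hallucination-prone model
    prefers. Returns a next-token probability distribution."""
    adj = [(1 + alpha) * o - alpha * h
           for o, h in zip(orig_logits, induced_logits)]
    m = max(adj)
    exps = [math.exp(a - m) for a in adj]  # numerically stable softmax
    z = sum(exps)
    return [e / z for e in exps]

# Toy vocabulary of 3 tokens: the induced model over-prefers token 2
# (a fabricated fact), so the contrast shifts mass toward token 0.
orig = [1.9, 1.0, 2.0]
induced = [0.5, 1.0, 3.0]
probs = contrast_decode(orig, induced, alpha=1.0)
```

In this toy case greedy decoding on `orig` alone would pick token 2, while the contrasted distribution prefers token 0, which is the intended demotion of tokens the induced model favors.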
arXiv Detail & Related papers (2023-12-25T12:32:49Z) - HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large
Language Models [146.87696738011712]
Large language models (LLMs) are prone to generating hallucinations, i.e., content that conflicts with the source or cannot be verified against factual knowledge.
To understand what types of content and to which extent LLMs are apt to hallucinate, we introduce the Hallucination Evaluation benchmark for Large Language Models (HaluEval)
arXiv Detail & Related papers (2023-05-19T15:36:27Z) - Evaluating Object Hallucination in Large Vision-Language Models [122.40337582958453]
This work presents the first systematic study on object hallucination of large vision-language models (LVLMs).
We find that LVLMs tend to generate objects that are inconsistent with the target images in the descriptions.
We propose a polling-based query method called POPE to evaluate the object hallucination.
arXiv Detail & Related papers (2023-05-17T16:34:01Z)
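To make the polling idea behind POPE concrete, a minimal scoring sketch is shown below: the LVLM is polled with yes/no questions about objects that are either present in or absent from the image, and a "yes" on an absent object counts as an object hallucination. The function name and the toy poll are illustrative, not the benchmark's actual interface.

```python
def pope_eval(answers, labels):
    """Score yes/no polling answers against ground truth.
    answers: the model's 'yes'/'no' reply per probed object;
    labels: True if the object is actually in the image.
    A 'yes' on an absent object is a hallucination (false positive)."""
    tp = sum(a == "yes" and l for a, l in zip(answers, labels))
    fp = sum(a == "yes" and not l for a, l in zip(answers, labels))
    fn = sum(a == "no" and l for a, l in zip(answers, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy poll: 3 present objects, 3 absent; the model wrongly answers "yes"
# on one absent object (an object hallucination) and misses one present one.
answers = ["yes", "yes", "no", "no", "yes", "no"]
labels  = [True,  True,  True, False, False, False]
metrics = pope_eval(answers, labels)
```

Because the probes are closed-form yes/no questions, the metric is computable without parsing free-form captions, which is the practical appeal of polling-based evaluation.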
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.