Why LVLMs Are More Prone to Hallucinations in Longer Responses: The Role of Context
- URL: http://arxiv.org/abs/2510.20229v1
- Date: Thu, 23 Oct 2025 05:22:07 GMT
- Title: Why LVLMs Are More Prone to Hallucinations in Longer Responses: The Role of Context
- Authors: Ge Zheng, Jiaye Qian, Jiajin Tang, Sibei Yang
- Abstract summary: Large Vision-Language Models (LVLMs) have made significant progress in recent years but are prone to hallucination issues. In this paper, we ask: Does increased hallucination result solely from length-induced errors, or is there a deeper underlying mechanism? We propose a novel "induce-detect-suppress" framework that actively induces hallucinations through deliberately designed contexts.
- Score: 34.903722603279014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Vision-Language Models (LVLMs) have made significant progress in recent years but are also prone to hallucination issues. They exhibit more hallucinations in longer, free-form responses, often attributed to accumulated uncertainties. In this paper, we ask: Does increased hallucination result solely from length-induced errors, or is there a deeper underlying mechanism? After a series of preliminary experiments and findings, we suggest that the risk of hallucinations is not caused by length itself but by the increased reliance on context for coherence and completeness in longer responses. Building on these insights, we propose a novel "induce-detect-suppress" framework that actively induces hallucinations through deliberately designed contexts, leverages induced instances for early detection of high-risk cases, and ultimately suppresses potential object-level hallucinations during actual decoding. Our approach achieves consistent, significant improvements across all benchmarks, demonstrating its efficacy. The strong detection and improved hallucination mitigation not only validate our framework but, more importantly, re-validate our hypothesis on context. Rather than solely pursuing performance gains, this study aims to provide new insights and serves as a first step toward a deeper exploration of hallucinations in LVLMs' longer responses.
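The abstract describes the three stages of the framework only at a high level. Below is a minimal, hypothetical Python sketch of how such an induce-detect-suppress pipeline could be wired together; the LVLM interface (`generate`, `extract_objects`, `grounded_objects`), the inducing contexts, and the sentence-filtering suppression step are all illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of an "induce-detect-suppress" style pipeline.
# The model interface and all prompts below are illustrative assumptions,
# not the method described in the paper.
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class LVLM:
    """Stand-in for a vision-language model; replace with a real backend."""
    generate: Callable[[str, str], str]            # (image_id, prompt) -> response text
    extract_objects: Callable[[str], Set[str]]     # response text -> mentioned objects
    grounded_objects: Callable[[str], Set[str]]    # image_id -> objects actually present


def induce(model: LVLM, image_id: str, contexts: List[str]) -> Set[str]:
    """Stage 1: deliberately designed contexts coax the model into naming
    objects, revealing which hallucinations this image tends to trigger."""
    induced: Set[str] = set()
    for ctx in contexts:
        response = model.generate(image_id, ctx)
        induced |= model.extract_objects(response)
    # Anything induced but not visually grounded is treated as high-risk.
    return induced - model.grounded_objects(image_id)


def detect_and_suppress(model: LVLM, image_id: str, prompt: str,
                        risky: Set[str]) -> str:
    """Stages 2-3: flag high-risk objects early, then drop sentences that
    mention them (a crude stand-in for decoding-time suppression)."""
    response = model.generate(image_id, prompt)
    kept = [s for s in response.split(". ")
            if not any(obj in s.lower() for obj in risky)]
    return ". ".join(kept)


if __name__ == "__main__":
    # Toy stub: the image contains only a dog, but the model also mentions
    # a frisbee, mimicking a context-driven object hallucination.
    toy = LVLM(
        generate=lambda img, p: "A dog sits on grass. A frisbee lies nearby.",
        extract_objects=lambda text: {w.strip(".").lower() for w in text.split()
                                      if w.strip(".").lower() in {"dog", "frisbee"}},
        grounded_objects=lambda img: {"dog"},
    )
    risky = induce(toy, "img_001", ["List every object you would expect here."])
    print(detect_and_suppress(toy, "img_001", "Describe the image.", risky))
    # -> "A dog sits on grass"
```

The sketch only illustrates the control flow implied by the abstract: hallucinations are provoked on purpose, the provoked-but-ungrounded objects mark high-risk cases, and the final response is generated with those objects suppressed; the paper's actual detection and decoding-time suppression are more involved.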
Related papers
- Test-Time Scaling in Reasoning Models Is Not Effective for Knowledge-Intensive Tasks Yet [93.00109641811788]
Test-time scaling increases inference-time computation by allowing models to generate long reasoning chains. We show that this approach is not yet effective for knowledge-intensive tasks, where high factual accuracy and low hallucination rates are essential. Our results reveal that increasing test-time computation does not consistently improve accuracy and, in many cases, it even leads to more hallucinations.
arXiv Detail & Related papers (2025-09-08T16:28:25Z) - Two Causes, Not One: Rethinking Omission and Fabrication Hallucinations in MLLMs [31.601057368065877]
Existing methods, based on the flawed assumption that omission and fabrication hallucinations share a common cause, often reduce omissions only to trigger more fabrications. In this work, we overturn this view by demonstrating that omission hallucinations arise from insufficient confidence when mapping perceived visual features to linguistic expressions. We propose the Visual-Semantic Attention Potential Field, a conceptual framework that reveals how visual evidence is used to infer the presence or absence of objects.
arXiv Detail & Related papers (2025-08-30T05:47:41Z) - Mitigating Behavioral Hallucination in Multimodal Large Language Models for Sequential Images [6.48620624181578]
We introduce SHE (Sequence Hallucination Eradication), a lightweight framework that detects hallucinations and mitigates them. We also propose a new metric (BEACH) to quantify behavioral hallucination severity.
arXiv Detail & Related papers (2025-06-08T15:08:52Z) - Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models [0.0]
Hallucinations in large language models (LLMs) present a growing challenge across real-world applications. We propose a prompt-based framework to systematically trigger and quantify hallucination.
arXiv Detail & Related papers (2025-05-01T14:33:47Z) - HalluLens: LLM Hallucination Benchmark [49.170128733508335]
Large language models (LLMs) often generate responses that deviate from user input or training data, a phenomenon known as "hallucination." This paper introduces a comprehensive hallucination benchmark, incorporating both new extrinsic and existing intrinsic evaluation tasks.
arXiv Detail & Related papers (2025-04-24T13:40:27Z) - Delusions of Large Language Models [62.43923767408462]
Large Language Models often generate factually incorrect but plausible outputs, known as hallucinations. We identify a more insidious phenomenon, LLM delusion, defined as high-belief hallucinations: incorrect outputs produced with abnormally high confidence, making them harder to detect and mitigate.
arXiv Detail & Related papers (2025-03-09T17:59:16Z) - Trust Me, I'm Wrong: LLMs Hallucinate with Certainty Despite Knowing the Answer [51.7407540261676]
We investigate a distinct type of hallucination, where a model can consistently answer a question correctly, but a seemingly trivial perturbation causes it to produce a hallucinated response with high certainty. This phenomenon is particularly concerning in high-stakes domains such as medicine or law, where model certainty is often used as a proxy for reliability. We show that such CHOKE examples are consistent across prompts, occur in different models and datasets, and are fundamentally distinct from other hallucinations.
arXiv Detail & Related papers (2025-02-18T15:46:31Z) - HalluEntity: Benchmarking and Understanding Entity-Level Hallucination Detection [16.27352940098609]
We propose a new dataset, HalluEntity, which annotates hallucination at the entity level. Based on the dataset, we evaluate uncertainty-based hallucination detection approaches across 17 modern LLMs. Our experimental results show that uncertainty estimation approaches focusing on individual token probabilities tend to over-predict hallucinations.
arXiv Detail & Related papers (2025-02-17T16:01:41Z) - Combating Multimodal LLM Hallucination via Bottom-Up Holistic Reasoning [151.4060202671114]
Multimodal large language models (MLLMs) have shown unprecedented capabilities in advancing vision-language tasks. This paper introduces a novel bottom-up reasoning framework to address hallucinations in MLLMs. Our framework systematically addresses potential issues in both visual and textual inputs by verifying and integrating perception-level information with cognition-level commonsense knowledge.
arXiv Detail & Related papers (2024-12-15T09:10:46Z) - Who Brings the Frisbee: Probing Hidden Hallucination Factors in Large Vision-Language Model via Causality Analysis [14.033320167387194]
A major challenge in the real-world application of LVLMs is hallucination, where the model generates non-existent visual elements, eroding user trust. We hypothesize that hidden factors, such as objects, contexts, and semantic foreground-background structures, induce hallucination. By analyzing the causality between images, text prompts, and network saliency, we systematically explore interventions to block these factors.
arXiv Detail & Related papers (2024-12-04T01:23:57Z) - ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models [65.12177400764506]
Large language models (LLMs) exhibit hallucinations in long-form question-answering tasks across various domains and wide applications. Current hallucination detection and mitigation datasets are limited in domain coverage and size. This paper introduces an iterative self-training framework that simultaneously and progressively scales up the hallucination annotation dataset.
arXiv Detail & Related papers (2024-07-05T17:56:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.