DHI: Leveraging Diverse Hallucination Induction for Enhanced Contrastive Factuality Control in Large Language Models
- URL: http://arxiv.org/abs/2601.01156v1
- Date: Sat, 03 Jan 2026 10:55:41 GMT
- Title: DHI: Leveraging Diverse Hallucination Induction for Enhanced Contrastive Factuality Control in Large Language Models
- Authors: Jiani Guo, Xiangke Zeng, Jie Wu, Zuchao Li
- Abstract summary: Large language models (LLMs) frequently produce inaccurate or fabricated information, known as "hallucinations." We propose DHI (Diverse Hallucination Induction), a novel training framework that generates a broader range of hallucinations without relying on pre-annotated data. DHI achieves significant performance gains over other contrastive decoding-based approaches across multiple hallucination benchmarks.
- Score: 33.2779808039684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) frequently produce inaccurate or fabricated information, known as "hallucinations," which compromises their reliability. Existing approaches often train an "Evil LLM" to deliberately generate hallucinations on curated datasets, using these induced hallucinations to guide contrastive decoding against a reliable "positive model" for hallucination mitigation. However, this strategy is limited by the narrow diversity of hallucinations induced, as Evil LLMs trained on specific error types tend to reproduce only these particular patterns, thereby restricting their overall effectiveness. To address these limitations, we propose DHI (Diverse Hallucination Induction), a novel training framework that enables the Evil LLM to generate a broader range of hallucination types without relying on pre-annotated hallucination data. DHI employs a modified loss function that down-weights the generation of specific factually correct tokens, encouraging the Evil LLM to produce diverse hallucinations at targeted positions while maintaining overall factual content. Additionally, we introduce a causal attention masking adaptation to reduce the impact of this penalization on the generation of subsequent tokens. During inference, we apply an adaptive rationality constraint that restricts contrastive decoding to tokens where the positive model exhibits high confidence, thereby avoiding unnecessary penalties on factually correct tokens. Extensive empirical results show that DHI achieves significant performance gains over other contrastive decoding-based approaches across multiple hallucination benchmarks.
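To make the training-side idea concrete, here is a minimal sketch (not the authors' released code) of a token-level cross-entropy loss that down-weights selected factually correct targets, as the abstract describes. The function name, tensor shapes, and the `penalty_weight` value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def diverse_hallucination_loss(logits, targets, factual_mask, penalty_weight=0.1):
    """Cross-entropy in which selected factually correct targets are
    down-weighted, nudging the Evil LLM to hallucinate at those positions
    while all other positions keep full supervision.

    logits:         (seq_len, vocab_size) Evil-LLM next-token logits
    targets:        (seq_len,) gold next-token ids
    factual_mask:   (seq_len,) bool; True marks the penalized factual tokens
    penalty_weight: down-weighting factor < 1 (hypothetical value)
    """
    per_token = F.cross_entropy(logits, targets, reduction="none")  # (seq_len,)
    weights = torch.where(
        factual_mask,
        torch.full_like(per_token, penalty_weight),
        torch.ones_like(per_token),
    )
    return (weights * per_token).mean()
```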
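Similarly, a hedged sketch of the inference side: contrastive decoding of a reliable positive model against the Evil LLM, restricted to tokens where the positive model is confident. The `(1 + alpha) * pos - alpha * evil` contrast and the `tau`-scaled cutoff follow the common contrastive-decoding formulation; the paper's exact adaptive rationality constraint may differ.

```python
import torch

def dhi_contrastive_step(pos_logits, evil_logits, alpha=1.0, tau=0.1):
    """One greedy decoding step contrasting the positive model against the
    hallucination-prone Evil LLM over a shared vocabulary.

    pos_logits, evil_logits: (vocab_size,) logits for the next token
    alpha, tau:              contrast strength and confidence cutoff
                             (illustrative values)
    """
    pos_probs = torch.softmax(pos_logits, dim=-1)
    # Adaptive rationality constraint: keep only tokens whose probability
    # under the positive model clears a fraction of its top probability,
    # so factually correct but low-margin tokens are not penalized.
    plausible = pos_probs >= tau * pos_probs.max()
    scores = (1.0 + alpha) * pos_logits - alpha * evil_logits
    scores = scores.masked_fill(~plausible, float("-inf"))
    return torch.argmax(scores)
```

Masking implausible tokens before applying the contrast keeps the penalty on the Evil LLM from promoting tokens the positive model itself considers unlikely.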
Related papers
- HIME: Mitigating Object Hallucinations in LVLMs via Hallucination Insensitivity Model Editing [6.021803204524807]
Large Vision-Language Models (LVLMs) have demonstrated impressive multimodal understanding capabilities. LVLMs are prone to object hallucination, where models describe non-existent objects or attribute incorrect factual information. We propose Hallucination Insensitivity Model Editing (HIME), a layer-adaptive weight editing approach that selectively modifies latent features to suppress hallucinations.
arXiv Detail & Related papers (2026-02-21T04:16:17Z) - PruneHal: Reducing Hallucinations in Multi-modal Large Language Models through Adaptive KV Cache Pruning [87.35309934860938]
Hallucinations in multi-modal large language models (MLLMs) are strongly associated with insufficient attention allocated to visual tokens. We propose PruneHal, a training-free, simple yet effective method that leverages adaptive KV cache pruning to enhance the model's focus on critical visual information.
arXiv Detail & Related papers (2025-10-22T02:41:07Z) - Exposing Hallucinations To Suppress Them: VLMs Representation Editing With Generative Anchors [8.089908150148554]
Multimodal large language models (MLLMs) have achieved remarkable success across diverse vision-language tasks. MLLMs are highly susceptible to hallucinations, producing content that is fluent but inconsistent with visual evidence. We propose a training-free, self-supervised method for hallucination mitigation.
arXiv Detail & Related papers (2025-09-26T07:24:28Z) - Mitigating Hallucinations in Large Vision-Language Models by Self-Injecting Hallucinations [73.37711261605271]
Hallucination mitigation methods are mainly based on preference alignment and require external human annotations or auxiliary models for preference data collection. We propose Autonomous Preference Alignment via Self-Injection (APASI), a novel and generalizable method that mitigates hallucinations without external dependencies. APASI leverages the target LVLM to self-inject hallucinations into a generated response, creating a pair of responses with varying preference levels.
arXiv Detail & Related papers (2025-09-14T14:26:53Z) - HalluLens: LLM Hallucination Benchmark [49.170128733508335]
Large language models (LLMs) often generate responses that deviate from user input or training data, a phenomenon known as "hallucination." This paper introduces a comprehensive hallucination benchmark, incorporating both new extrinsic and existing intrinsic evaluation tasks.
arXiv Detail & Related papers (2025-04-24T13:40:27Z) - Generate, but Verify: Reducing Hallucination in Vision-Language Models with Retrospective Resampling [78.78822033285938]
Vision-Language Models (VLMs) excel at visual understanding but often suffer from visual hallucinations. In this work, we introduce REVERSE, a unified framework that integrates hallucination-aware training with on-the-fly self-verification.
arXiv Detail & Related papers (2025-04-17T17:59:22Z) - Decoupling Contrastive Decoding: Robust Hallucination Mitigation in Multimodal Large Language Models [23.846371966667927]
Multimodal large language models (MLLMs) generate outputs misaligned with obvious visual or factual evidence. We propose a novel framework: Decoupling Contrastive Decoding (DCD). DCD decouples the learning of positive and negative samples in preference datasets, and trains separate positive and negative image projections within the MLLM.
arXiv Detail & Related papers (2025-04-09T02:59:18Z) - Alleviating Hallucinations in Large Vision-Language Models through Hallucination-Induced Optimization [123.54980913741828]
Large Visual Language Models (LVLMs) have demonstrated exceptional abilities in understanding multimodal data. They invariably suffer from hallucinations, leading to a disconnect between the generated text and the corresponding images. Almost all current visual contrastive decoding methods attempt to mitigate these hallucinations by introducing visual uncertainty information. However, they struggle to precisely induce the hallucinatory tokens, which severely limits their effectiveness in mitigating hallucinations.
arXiv Detail & Related papers (2024-05-24T08:46:31Z) - Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding [25.489832294197797]
This paper introduces the Instruction Contrastive Decoding (ICD) method, a novel approach designed to reduce hallucinations during LVLM inference.
Our method is inspired by our observation that what we call disturbance instructions significantly exacerbate hallucinations in multimodal fusion modules.
arXiv Detail & Related papers (2024-03-27T16:04:47Z) - Alleviating Hallucinations of Large Language Models through Induced Hallucinations [67.35512483340837]
Large language models (LLMs) have been observed to generate responses that include inaccurate or fabricated information.
We propose a simple Induce-then-Contrast Decoding (ICD) strategy to alleviate hallucinations.
arXiv Detail & Related papers (2023-12-25T12:32:49Z)