Two Pathways to Truthfulness: On the Intrinsic Encoding of LLM Hallucinations
- URL: http://arxiv.org/abs/2601.07422v1
- Date: Mon, 12 Jan 2026 11:10:43 GMT
- Title: Two Pathways to Truthfulness: On the Intrinsic Encoding of LLM Hallucinations
- Authors: Wen Luo, Guangyue Peng, Wei Li, Shaohang Wei, Feifan Song, Liang Wang, Nan Yang, Xingxing Zhang, Jing Jin, Furu Wei, Houfeng Wang
- Abstract summary: Large language models (LLMs) frequently generate hallucinations. Previous work shows that their internal states encode rich signals of truthfulness. This paper shows that truthfulness cues arise from two distinct information pathways.
- Score: 70.43616821802249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their impressive capabilities, large language models (LLMs) frequently generate hallucinations. Previous work shows that their internal states encode rich signals of truthfulness, yet the origins and mechanisms of these signals remain unclear. In this paper, we demonstrate that truthfulness cues arise from two distinct information pathways: (1) a Question-Anchored pathway that depends on question-answer information flow, and (2) an Answer-Anchored pathway that derives self-contained evidence from the generated answer itself. First, we validate and disentangle these pathways through attention knockout and token patching. We then uncover notable and intriguing properties of these two mechanisms. Further experiments reveal that (1) the two mechanisms are closely associated with LLM knowledge boundaries; and (2) internal representations are aware of their distinctions. Finally, building on these findings, we propose two applications that enhance hallucination detection performance. Overall, our work provides new insight into how LLMs internally encode truthfulness, offering directions for more reliable and self-aware generative systems.
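The abstract's claim that internal states encode truthfulness signals is typically tested with a linear probe trained on hidden states. The sketch below is a minimal, self-contained illustration of that probing setup: the hidden states are synthetic random vectors shifted along a hypothetical "truthfulness direction", and the dimension, learning rate, and iteration count are illustrative choices, not values from the paper. In practice the features would be residual-stream activations extracted from a real model at the answer tokens.

```python
# Minimal sketch of a truthfulness probe over hidden states.
# All data here is synthetic; a real experiment would extract
# activations from an LLM for (question, answer) pairs labeled
# truthful vs. hallucinated.
import numpy as np

rng = np.random.default_rng(0)
d = 16    # toy hidden-state dimension
n = 200   # number of labeled examples

# Hypothetical assumption: truthful answers shift representations
# along a fixed direction in activation space.
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)  # 1 = truthful, 0 = hallucinated
hidden_states = rng.normal(size=(n, d)) + np.outer(labels * 2.0 - 1.0, direction)

# Logistic-regression probe trained with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = hidden_states @ w + b
    p = 1.0 / (1.0 + np.exp(-z))          # predicted P(truthful)
    w -= 0.5 * (hidden_states.T @ (p - labels) / n)
    b -= 0.5 * float(np.mean(p - labels))

preds = (hidden_states @ w + b) > 0
accuracy = float(np.mean(preds == labels))
print(f"probe accuracy: {accuracy:.2f}")
```

On this synthetic, well-separated data the probe recovers the labels almost perfectly; the interesting question the paper asks is *where* such separability comes from (question-anchored flow vs. answer-anchored evidence), not merely whether a probe works.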
Related papers
- HACK: Hallucinations Along Certainty and Knowledge Axes [66.66625343090743]
We propose a framework for categorizing hallucinations along two axes: knowledge and certainty. We identify a particularly concerning subset of hallucinations where models hallucinate with certainty despite having the correct knowledge internally.
arXiv Detail & Related papers (2025-10-28T09:34:31Z)
- Large Language Models Do NOT Really Know What They Don't Know [37.641827402866845]
Recent work suggests that large language models (LLMs) encode factuality signals in their internal representations. LLMs can also produce factual errors by relying on shortcuts or spurious associations.
arXiv Detail & Related papers (2025-10-10T06:09:04Z)
- Correcting Hallucinations in News Summaries: Exploration of Self-Correcting LLM Methods with External Knowledge [5.065947993017158]
Large language models (LLMs) have shown remarkable capabilities to generate coherent text. They suffer from the issue of hallucinations -- factually inaccurate statements. We investigate two state-of-the-art self-correcting systems by applying them to correct hallucinated summaries using evidence from three search engines.
arXiv Detail & Related papers (2025-06-24T13:20:31Z)
- Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models [0.0]
Hallucinations in large language models (LLMs) present a growing challenge across real-world applications. We propose a prompt-based framework to systematically trigger and quantify hallucination.
arXiv Detail & Related papers (2025-05-01T14:33:47Z)
- HalluLens: LLM Hallucination Benchmark [49.170128733508335]
Large language models (LLMs) often generate responses that deviate from user input or training data, a phenomenon known as "hallucination". This paper introduces a comprehensive hallucination benchmark, incorporating both new extrinsic and existing intrinsic evaluation tasks.
arXiv Detail & Related papers (2025-04-24T13:40:27Z)
- LLM Internal States Reveal Hallucination Risk Faced With a Query [62.29558761326031]
Humans have a self-awareness process that allows us to recognize what we don't know when faced with queries.
This paper investigates whether Large Language Models can estimate their own hallucination risk before response generation.
Using a probing estimator that leverages LLM self-assessment, we achieve an average hallucination estimation accuracy of 84.32% at run time.
arXiv Detail & Related papers (2024-07-03T17:08:52Z)
- KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking [55.2155025063668]
KnowHalu is a novel approach for detecting hallucinations in text generated by large language models (LLMs).
It uses step-wise reasoning, multi-formulation query, multi-form knowledge for factual checking, and fusion-based detection mechanism.
Our evaluations demonstrate that KnowHalu significantly outperforms SOTA baselines in detecting hallucinations across diverse tasks.
arXiv Detail & Related papers (2024-04-03T02:52:07Z)
- Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations [42.46721214112836]
State-of-the-art language models (LMs) sometimes generate non-factual hallucinations that misalign with world knowledge.
We create diagnostic datasets with subject-relation queries and adapt interpretability methods to trace hallucinations through internal model representations.
arXiv Detail & Related papers (2024-03-27T00:23:03Z)
- Rowen: Adaptive Retrieval-Augmented Generation for Hallucination Mitigation in LLMs [88.75700174889538]
Hallucinations present a significant challenge for large language models (LLMs). Generating factual content from parametric knowledge alone is constrained by the limited knowledge of LLMs. We present Rowen, a novel framework that enhances LLMs with an adaptive retrieval augmentation process tailored to address hallucinated outputs.
arXiv Detail & Related papers (2024-02-16T11:55:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.