Incentives or Ontology? A Structural Rebuttal to OpenAI's Hallucination Thesis
- URL: http://arxiv.org/abs/2512.14801v1
- Date: Tue, 16 Dec 2025 17:39:45 GMT
- Title: Incentives or Ontology? A Structural Rebuttal to OpenAI's Hallucination Thesis
- Authors: Richard Ackermann, Simeon Emanuilov
- Abstract summary: We argue that hallucination is not an optimization failure but an architectural inevitability of the transformer model. Our empirical results demonstrate that hallucination can only be eliminated through external truth-validation and abstention modules. We conclude that hallucination is a structural property of generative architectures.
- Score: 0.42970700836450487
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: OpenAI has recently argued that hallucinations in large language models result primarily from misaligned evaluation incentives that reward confident guessing rather than epistemic humility. On this view, hallucination is a contingent behavioral artifact, remediable through improved benchmarks and reward structures. In this paper, we challenge that interpretation. Drawing on previous work on structural hallucination and empirical experiments using a Licensing Oracle, we argue that hallucination is not an optimization failure but an architectural inevitability of the transformer model. Transformers do not represent the world; they model statistical associations among tokens. Their embedding spaces form a pseudo-ontology derived from linguistic co-occurrence rather than world-referential structure. At ontological boundary conditions - regions where training data is sparse or incoherent - the model necessarily interpolates fictional continuations in order to preserve coherence. No incentive mechanism can modify this structural dependence on pattern completion. Our empirical results demonstrate that hallucination can only be eliminated through external truth-validation and abstention modules, not through changes to incentives, prompting, or fine-tuning. The Licensing Oracle achieves perfect abstention precision across domains precisely because it supplies grounding that the transformer lacks. We conclude that hallucination is a structural property of generative architectures and that reliable AI requires hybrid systems that distinguish linguistic fluency from epistemic responsibility.
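The abstract does not specify the Licensing Oracle's internals; as a minimal sketch of the hybrid pattern it argues for — a generator gated by an external truth-validation module that abstains when claims cannot be grounded — something like the following illustrates the idea (the class names, the triple-store knowledge base, and the `extract_claims` hook are all assumptions, not the authors' implementation):

```python
# Hypothetical sketch of an external truth-validation ("licensing") gate.
# Names and logic are illustrative, not the paper's actual Licensing Oracle.
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    subject: str
    predicate: str
    obj: str

class LicensingOracle:
    """Licenses claims attested in an external knowledge base."""

    def __init__(self, knowledge_base: set):
        self.kb = knowledge_base  # e.g., a set of (subject, predicate, object) triples

    def license(self, claim: Claim) -> bool:
        # A claim is licensed only if it is attested in the KB;
        # the generator itself never self-validates.
        return (claim.subject, claim.predicate, claim.obj) in self.kb

def grounded_answer(generate, extract_claims, oracle: LicensingOracle, prompt: str) -> str:
    """generate: prompt -> draft text; extract_claims: text -> list[Claim]."""
    draft = generate(prompt)
    if all(oracle.license(c) for c in extract_claims(draft)):
        return draft
    # Structural abstention at an ontological boundary: withhold rather
    # than interpolate a fluent but ungrounded continuation.
    return "I don't know."

# Usage with a toy KB:
oracle = LicensingOracle({("Paris", "capital_of", "France")})
```

The design point matches the abstract's claim: abstention is enforced by a module outside the transformer, not by incentives, prompting, or fine-tuning of the generator.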
Related papers
- Distributional Semantics Tracing: A Framework for Explaining Hallucinations in Large Language Models [4.946483489399819]
Large Language Models (LLMs) are prone to hallucination, the generation of factually incorrect statements. This work investigates the intrinsic, architectural origins of this failure mode through three primary contributions.
arXiv Detail & Related papers (2025-10-07T16:40:31Z)
- Hallucination is Inevitable for LLMs with the Open World Assumption [10.473344768196908]
Large Language Models (LLMs) exhibit impressive linguistic competence but also produce inaccurate or fabricated outputs, often called "hallucinations". This paper reframes "hallucination" as a manifestation of the generalization problem.
arXiv Detail & Related papers (2025-09-29T13:38:44Z)
- Review of Hallucination Understanding in Large Language and Vision Models [65.29139004945712]
We present a framework for characterizing both image and text hallucinations across diverse applications. Our investigations reveal that hallucinations often stem from predictable patterns in data distributions and inherited biases. This survey provides a foundation for developing more robust and effective solutions to hallucinations in real-world generative AI systems.
arXiv Detail & Related papers (2025-09-26T09:23:08Z)
- How Large Language Models are Designed to Hallucinate [0.42970700836450487]
We argue that hallucination is a structural outcome of the transformer architecture. Our contribution is threefold: (1) a comparative account showing why existing explanations are insufficient; (2) a predictive taxonomy of hallucination linked to existential structures with proposed benchmarks; and (3) design directions toward "truth-constrained" architectures capable of withholding or deferring when disclosure is absent.
arXiv Detail & Related papers (2025-09-19T16:46:27Z)
- Auditing Meta-Cognitive Hallucinations in Reasoning Large Language Models [12.747507415841168]
We study the causality of hallucinations under constrained knowledge domains by auditing the Chain-of-Thought trajectory. Our analysis reveals that in long-CoT settings, RLLMs can iteratively reinforce biases and errors through flawed reflective reasoning. Surprisingly, even direct interventions at the origin of hallucinations often fail to reverse their effects, as reasoning chains exhibit 'chain disloyalty'.
arXiv Detail & Related papers (2025-05-19T14:11:09Z)
- HalluLens: LLM Hallucination Benchmark [49.170128733508335]
Large language models (LLMs) often generate responses that deviate from user input or training data, a phenomenon known as "hallucination". This paper introduces a comprehensive hallucination benchmark, incorporating both new extrinsic and existing intrinsic evaluation tasks.
arXiv Detail & Related papers (2025-04-24T13:40:27Z)
- Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations [82.42811602081692]
This paper introduces a subsequence association framework to systematically trace and understand hallucinations. The key insight is that hallucinations arise when dominant hallucinatory associations outweigh faithful ones. We propose a tracing algorithm that identifies causal subsequences by analyzing hallucination probabilities across randomized input contexts.
arXiv Detail & Related papers (2025-04-17T06:34:45Z)
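The tracing algorithm in the entry above is only sketched in its abstract; a rough, hypothetical rendering of the idea — scoring candidate input subsequences by the hallucination probability they induce across randomized contexts — might look like this (all names and the scoring loop are assumptions, not the paper's implementation):

```python
# Illustrative only: score subsequences by how strongly they drive a model
# toward a known hallucinated token, averaged over randomized contexts.
import random

def trace_causal_subsequences(model_prob, candidate_subseqs, filler_tokens,
                              hallucinated_token, n_samples=100, seed=0):
    """candidate_subseqs: iterable of token tuples (hashable);
    model_prob(context, token) -> P(next token == token | context)."""
    rng = random.Random(seed)
    scores = {}
    for subseq in candidate_subseqs:
        total = 0.0
        for _ in range(n_samples):
            # Embed the subsequence in an otherwise-random context.
            noise = [rng.choice(filler_tokens) for _ in range(8)]
            pos = rng.randrange(len(noise) + 1)
            context = noise[:pos] + list(subseq) + noise[pos:]
            total += model_prob(context, hallucinated_token)
        scores[subseq] = total / n_samples
    # High-scoring subsequences are candidates for the "dominant
    # hallucinatory associations" the paper describes.
    return sorted(scores.items(), key=lambda kv: -kv[1])
```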
- LLMs Will Always Hallucinate, and We Need to Live With This [1.3810901729134184]
This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems.
It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms.
arXiv Detail & Related papers (2024-09-09T16:01:58Z)
- Do Androids Know They're Only Dreaming of Electric Sheep? [45.513432353811474]
We design probes trained on the internal representations of a transformer language model to predict its hallucinatory behavior.
Our probes are narrowly trained and we find that they are sensitive to their training domain.
We find that probing is a feasible and efficient alternative to language model hallucination evaluation when model states are available.
arXiv Detail & Related papers (2023-12-28T18:59:50Z)
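The probing setup in the entry above is described only at a high level; a minimal sketch of the general technique — a linear probe over a model's hidden states predicting hallucination labels — could look like the following (dataset construction, layer choice, and labels are assumed, not taken from the paper):

```python
# Illustrative linear probe: predict hallucination from hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_hallucination_probe(hidden_states: np.ndarray, labels: np.ndarray):
    """hidden_states: (n_examples, d_model) vectors from a chosen layer;
    labels: 1 if the corresponding generation was hallucinated, else 0."""
    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, labels, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # In-domain accuracy only; the paper reports probes are sensitive to
    # their training domain, so out-of-domain evaluation matters in practice.
    return probe, probe.score(X_test, y_test)
```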
- Exposing Attention Glitches with Flip-Flop Language Modeling [55.0688535574859]
This work identifies and analyzes the phenomenon of attention glitches in large language models.
We introduce flip-flop language modeling (FFLM), a family of synthetic benchmarks designed to probe the extrapolative behavior of neural language models.
We find that Transformer FFLMs suffer from a long tail of sporadic reasoning errors, some of which we can eliminate using various regularization techniques.
arXiv Detail & Related papers (2023-06-01T17:44:35Z)
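The FFLM benchmark family is only named in the entry above; a simplified, hypothetical generator for flip-flop sequences — write ("w"), ignore ("i"), and read ("r") instructions over a one-bit memory, where each read must echo the last written bit — might look like this (token scheme simplified; not the paper's exact specification):

```python
# Illustrative flip-flop sequence generator for probing long-range recall.
import random

def generate_flip_flop_sequence(length: int, rng: random.Random) -> list:
    seq = ["w", str(rng.randint(0, 1))]  # always start with a write
    memory = seq[-1]
    for _ in range(length):
        op = rng.choice(["w", "i", "r"])
        if op == "r":
            seq += ["r", memory]         # a correct read echoes memory
        else:
            bit = str(rng.randint(0, 1))
            seq += [op, bit]             # "i" bits are distractors
            if op == "w":
                memory = bit
    return seq

rng = random.Random(0)
print(" ".join(generate_flip_flop_sequence(10, rng)))
```

A model that sporadically mispredicts the bit after "r" exhibits exactly the long-tail reasoning errors the entry reports.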
- Don't Say What You Don't Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search [54.286450484332505]
We analyze the connection between hallucinations and training data, and find evidence that models hallucinate because they train on target summaries that are unsupported by the source.
We present PINOCCHIO, a new decoding method that improves the consistency of a transformer-based abstractive summarizer by constraining beam search to avoid hallucinations.
arXiv Detail & Related papers (2022-03-16T07:13:52Z)
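PINOCCHIO's consistency checks are not detailed in the entry above; a minimal sketch of the general pattern — beam search that prunes candidate extensions failing a support check against the source — could look like this (the `is_supported` predicate stands in for the paper's actual checks; everything here is illustrative):

```python
# Illustrative consistency-constrained beam search.
import heapq

def constrained_beam_search(step_fn, is_supported, source, beam_size=4,
                            max_len=30, bos="<s>", eos="</s>"):
    """step_fn(prefix) -> list of (token, logprob) candidate extensions;
    is_supported(token, source) -> bool, a stand-in for PINOCCHIO's checks."""
    beams = [(0.0, [bos])]
    for _ in range(max_len):
        candidates = []
        for score, prefix in beams:
            if prefix[-1] == eos:
                candidates.append((score, prefix))
                continue
            for token, logprob in step_fn(prefix):
                # Prune extensions unsupported by the source document.
                if token != eos and not is_supported(token, source):
                    continue
                candidates.append((score + logprob, prefix + [token]))
        if not candidates:
            break  # everything was pruned; keep the previous beams
        beams = heapq.nlargest(beam_size, candidates, key=lambda b: b[0])
        if all(prefix[-1] == eos for _, prefix in beams):
            break
    return max(beams, key=lambda b: b[0])[1]
```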
- On Hallucination and Predictive Uncertainty in Conditional Language Generation [76.18783678114325]
Higher predictive uncertainty corresponds to a higher chance of hallucination.
Epistemic uncertainty is more indicative of hallucination than aleatoric or total uncertainties.
The proposed beam search variant achieves a better trade-off between performance on standard metrics and reduced hallucination.
arXiv Detail & Related papers (2021-03-28T00:32:27Z)
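The uncertainty decomposition behind the finding above can be illustrated with a standard deep-ensemble formulation; the sketch below is the textbook decomposition (total = aleatoric + epistemic), not the paper's method, and the example numbers are invented:

```python
# Decompose predictive uncertainty from an ensemble of next-token
# distributions; high epistemic uncertainty flags hallucination risk.
import numpy as np

def uncertainty_decomposition(member_probs: np.ndarray):
    """member_probs: (n_members, vocab) next-token distributions.
    Returns (total, aleatoric, epistemic) in nats."""
    mean_p = member_probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + 1e-12))             # H[E[p]]
    aleatoric = np.mean(
        [-np.sum(p * np.log(p + 1e-12)) for p in member_probs])  # E[H[p]]
    epistemic = total - aleatoric                                 # mutual information
    return total, aleatoric, epistemic

# Two ensemble members that disagree sharply -> high epistemic term,
# the stronger hallucination signal per the entry's finding.
probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])
t, a, e = uncertainty_decomposition(probs)
print(f"total={t:.3f} aleatoric={a:.3f} epistemic={e:.3f}")
```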
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.