Beyond Misinformation: A Conceptual Framework for Studying AI Hallucinations in (Science) Communication
- URL: http://arxiv.org/abs/2504.13777v1
- Date: Fri, 18 Apr 2025 16:26:02 GMT
- Title: Beyond Misinformation: A Conceptual Framework for Studying AI Hallucinations in (Science) Communication
- Authors: Anqi Shao
- Abstract summary: This paper proposes a conceptual framework for understanding AI hallucinations as a distinct form of misinformation. I argue that these AI hallucinations should not be treated merely as technical failures but as communication phenomena with social consequences.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes a conceptual framework for understanding AI hallucinations as a distinct form of misinformation. While misinformation scholarship has traditionally focused on human intent, generative AI systems now produce false yet plausible outputs absent of such intent. I argue that these AI hallucinations should not be treated merely as technical failures but as communication phenomena with social consequences. Drawing on a supply-and-demand model and the concept of distributed agency, the framework outlines how hallucinations differ from human-generated misinformation in production, perception, and institutional response. I conclude by outlining a research agenda for communication scholars to investigate the emergence, dissemination, and audience reception of hallucinated content, with attention to macro (institutional), meso (group), and micro (individual) levels. This work urges communication researchers to rethink the boundaries of misinformation theory in light of probabilistic, non-human actors increasingly embedded in knowledge production.
Related papers
- AI Hallucination from Students' Perspective: A Thematic Analysis [0.6553031877558699]
Hallucinations pose a growing threat to learning as students increasingly rely on large language models. This study explores how students experience hallucinations, their detection strategies, and their mental models of why hallucinations occur. Findings illuminate vulnerabilities in AI-supported learning and highlight the need for explicit instruction in verification protocols.
arXiv Detail & Related papers (2026-01-11T02:38:43Z)
- Hallucination is Inevitable for LLMs with the Open World Assumption [10.473344768196908]
Large Language Models (LLMs) exhibit impressive linguistic competence but also produce inaccurate or fabricated outputs, often called "hallucinations". This paper reframes "hallucination" as a manifestation of the generalization problem.
arXiv Detail & Related papers (2025-09-29T13:38:44Z)
- Review of Hallucination Understanding in Large Language and Vision Models [65.29139004945712]
We present a framework for characterizing both image and text hallucinations across diverse applications. Our investigations reveal that hallucinations often stem from predictable patterns in data distributions and inherited biases. This survey provides a foundation for developing more robust and effective solutions to hallucinations in real-world generative AI systems.
arXiv Detail & Related papers (2025-09-26T09:23:08Z)
- The Way We Prompt: Conceptual Blending, Neural Dynamics, and Prompt-Induced Transitions in LLMs [0.0]
Large language models (LLMs) exhibit behaviors that often evoke a sense of personality and intelligence. This work proposes prompt engineering as a scientific method for probing the deep structure of meaning itself.
arXiv Detail & Related papers (2025-05-16T07:37:21Z)
- Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations [82.42811602081692]
This paper introduces a subsequence association framework to systematically trace and understand hallucinations. The key insight is that hallucinations arise when dominant hallucinatory associations outweigh faithful ones. We propose a tracing algorithm that identifies causal subsequences by analyzing hallucination probabilities across randomized input contexts.
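The summary only names the idea; as an illustration (not the authors' implementation), a minimal sketch of context-randomization tracing is shown below. It estimates how strongly one input subsequence drives a hallucination by comparing hallucination rates in randomized contexts that do and do not contain it. The `generate`, `is_hallucinated`, and `filler_pool` names are hypothetical placeholders.

```python
import random
from typing import Callable, List

def association_score(
    subsequence: str,
    filler_pool: List[str],
    generate: Callable[[str], str],          # hypothetical: prompt -> model output
    is_hallucinated: Callable[[str], bool],  # hypothetical: checks output against ground truth
    n_trials: int = 50,
    context_len: int = 5,
) -> float:
    """Estimate how strongly `subsequence` is associated with hallucination.

    Compares hallucination frequency when the subsequence is present in a
    randomized context versus when it is absent; a large positive gap
    suggests a causal association.
    """
    def sample_context(include: bool) -> str:
        parts = random.sample(filler_pool, k=context_len)
        if include:
            parts.insert(random.randrange(len(parts) + 1), subsequence)
        return " ".join(parts)

    def hallucination_rate(include: bool) -> float:
        hits = sum(
            is_hallucinated(generate(sample_context(include)))
            for _ in range(n_trials)
        )
        return hits / n_trials

    return hallucination_rate(include=True) - hallucination_rate(include=False)
```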
arXiv Detail & Related papers (2025-04-17T06:34:45Z)
- Hallucination, reliability, and the role of generative AI in science [0.05657375260432172]
Some arguments suggest that hallucinations are an inevitable consequence of the mechanisms underlying generative inference. I argue that although corrosive hallucinations do pose a threat to scientific reliability, they are not inevitable.
arXiv Detail & Related papers (2025-04-11T13:38:56Z)
- Medical Hallucinations in Foundation Models and Their Impact on Healthcare [53.97060824532454]
Foundation Models that are capable of processing and generating multi-modal data have transformed AI's role in medicine. We define medical hallucination as any instance in which a model generates misleading medical content. Our results reveal that inference techniques such as Chain-of-Thought (CoT) and Search Augmented Generation can effectively reduce hallucination rates. These findings underscore the ethical and practical imperative for robust detection and mitigation strategies.
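As a minimal sketch of how the two mitigation ideas named above (search-augmented grounding plus chain-of-thought prompting) might be wired together, assuming hypothetical `search` and `generate` callables rather than the paper's actual pipeline:

```python
from typing import Callable, List

def grounded_answer(
    question: str,
    search: Callable[[str], List[str]],  # hypothetical: returns relevant passages
    generate: Callable[[str], str],      # hypothetical: LLM completion function
    k: int = 3,
) -> str:
    """Answer a question using retrieved evidence and explicit step-by-step reasoning.

    Grounding the prompt in retrieved passages and requesting reasoning before
    the final answer are the two hallucination-mitigation ideas cited above.
    """
    passages = search(question)[:k]
    evidence = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Use ONLY the evidence below. If it is insufficient, say so.\n"
        f"Evidence:\n{evidence}\n\n"
        f"Question: {question}\n"
        "Think step by step, cite evidence numbers, then give a final answer."
    )
    return generate(prompt)
```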
arXiv Detail & Related papers (2025-02-26T02:30:44Z)
- The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination [85.18584652829799]
We introduce a novel framework to quantify factual hallucinations by modeling knowledge overshadowing. We propose a new decoding strategy, CoDa, to mitigate hallucinations, which notably enhances model factuality on Overshadow (27.9%), MemoTrap (13.1%), and NQ-Swap (18.3%).
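The summary does not spell out how CoDa works. Purely as an illustration of the contrastive-decoding family it belongs to (not the paper's actual algorithm), the sketch below down-weights tokens whose probability is driven by an "overshadowing" context rather than by the full prompt; all names and the penalty form are assumptions.

```python
import numpy as np

def contrastive_next_token(
    logits_full: np.ndarray,        # next-token logits given the full prompt
    logits_overshadow: np.ndarray,  # logits given only the dominant (overshadowing) context
    alpha: float = 1.0,
) -> int:
    """Pick the next token while penalizing what the overshadowing context alone predicts.

    Tokens favored mainly by a dominant, over-represented association are demoted;
    tokens supported by the full prompt survive. This is a generic contrastive-decoding
    illustration, not the CoDa method itself.
    """
    log_p_full = logits_full - np.logaddexp.reduce(logits_full)
    log_p_over = logits_overshadow - np.logaddexp.reduce(logits_overshadow)
    adjusted = log_p_full - alpha * log_p_over
    return int(np.argmax(adjusted))
```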
arXiv Detail & Related papers (2025-02-22T08:36:06Z)
- Who Brings the Frisbee: Probing Hidden Hallucination Factors in Large Vision-Language Model via Causality Analysis [14.033320167387194]
A major challenge in the real-world application of large vision-language models (LVLMs) is hallucination, where LVLMs generate non-existent visual elements, eroding user trust. We hypothesize that hidden factors, such as objects, contexts, and semantic foreground-background structures, induce hallucination. By analyzing the causality between images, text prompts, and network saliency, we systematically explore interventions to block these factors.
arXiv Detail & Related papers (2024-12-04T01:23:57Z)
- LLMs Will Always Hallucinate, and We Need to Live With This [1.3810901729134184]
This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems.
It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms.
arXiv Detail & Related papers (2024-09-09T16:01:58Z)
- A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation [51.53917938874146]
We propose a possible solution for alleviating hallucination in knowledge-grounded dialogue (KGD) by exploiting the dialogue-knowledge interaction.
Experimental results of our example implementation show that this method can reduce hallucination without disrupting other dialogue performance.
arXiv Detail & Related papers (2024-04-04T14:45:26Z)
- Hallucinations in Neural Automatic Speech Recognition: Identifying Errors and Hallucinatory Models [11.492702369437785]
Hallucinations are semantically unrelated to the source utterance, yet still fluent and coherent.
We show that commonly used metrics, such as word error rates, cannot differentiate between hallucinatory and non-hallucinatory models.
We devise a framework for identifying hallucinations by analysing their semantic connection with the ground truth and their fluency.
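As a minimal sketch of the two signals named above (semantic connection to the ground truth and fluency), the snippet below flags an ASR output as hallucinatory when it is semantically unrelated to the reference yet still fluent; the `embed` and `fluency` functions and both thresholds are hypothetical placeholders, not the paper's implementation.

```python
from typing import Callable
import numpy as np

def flag_hallucination(
    hypothesis: str,
    reference: str,
    embed: Callable[[str], np.ndarray],  # hypothetical sentence-embedding function
    fluency: Callable[[str], float],     # hypothetical fluency score (e.g., from an LM)
    sim_threshold: float = 0.4,
    fluency_threshold: float = 0.7,
) -> bool:
    """Flag an ASR output as hallucinatory rather than merely erroneous.

    Hallucinations are semantically unrelated to the ground truth (low embedding
    similarity) yet still fluent (high fluency score). Thresholds are illustrative.
    """
    h, r = embed(hypothesis), embed(reference)
    similarity = float(np.dot(h, r) / (np.linalg.norm(h) * np.linalg.norm(r)))
    return similarity < sim_threshold and fluency(hypothesis) > fluency_threshold
```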
arXiv Detail & Related papers (2024-01-03T06:56:56Z)
- Towards Mitigating Hallucination in Large Language Models via Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks, including question answering (QA).
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
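As a minimal illustration of a self-reflection loop for generative QA (the prompts, stopping rule, and `generate` callable here are assumptions, not the paper's exact protocol):

```python
from typing import Callable

def answer_with_self_reflection(
    question: str,
    generate: Callable[[str], str],  # hypothetical LLM completion function
    max_rounds: int = 3,
) -> str:
    """Generate an answer, then repeatedly ask the model to critique and revise it.

    The loop stops when the critique pass reports no unsupported claims or the
    round budget is exhausted.
    """
    answer = generate(f"Question: {question}\nAnswer:")
    for _ in range(max_rounds):
        critique = generate(
            f"Question: {question}\nDraft answer: {answer}\n"
            "List any unsupported or likely-fabricated claims; reply 'OK' if none."
        )
        if critique.strip().upper() == "OK":
            break
        answer = generate(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nRewrite the answer, fixing only the flagged issues."
        )
    return answer
```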
arXiv Detail & Related papers (2023-10-10T03:05:44Z)
- On Computational Mechanisms for Shared Intentionality, and Speculation on Rationality and Consciousness [0.0]
A singular attribute of humankind is our ability to undertake novel, cooperative behavior, or teamwork.
This requires that we can communicate goals, plans, and ideas between the brains of individuals to create shared intentionality.
I derive necessary characteristics of basic mechanisms to enable shared intentionality between prelinguistic computational agents.
arXiv Detail & Related papers (2023-06-03T21:31:38Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.