Hallucinating with AI: AI Psychosis as Distributed Delusions
- URL: http://arxiv.org/abs/2508.19588v1
- Date: Wed, 27 Aug 2025 05:51:19 GMT
- Title: Hallucinating with AI: AI Psychosis as Distributed Delusions
- Authors: Lucy Osler
- Abstract summary: Generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok create false outputs. In popular terminology, these have been dubbed AI hallucinations. I argue that when viewed through the lens of distributed cognition theory, we can better see the ways in which inaccurate beliefs, distorted memories and self-narratives, and delusional thinking can emerge.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is much discussion of the false outputs that generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok create. In popular terminology, these have been dubbed AI hallucinations. However, deeming these AI outputs hallucinations is controversial, with many claiming this is a metaphorical misnomer. Nevertheless, in this paper, I argue that when viewed through the lens of distributed cognition theory, we can better see the dynamic and troubling ways in which inaccurate beliefs, distorted memories and self-narratives, and delusional thinking can emerge through human-AI interactions, examples of which are popularly being referred to as cases of AI psychosis. In such cases, I suggest we move away from thinking about how an AI system might hallucinate at us, by generating false outputs, to thinking about how, when we routinely rely on generative AI to help us think, remember, and narrate, we can come to hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but it can also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives, such as in the case of Jaswant Singh Chail. I also examine how the conversational style of chatbots can lead them to play a dual function, both as a cognitive artefact and a quasi-Other with whom we co-construct our beliefs, narratives, and realities. It is this dual function, I suggest, that makes generative AI an unusual, and particularly seductive, case of distributed cognition.
Related papers
- Towards a Theory of AI Personhood [1.6317061277457001]
We outline necessary conditions for AI personhood, focusing on agency, theory-of-mind, and self-awareness. If AI systems can be considered persons, then typical framings of AI alignment may be incomplete.
arXiv Detail & Related papers (2025-01-23T10:31:26Z)
- "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent, that is, the same task expressed in different ways.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies examining how people react to and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- Social Evolution of Published Text and The Emergence of Artificial Intelligence Through Large Language Models and The Problem of Toxicity and Bias [0.0]
We provide a bird's-eye view of the rapid developments in AI and deep learning that have led to the emergence of AI in Large Language Models.
We point out the toxicity, bias, memorization, sycophancy, and logical inconsistencies that exist, as a warning to the overly optimistic.
arXiv Detail & Related papers (2024-02-11T11:23:28Z)
- The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- Epistemic considerations when AI answers questions for us [0.0]
We argue that careless reliance on AI to answer our questions and to judge our output is a violation of Grice's Maxim of Quality and Lemoine's legal Maxim of Innocence.
What is missing in the focus on the outputs and results of AI-generated and AI-evaluated content is, apart from paying proper tribute, the demand to follow a person's thought process.
arXiv Detail & Related papers (2023-04-23T08:26:42Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Crossing the Tepper Line: An Emerging Ontology for Describing the Dynamic Sociality of Embodied AI [0.9176056742068814]
We show how embodied AI can manifest as "socially embodied AI".
We define this as the state that embodied AI "circumstantially" takes on within interactive contexts when perceived as both social and agentic by people.
arXiv Detail & Related papers (2021-03-15T00:45:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.