Epistemological Fault Lines Between Human and Artificial Intelligence
- URL: http://arxiv.org/abs/2512.19466v1
- Date: Mon, 22 Dec 2025 15:20:21 GMT
- Title: Epistemological Fault Lines Between Human and Artificial Intelligence
- Authors: Walter Quattrociocchi, Valerio Capraro, Matjaž Perc,
- Abstract summary: We show that the apparent alignment between human and machine outputs conceals a deeper structural mismatch in how judgments are produced. We argue that LLMs are not agents but pattern-completion systems, formally describable as walks on high-dimensional graphs of linguistic transitions.
- Score: 0.688204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) are widely described as artificial intelligence, yet their epistemic profile diverges sharply from human cognition. Here we show that the apparent alignment between human and machine outputs conceals a deeper structural mismatch in how judgments are produced. Tracing the historical shift from symbolic AI and information filtering systems to large-scale generative transformers, we argue that LLMs are not epistemic agents but stochastic pattern-completion systems, formally describable as walks on high-dimensional graphs of linguistic transitions rather than as systems that form beliefs or models of the world. By systematically mapping human and artificial epistemic pipelines, we identify seven epistemic fault lines: divergences in grounding, parsing, experience, motivation, causal reasoning, metacognition, and value. We call the resulting condition Epistemia: a structural situation in which linguistic plausibility substitutes for epistemic evaluation, producing the feeling of knowing without the labor of judgment. We conclude by outlining consequences for evaluation, governance, and epistemic literacy in societies increasingly organized around generative AI.
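To make the "walks on high-dimensional graphs of linguistic transitions" framing concrete, the sketch below builds a toy bigram transition graph from a few sentences and samples a walk over it. This is a deliberately simplified, low-dimensional analogue under assumed inputs (a toy corpus, frequency-weighted sampling over single-token states), not the authors' formalism: a real LLM conditions on learned representations of the whole context, not on the previous token alone.

```python
# Toy analogue of generation as a stochastic walk on a graph of
# linguistic transitions: each step samples a locally plausible
# successor, with no beliefs or world model consulted anywhere.
import random
from collections import defaultdict

def build_transition_graph(corpus: str) -> dict:
    """Map each token to the list of tokens observed to follow it."""
    graph = defaultdict(list)
    tokens = corpus.split()
    for current, nxt in zip(tokens, tokens[1:]):
        graph[current].append(nxt)
    return graph

def walk(graph: dict, start: str, steps: int, seed: int = 0) -> list:
    """Sample a walk; duplicates in the successor lists make the draw
    frequency-weighted, mirroring a maximum-likelihood bigram model."""
    rng = random.Random(seed)
    node, path = start, [start]
    for _ in range(steps):
        successors = graph.get(node)
        if not successors:  # dead end: no observed continuation
            break
        node = rng.choice(successors)
        path.append(node)
    return path

corpus = ("the model predicts the next word the model completes "
          "the pattern the word follows the word")
graph = build_transition_graph(corpus)
print(" ".join(walk(graph, "the", steps=8)))
```

The structural point survives the simplification: at no step does the walk evaluate truth or form a belief; it only continues a pattern, which is exactly the substitution of plausibility for judgment the abstract names Epistemia.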
Related papers
- The AI Cognitive Trojan Horse: How Large Language Models May Bypass Human Epistemic Vigilance [0.0]
Large language model (LLM)-based conversational AI systems present a challenge to human cognition. This paper proposes that a significant epistemic risk from conversational AI may lie not in inaccuracy or intentional deception, but in something more fundamental.
arXiv Detail & Related papers (2026-01-11T22:28:56Z)
- Plausibility as Failure: How LLMs and Humans Co-Construct Epistemic Error [0.0]
This study examines how different forms of epistemic failure emerge, are masked, and are tolerated in human-AI interaction. Evaluators frequently conflated criteria such as correctness, relevance, bias, groundedness, and consistency, indicating that human judgment collapses analytical distinctions into intuitions shaped by form and fluency. The study provides implications for LLM assessment, digital literacy, and the design of trustworthy human-AI communication.
arXiv Detail & Related papers (2025-12-18T16:45:29Z)
- Embodied AI: From LLMs to World Models [65.68972714346909]
Embodied Artificial Intelligence (AI) is an intelligent system paradigm for achieving Artificial General Intelligence (AGI). Recent breakthroughs in Large Language Models (LLMs) and World Models (WMs) have drawn significant attention for embodied AI.
arXiv Detail & Related papers (2025-09-24T11:37:48Z)
- Noosemia: toward a Cognitive and Phenomenological Account of Intentionality Attribution in Human-Generative AI Interaction [4.022364531869169]
This paper introduces and formalizes Noosemia, a novel cognitive-phenomenological pattern emerging from human interaction with generative AI systems. We propose a multidisciplinary framework to explain how, under certain conditions, users attribute intentionality, agency, and even interiority to these systems.
arXiv Detail & Related papers (2025-08-04T17:10:08Z)
- Knowledge Conceptualization Impacts RAG Efficacy [0.0786430477112975]
We investigate the design of transferable and interpretable neurosymbolic AI systems. Specifically, we focus on a class of systems referred to as "Agentic Retrieval-Augmented Generation" systems.
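As a minimal illustration of the retrieval-augmented generation pattern this paper studies, the sketch below pairs naive word-overlap retrieval with a stubbed generation call. The documents, the overlap scoring, and the `generate` stub are illustrative assumptions, not the paper's agentic pipeline or its knowledge conceptualizations; real systems use dense embeddings and an actual model call.

```python
# Minimal RAG sketch: retrieve the best-matching documents, then
# condition a (stubbed) generator on them. Whitespace tokenization
# and overlap scoring are deliberately naive placeholders.
def score(query: str, doc: str) -> int:
    """Bag-of-words overlap between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    return f"[model response conditioned on]\n{prompt}"

docs = [
    "Knowledge graphs encode entities and typed relations",
    "Vector stores index documents by embedding similarity",
    "Symbolic rules constrain what an agent may infer",
]
question = "How do knowledge graphs represent relations"
context = "\n".join(retrieve(question, docs))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```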
arXiv Detail & Related papers (2025-07-12T20:10:26Z)
- A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure [0.0]
Epistemic injustice related to AI is a growing concern. In relation to machine learning models, injustice can have a diverse range of sources. I argue that this injustice amounts to the automation of 'epistemicide', the injustice done to agents in their capacity for collective sense-making.
arXiv Detail & Related papers (2025-04-10T07:54:47Z)
- Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the interaction between world knowledge and logical reasoning. We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations. We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
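A minimal sketch, under assumed toy rules, of the hybrid pattern this summary describes: a generative component proposes candidate conclusions and a symbolic layer accepts only those the stored facts entail. All names and rules here are illustrative; the paper's actual framework is centered on the ACT-R cognitive architecture, which this sketch does not model.

```python
# Toy neuro-symbolic loop: neural side proposes, symbolic side disposes.
# `propose` stands in for a generative model; the rule below is a
# hard-coded illustration, not a general inference engine.
FACTS = {"socrates is a man", "all men are mortal"}

def propose(query: str) -> list:
    """Stand-in for a generative model emitting candidate answers."""
    return ["socrates is mortal", "socrates is immortal"]

def symbolic_check(candidate: str) -> bool:
    """Accept 'X is mortal' only when the facts entail it."""
    if candidate.endswith(" is mortal"):
        subject = candidate.rsplit(" is ", 1)[0]
        return f"{subject} is a man" in FACTS and "all men are mortal" in FACTS
    return False

accepted = [c for c in propose("Is Socrates mortal?") if symbolic_check(c)]
print(accepted)  # ['socrates is mortal']
```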
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Navigating the Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models [74.07684768317705]
LMs are highly sensitive to markers of certainty in prompts, with accuracy varying by more than 80%.
We find that expressions of high certainty result in a decrease in accuracy compared to expressions of low certainty; similarly, factive verbs hurt performance, while evidentials benefit performance.
These associations may suggest that LM behavior is based on observed language use rather than truly reflecting uncertainty.
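A minimal sketch of the kind of probe these findings imply: prepend high- versus low-certainty markers to identical questions and compare accuracy across the two conditions. Here `ask_model` is a hypothetical stub standing in for a real model API call, and the marker phrasings are assumed rather than drawn from the paper.

```python
# Sketch of a certainty-marker probe: same questions, two prompt
# conditions differing only in expressed certainty. With the stub
# below both conditions score 0.5; a real model call is needed to
# observe the sensitivity the paper reports.
CERTAINTY_PREFIXES = {
    "high": "I am absolutely certain this is the right question to ask:",
    "low":  "I might be wrong, but perhaps consider:",
}

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; returns a fixed string
    so the sketch runs end to end."""
    return "paris"

def accuracy_by_marker(qa_pairs: list) -> dict:
    """Fraction of answers containing the gold string, per condition."""
    results = {}
    for condition, prefix in CERTAINTY_PREFIXES.items():
        correct = sum(
            gold.lower() in ask_model(f"{prefix} {q}").lower()
            for q, gold in qa_pairs
        )
        results[condition] = correct / len(qa_pairs)
    return results

pairs = [("What is the capital of France?", "Paris"),
         ("What is two plus two?", "four")]
print(accuracy_by_marker(pairs))  # {'high': 0.5, 'low': 0.5} with the stub
```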
arXiv Detail & Related papers (2023-02-26T23:46:29Z)
- Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans [75.15855405318855]
We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
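A minimal sketch of agreement stimuli in the spirit of that experiment: an embedded noun intervenes between the subject and the verb, so producing or judging the correct verb form requires tracking a long-distance dependency rather than agreeing with the nearest noun. The templates below are illustrative assumptions, not the paper's actual materials.

```python
# Generate grammatical/violation pairs where a distractor noun of
# matching or mismatching number sits between subject and verb.
SUBJECTS = {"singular": ("The key", "is"), "plural": ("The keys", "are")}
DISTRACTORS = {"singular": "the cabinet", "plural": "the cabinets"}

def stimuli():
    """Yield (sentence, label) pairs over all number combinations."""
    for subj, verb in SUBJECTS.values():
        wrong = "are" if verb == "is" else "is"
        for dist in DISTRACTORS.values():
            frame = f"{subj} next to {dist}"
            yield (f"{frame} {verb} rusty.", "grammatical")
            yield (f"{frame} {wrong} rusty.", "violation")

for sentence, label in stimuli():
    print(f"{label:12s}{sentence}")
```

Mismatch cases such as "The key next to the cabinets ..." are the diagnostic ones: a system agreeing with the nearest noun will accept the violation.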
arXiv Detail & Related papers (2020-06-19T12:00:05Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.