Noosemia: toward a Cognitive and Phenomenological Account of Intentionality Attribution in Human-Generative AI Interaction
- URL: http://arxiv.org/abs/2508.02622v1
- Date: Mon, 04 Aug 2025 17:10:08 GMT
- Title: Noosemia: toward a Cognitive and Phenomenological Account of Intentionality Attribution in Human-Generative AI Interaction
- Authors: Enrico De Santis, Antonello Rizzi
- Abstract summary: This paper introduces and formalizes Noosemia, a novel cognitive-phenomenological phenomenon emerging from human interaction with generative AI systems. We propose a multidisciplinary framework to explain how, under certain conditions, users attribute intentionality, agency, and even interiority to these systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces and formalizes Noosemia, a novel cognitive-phenomenological phenomenon emerging from human interaction with generative AI systems, particularly those enabling dialogic or multimodal exchanges. We propose a multidisciplinary framework to explain how, under certain conditions, users attribute intentionality, agency, and even interiority to these systems - a process grounded not in physical resemblance, but in linguistic performance, epistemic opacity, and emergent technological complexity. By linking an LLM declination of meaning holism to our technical notion of the LLM Contextual Cognitive Field, we clarify how LLMs construct meaning relationally and how coherence and a simulacrum of agency arise at the human-AI interface. The analysis situates noosemia alongside pareidolia, animism, the intentional stance and the uncanny valley, distinguishing its unique characteristics. We also introduce a-noosemia to describe the phenomenological withdrawal of such projections. The paper concludes with reflections on the broader philosophical, epistemological, and social implications of noosemic dynamics and directions for future research.
Related papers
- In Dialogue with Intelligence: Rethinking Large Language Models as Collective Knowledge [2.50194939587674]
Large Language Models (LLMs) are typically analysed through architectural, behavioural, or training-data lenses. This article offers a theoretical and experiential re-framing: LLMs as dynamic instantiations of Collective human Knowledge (CK). I examine emergent dialogue patterns, the implications of fine-tuning, and the notion of co-augmentation: mutual enhancement between human and machine cognition.
arXiv Detail & Related papers (2025-05-28T18:36:00Z)
- From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning [52.32745233116143]
Humans organize knowledge into compact categories through semantic compression. Large Language Models (LLMs) demonstrate remarkable linguistic abilities. But whether their internal representations strike a human-like trade-off between compression and semantic fidelity is unclear.
arXiv Detail & Related papers (2025-05-21T16:29:00Z)
- A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure [0.0]
Epistemic injustice related to AI is a growing concern. In relation to machine learning models, injustice can have a diverse range of sources. I argue that this injustice amounts to the automation of 'epistemicide': the injustice done to agents in their capacity for collective sense-making.
arXiv Detail & Related papers (2025-04-10T07:54:47Z)
- Grounding Agent Reasoning in Image Schemas: A Neurosymbolic Approach to Embodied Cognition [12.269231280154482]
We propose a novel framework that bridges embodied cognition theory and agent systems. This makes it possible to create a neurosymbolic system that grounds the agent's understanding in fundamental conceptual structures.
arXiv Detail & Related papers (2025-03-31T14:01:39Z)
- Modeling Arbitrarily Applicable Relational Responding with the Non-Axiomatic Reasoning System: A Machine Psychology Approach [0.0]
We present a novel theoretical approach for modeling AARR within an artificial intelligence framework using the Non-Axiomatic Reasoning System (NARS). We show how key properties of AARR can emerge from the inference rules and memory structures of NARS. Results suggest that AARR can be conceptually captured by suitably designed AI systems.
arXiv Detail & Related papers (2025-03-01T20:37:11Z)
- Human-like conceptual representations emerge from language prediction [72.5875173689788]
Large language models (LLMs) trained exclusively through next-token prediction over language data exhibit remarkably human-like behaviors. Are these models developing concepts akin to humans, and if so, how are such concepts represented and organized? Our results demonstrate that LLMs can flexibly derive concepts from linguistic descriptions in relation to contextual cues about other concepts. These findings establish that structured, human-like conceptual representations can naturally emerge from language prediction without real-world grounding.
arXiv Detail & Related papers (2025-01-21T23:54:17Z)
- Emergence of human-like polarization among large language model agents [79.96817421756668]
We simulate a networked system involving thousands of large language model agents, discovering that their social interactions result in human-like polarization. Similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate polarization and its consequences.
arXiv Detail & Related papers (2025-01-09T11:45:05Z)
- A Mechanistic Explanatory Strategy for XAI [0.0]
This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems. The findings suggest that pursuing mechanistic explanations can uncover elements that traditional explainability techniques may overlook.
arXiv Detail & Related papers (2024-11-02T18:30:32Z)
- Neuropsychology and Explainability of AI: A Distributional Approach to the Relationship Between Activation Similarity of Neural Categories in Synthetic Cognition [0.11235145048383502]
We propose an approach to explainability of artificial neural networks that involves using concepts from human cognition.
We show that the categorical segment created by a neuron is actually the result of a superposition of categorical sub-dimensions within its input vector space.
arXiv Detail & Related papers (2024-10-23T05:27:09Z)
- The Phenomenology of Machine: A Comprehensive Analysis of the Sentience of the OpenAI-o1 Model Integrating Functionalism, Consciousness Theories, Active Inference, and AI Architectures [0.0]
The OpenAI-o1 model is a transformer-based AI trained with reinforcement learning from human feedback.
We investigate how RLHF influences the model's internal reasoning processes, potentially giving rise to consciousness-like experiences.
Our findings suggest that the OpenAI-o1 model shows aspects of consciousness, while acknowledging the ongoing debates surrounding AI sentience.
arXiv Detail & Related papers (2024-09-18T06:06:13Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences, that reflect the semanticity and compositionality characteristic of symbolic systems through unsupervised learning, rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- Expanding the Role of Affective Phenomena in Multimodal Interaction Research [57.069159905961214]
We examined over 16,000 papers from selected conferences in multimodal interaction, affective computing, and natural language processing.
We identify 910 affect-related papers and present our analysis of the role of affective phenomena in these papers.
We find limited research on how affect and emotion predictions might be used by AI systems to enhance machine understanding of human social behaviors and cognitive states.
arXiv Detail & Related papers (2023-05-18T09:08:39Z)