Identifying Quantum Structure in AI Language: Evidence for Evolutionary Convergence of Human and Artificial Cognition
- URL: http://arxiv.org/abs/2511.21731v1
- Date: Fri, 21 Nov 2025 08:22:49 GMT
- Title: Identifying Quantum Structure in AI Language: Evidence for Evolutionary Convergence of Human and Artificial Cognition
- Authors: Diederik Aerts, Jonito Aerts Arguëlles, Lester Beltran, Suzette Geriente, Roberto Leporini, Massimiliano Sassoli de Bianchi, Sandro Sozzo
- Abstract summary: We present the results of cognitive tests on conceptual combinations, performed using specific Large Language Models (LLMs) as test subjects. In the first test, performed with ChatGPT and Gemini, we show that Bell's inequalities are significantly violated, which indicates the presence of 'quantum entanglement' in the tested concepts. In the second test, also performed using ChatGPT and Gemini, we identify the presence of 'Bose-Einstein statistics', rather than the intuitively expected 'Maxwell-Boltzmann statistics', in the distribution of the words contained in large-size texts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present the results of cognitive tests on conceptual combinations, performed using specific Large Language Models (LLMs) as test subjects. In the first test, performed with ChatGPT and Gemini, we show that Bell's inequalities are significantly violated, which indicates the presence of 'quantum entanglement' in the tested concepts. In the second test, also performed using ChatGPT and Gemini, we instead identify the presence of 'Bose-Einstein statistics', rather than the intuitively expected 'Maxwell-Boltzmann statistics', in the distribution of the words contained in large-size texts. Interestingly, these findings mirror the results previously obtained in both cognitive tests with human participants and information retrieval tests on large corpora. Taken together, they point to the 'systematic emergence of quantum structures in conceptual-linguistic domains', regardless of whether the cognitive agent is human or artificial. Although LLMs are classified as neural networks for historical reasons, we believe that a more essential form of knowledge organization takes place in the distributive semantic structure of vector spaces built on top of the neural network. It is this meaning-bearing structure that lends itself to a phenomenon of evolutionary convergence between human cognition and language, slowly established through biological evolution, and LLM cognition and language, emerging much more rapidly as a result of self-learning and training. We analyze various aspects and examples that contain evidence supporting the above hypothesis. We also advance a unifying framework that explains the pervasive quantum organization of meaning that we identify.
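Two of the claims in the abstract have a concrete computational core that is easy to sketch. The first test checks a Bell-type (CHSH) inequality on a concept combination: two exemplar choices per concept (A, A' for one concept, B, B' for the other) define four joint measurements whose correlations combine into a single quantity S. Below is a minimal sketch, assuming placeholder outcome frequencies rather than the paper's data; in an actual test these would be estimated from repeated LLM judgments.

```python
# Minimal CHSH sketch (illustrative; not the authors' code or data).
# A/A' and B/B' are exemplar choices for the two concepts; each joint
# measurement yields one of four outcomes: (+,+), (+,-), (-,+), (-,-).

def expectation(p):
    """E = p(++) - p(+-) - p(-+) + p(--) for one coincidence experiment."""
    return p["++"] - p["+-"] - p["-+"] + p["--"]

# Placeholder frequencies; replace with counts collected from an LLM.
p_AB   = {"++": 0.10, "+-": 0.40, "-+": 0.40, "--": 0.10}
p_ABp  = {"++": 0.40, "+-": 0.10, "-+": 0.10, "--": 0.40}
p_ApB  = {"++": 0.40, "+-": 0.10, "-+": 0.10, "--": 0.40}
p_ApBp = {"++": 0.40, "+-": 0.10, "-+": 0.10, "--": 0.40}

# CHSH quantity: any classical (local hidden-variable) model satisfies
# |S| <= 2, while quantum entanglement allows values up to 2*sqrt(2).
S = (-expectation(p_AB) + expectation(p_ABp)
     + expectation(p_ApB) + expectation(p_ApBp))
print(f"CHSH S = {S:.2f} (classical bound |S| <= 2)")  # prints S = 2.40
```

The second test asks whether the rank-frequency profile of words in a long text is better fit by a Bose-Einstein distribution, N(E) = 1 / (A e^(B E) - 1), than by the Maxwell-Boltzmann form N(E) = C e^(-B E), with the i-th most frequent word assigned 'energy level' E_i = i - 1 as in the authors' earlier corpus studies. The sketch below makes that comparison under those assumptions; 'story.txt' is a placeholder filename for any sufficiently long LLM-generated text, not a file named in the paper.

```python
# Hedged sketch (not the authors' code): compare Bose-Einstein and
# Maxwell-Boltzmann fits to a word rank-frequency distribution.
from collections import Counter

import numpy as np
from scipy.optimize import curve_fit

def bose_einstein(E, A, B):
    return 1.0 / (A * np.exp(B * E) - 1.0)  # requires A > 1

def maxwell_boltzmann(E, C, B):
    return C * np.exp(-B * E)

# 'story.txt' is a placeholder: any sufficiently long text to analyze.
with open("story.txt", encoding="utf-8") as f:
    words = f.read().lower().split()
counts = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
E = np.arange(len(counts), dtype=float)  # E_i = i - 1 for the i-th ranked word

be, _ = curve_fit(bose_einstein, E, counts, p0=(1.1, 1e-3),
                  bounds=([1.0 + 1e-9, 0.0], [np.inf, np.inf]))
mb, _ = curve_fit(maxwell_boltzmann, E, counts, p0=(counts[0], 1e-3))

# Smaller sum of squared errors means a better fit.
print("SSE Bose-Einstein:    ", np.sum((counts - bose_einstein(E, *be)) ** 2))
print("SSE Maxwell-Boltzmann:", np.sum((counts - maxwell_boltzmann(E, *mb)) ** 2))
```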
Related papers
- Toward Cognitive Supersensing in Multimodal Large Language Model [67.15559571626747]
We introduce Cognitive Supersensing, a training paradigm that endows MLLMs with human-like visual imagery capabilities. In experiments, MLLMs trained with Cognitive Supersensing significantly outperform state-of-the-art baselines on CogSense-Bench. We will open-source the CogSense-Bench and our model weights.
arXiv Detail & Related papers (2026-02-02T02:19:50Z) - A Brain-like Synergistic Core in LLMs Drives Behaviour and Learning [50.68188138112555]
We show that large language models spontaneously develop synergistic cores. We find that areas in middle layers exhibit synergistic processing while early and late layers rely on redundancy. This convergence suggests that synergistic information processing is a fundamental property of intelligence.
arXiv Detail & Related papers (2026-01-11T10:48:35Z) - Concept-Guided Interpretability via Neural Chunking [64.6429903327095]
We show that neural networks exhibit patterns in their raw population activity that mirror regularities in the training data. We propose three methods to extract recurring chunks on a neural population level. Our work points to a new direction for interpretability, one that harnesses both cognitive principles and the structure of naturalistic data.
arXiv Detail & Related papers (2025-05-16T13:49:43Z) - Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities. We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities. We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z) - Revealing emergent human-like conceptual representations from language prediction [90.73285317321312]
Large language models (LLMs) trained solely through next-token prediction on text exhibit strikingly human-like behaviors. Are these models developing concepts akin to those of humans? We found that LLMs can flexibly derive concepts from linguistic descriptions in relation to contextual cues about other concepts.
arXiv Detail & Related papers (2025-01-21T23:54:17Z) - Detecting Neurocognitive Disorders through Analyses of Topic Evolution and Cross-modal Consistency in Visual-Stimulated Narratives [83.15653194899126]
Early detection of neurocognitive disorders (NCDs) is crucial for timely intervention and disease management. Current VSN-based NCD detection methods primarily focus on linguistic microstructures closely tied to bottom-up, stimulus-driven cognitive processes. We propose two novel macrostructural approaches: a Dynamic Topic Model (DTM) to track topic evolution over time, and a Text-Image Temporal Alignment Network (TITAN) to measure cross-modal consistency between narrative and visual stimuli.
arXiv Detail & Related papers (2025-01-07T12:16:26Z) - Turing Video-based Cognitive Tests to Handle Entangled Concepts [0.0]
We present the results of an innovative video-based cognitive test on a specific conceptual combination. We show that collected data can be faithfully modeled within a quantum-theoretic framework. We provide a novel explanation for the appearance of entanglement in both physics and cognitive realms.
arXiv Detail & Related papers (2024-09-13T14:30:55Z) - A Percolation Model of Emergence: Analyzing Transformers Trained on a Formal Language [15.929767234646631]
An increase in data, model size, or compute can lead to a neural network suddenly learning specific capabilities, a phenomenon often called "emergence".
arXiv Detail & Related papers (2024-08-22T17:44:22Z) - Analysis of Argument Structure Constructions in a Deep Recurrent Language Model [0.0]
We explore the representation and processing of Argument Structure Constructions (ASCs) in a recurrent neural language model.
Our results show that sentence representations form distinct clusters corresponding to the four ASCs across all hidden layers.
This indicates that even a relatively simple, brain-constrained recurrent neural network can effectively differentiate between various construction types.
arXiv Detail & Related papers (2024-08-06T09:27:41Z) - Exploring the LLM Journey from Cognition to Expression with Linear Representations [10.92882688742428]
This paper presents an in-depth examination of the evolution and interplay of cognitive and expressive capabilities in large language models (LLMs).
We define and explore the model's cognitive and expressive capabilities through linear representations across three critical phases: Pretraining, Supervised Fine-Tuning (SFT), and Reinforcement Learning from Human Feedback (RLHF).
Our findings unveil a sequential development pattern, where cognitive abilities are largely established during Pretraining, whereas expressive abilities predominantly advance during SFT and RLHF.
arXiv Detail & Related papers (2024-05-27T08:57:04Z) - Conceptual structure coheres in human cognition but not in large language models [7.405352374343134]
We show that conceptual structure is robust to differences in culture, language, and method of estimation.
Results highlight an important difference between contemporary large language models and human cognition.
arXiv Detail & Related papers (2023-04-05T21:27:01Z) - Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans [75.15855405318855]
We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
arXiv Detail & Related papers (2020-06-19T12:00:05Z) - Emergence of Separable Manifolds in Deep Language Representations [26.002842878797765]
Deep neural networks (DNNs) have shown much empirical success in solving perceptual tasks across various cognitive modalities.
Recent studies report considerable similarities between representations extracted from task-optimized DNNs and neural populations in the brain.
DNNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions.
arXiv Detail & Related papers (2020-06-01T17:23:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.