Emergent Cognitive Convergence via Implementation: A Structured Loop Reflecting Four Theories of Mind (A Position Paper)
- URL: http://arxiv.org/abs/2507.16184v1
- Date: Tue, 22 Jul 2025 02:54:45 GMT
- Title: Emergent Cognitive Convergence via Implementation: A Structured Loop Reflecting Four Theories of Mind (A Position Paper)
- Authors: Myung Ho Kim
- Abstract summary: We report the discovery of a structural convergence across four influential theories of mind. This convergence occurred within a practical AI agent architecture called Agentic Flow. We introduce PEACE as a descriptive meta-architecture that captures design-level regularities observed in Agentic Flow.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We report the discovery of a structural convergence across four influential theories of mind: Kahneman's dual-system theory, Friston's predictive processing, Minsky's society of mind, and Clark's extended mind, emerging unintentionally within a practical AI agent architecture called Agentic Flow. Designed to address limitations in large language models (LLMs), Agentic Flow comprises five interdependent modules, Retrieval, Cognition, Control, Memory, and Action, arranged in a recurrent cognitive loop. Although originally inspired only by Minsky and Clark, the system's structure retrospectively aligns with computational motifs found in all four theories, including predictive modeling, associative recall, and error-sensitive control. To assess this convergence, we conducted comparative experiments with baseline LLM agents on multi-step reasoning tasks. The structured agent achieved 95.8% task success and exhibited strong constraint adherence, while the baseline system succeeded 62.3% of the time. These results were not aimed at proving superiority, but at illustrating how theoretical structures may emerge through practical design choices rather than top-down theory. We introduce PEACE as a descriptive meta-architecture that captures design-level regularities observed in Agentic Flow. Not intended as a new theory, PEACE provides a shared vocabulary for understanding architectures shaped by real-world implementation demands. This paper should be read as a position paper: an exploratory reflection on how implementation can surface latent structural echoes of cognitive theory, without asserting theoretical unification.
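The recurrent Retrieval → Cognition → Control → Memory → Action cycle described in the abstract can be sketched as a minimal Python loop. This is an illustrative assumption only: all names (`MemoryModule`, `agentic_flow`, the module functions) are hypothetical, and the `cognition` stub stands in for an LLM call; the paper does not publish an implementation.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryModule:
    """Associative store: keeps (task, result) episodes for later recall."""
    episodes: list = field(default_factory=list)

    def recall(self, task: str) -> list:
        # Associative recall: return prior results recorded for this task.
        return [result for t, result in self.episodes if t == task]

    def store(self, task: str, result: str) -> None:
        self.episodes.append((task, result))


def retrieval(task: str, memory: MemoryModule) -> list:
    """Retrieval: gather context relevant to the task from memory."""
    return memory.recall(task)


def cognition(task: str, context: list) -> str:
    """Cognition: propose a candidate answer (stub standing in for an LLM)."""
    return context[-1] if context else f"draft:{task}"


def control(candidate: str, constraint) -> bool:
    """Control: error-sensitive check of the candidate against a constraint."""
    return constraint(candidate)


def action(candidate: str) -> str:
    """Action: emit the accepted answer."""
    return candidate


def agentic_flow(task, constraint, memory, max_steps=3):
    """One recurrent pass over the five modules, looping until the
    constraint is satisfied or the step budget is exhausted."""
    for _ in range(max_steps):
        context = retrieval(task, memory)
        candidate = cognition(task, context)
        if control(candidate, constraint):
            memory.store(task, candidate)
            return action(candidate)
        # Simulate a revision: annotate the rejected draft and store it
        # so the next cycle of the loop picks up the revised version.
        memory.store(task, candidate + ":revised")
    return None
```

The loop mirrors the error-sensitive control motif the abstract attributes to the architecture: a rejected candidate is written back to memory, so the next cycle retrieves and refines it rather than starting from scratch.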
Related papers
- ProtoReasoning: Prototypes as the Foundation for Generalizable Reasoning in LLMs [54.154593699263074]
ProtoReasoning is a framework that enhances the reasoning ability of Large Reasoning Models. ProtoReasoning transforms problems into corresponding prototype representations. ProtoReasoning achieves a 4.7% improvement over baseline models on logical reasoning.
arXiv Detail & Related papers (2025-06-18T07:44:09Z)
- On the Fundamental Impossibility of Hallucination Control in Large Language Models [0.0]
This paper establishes a fundamental impossibility theorem: no LLM capable of performing non-trivial knowledge aggregation can simultaneously achieve truthful (internally consistent) knowledge representation. This impossibility is not an engineering limitation but arises from the mathematical structure of information aggregation itself. By demonstrating that hallucination and imagination are mathematically identical phenomena, grounded in the necessary violation of information conservation, this paper offers a principled foundation for managing these behaviors in advanced AI systems.
arXiv Detail & Related papers (2025-06-04T23:28:39Z)
- The Unified Cognitive Consciousness Theory for Language Models: Anchoring Semantics, Thresholds of Activation, and Emergent Reasoning [2.0800882594868293]
Large language models (LLMs) are vast repositories of latent patterns, but without structured guidance, they lack explicit reasoning, semantic grounding, and goal-directed intelligence. We propose Unified Cognitive Consciousness Theory (UCCT), a unified model that reinterprets LLMs as unconscious substrates requiring external mechanisms (few-shot prompting, RAG, fine-tuning, and multi-agent reasoning).
arXiv Detail & Related papers (2025-06-02T18:12:43Z)
- Theoretical Foundations for Semantic Cognition in Artificial Intelligence [0.0]
This monograph presents a modular cognitive architecture for artificial intelligence grounded in the formal modeling of belief as structured semantic state. Belief states are defined as dynamic ensembles of linguistic expressions embedded within a navigable manifold, where operators enable assimilation, abstraction, nullification, memory, and introspection.
arXiv Detail & Related papers (2025-04-29T23:10:07Z)
- Cognitive Silicon: An Architectural Blueprint for Post-Industrial Computing Systems [0.0]
This paper presents a hypothetical full-stack architectural framework projected toward 2035, exploring a possible trajectory for cognitive computing system design. The proposed architecture would integrate symbolic scaffolding, governed memory, runtime moral coherence, and alignment-aware execution across silicon-to-semantics layers.
arXiv Detail & Related papers (2025-04-23T11:24:30Z)
- Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models? [68.72260770171212]
We propose a paradigm of Self-structured Chain of Thought (SCoT), which is composed of minimal semantic atomic steps. Our method can not only generate cognitive CoT structures for various complex tasks but also mitigates the phenomenon of overthinking. We conduct extensive experiments to show that the proposed AtomThink significantly improves the performance of baseline MLLMs.
arXiv Detail & Related papers (2025-03-08T15:23:47Z)
- Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models [76.6028674686018]
We introduce thought-tracing, an inference-time reasoning algorithm to trace the mental states of agents. Our algorithm is modeled after the Bayesian theory-of-mind framework. We evaluate thought-tracing on diverse theory-of-mind benchmarks, demonstrating significant performance improvements.
arXiv Detail & Related papers (2025-02-17T15:08:50Z)
- AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning [68.65389926175506]
We propose a novel paradigm of Self-structured Chain of Thought (SCoT). Our method can not only generate cognitive CoT structures for various complex tasks but also mitigates the phenomenon of overthinking for easier tasks. We conduct extensive experiments to show that the proposed AtomThink significantly improves the performance of baseline MLLMs.
arXiv Detail & Related papers (2024-11-18T11:54:58Z)
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture.
With the over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representation can be considered as an effective alternative to traditional CNN and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z)
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
- Interpretable Reinforcement Learning Inspired by Piaget's Theory of Cognitive Development [1.7778609937758327]
This paper entertains the idea that theories such as the language of thought hypothesis (LOTH), script theory, and Piaget's theory of cognitive development provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
arXiv Detail & Related papers (2021-02-01T00:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.