Model of human cognition
- URL: http://arxiv.org/abs/2512.00683v1
- Date: Sun, 30 Nov 2025 00:57:32 GMT
- Title: Model of human cognition
- Authors: Wu Yonggang
- Abstract summary: We propose a neuro-theoretical framework for the emergence of intelligence in systems that is both functionally robust and biologically plausible. The model provides theoretical insights into cognitive processes such as decision-making and problem solving, and a computationally efficient approach for the creation of explainable and generalizable artificial intelligence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of large language models (LLMs) is limited by a lack of explainability, the absence of a unifying theory, and prohibitive operational costs. We propose a neuro-theoretical framework for the emergence of intelligence in systems that is both functionally robust and biologically plausible. The model provides theoretical insights into cognitive processes such as decision-making and problem solving, and a computationally efficient approach for the creation of explainable and generalizable artificial intelligence.
Related papers
- Foundations of Artificial Intelligence Frameworks: Notion and Limits of AGI [0.0]
We argue that artificial general intelligence cannot emerge from current neural network paradigms regardless of scale. We propose a framework distinguishing existential facilities (computational substrate) from architectural organization.
arXiv Detail & Related papers (2025-11-23T16:18:13Z) - The Universal Landscape of Human Reasoning [60.72403709545137]
We introduce Information Flow Tracking (IF-Track) to quantify information entropy and gain at each reasoning step. We show that IF-Track captures essential reasoning features, identifies systematic error patterns, and characterizes individual differences. This approach establishes a quantitative bridge between theory and measurement, offering mechanistic insights into the architecture of reasoning.
arXiv Detail & Related papers (2025-10-24T16:26:36Z) - Learning and Reasoning with Model-Grounded Symbolic Artificial Intelligence Systems [7.000073566770884]
Neurosymbolic artificial intelligence (AI) systems combine neural network and classical symbolic AI mechanisms. We develop novel learning and reasoning approaches that preserve structural similarities to traditional learning and reasoning paradigms.
arXiv Detail & Related papers (2025-07-14T01:34:05Z) - Nature's Insight: A Novel Framework and Comprehensive Analysis of Agentic Reasoning Through the Lens of Neuroscience [11.174550573411008]
We propose a novel neuroscience-inspired framework for agentic reasoning. We apply this framework to systematically classify and analyze existing AI reasoning methods. We also propose new neural-inspired reasoning methods, analogous to chain-of-thought prompting.
arXiv Detail & Related papers (2025-05-07T14:25:46Z) - Computational Irreducibility as the Foundation of Agency: A Formal Model Connecting Undecidability to Autonomous Behavior in Complex Systems [0.0]
We establish precise mathematical connections, proving that for any truly autonomous system, questions about its future behavior are fundamentally undecidable. The findings have significant implications for artificial intelligence, biological modeling, and philosophical concepts such as free will.
arXiv Detail & Related papers (2025-05-05T21:24:50Z) - Causal Abstraction in Model Interpretability: A Compact Survey [5.963324728136442]
Causal abstraction provides a principled approach to understanding and explaining the causal mechanisms underlying model behavior.
This survey examines the theoretical foundations, practical applications, and implications of causal abstraction for the field of model interpretability.
arXiv Detail & Related papers (2024-10-26T12:24:28Z) - Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models. We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model. We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z) - Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption [0.2209921757303168]
We propose a novel program of reasoning for artificial intelligence (AI).
We show that AIs manifest an adaptive balancing of precision and efficiency, consistent with principles of resource-rational human cognition.
Our findings reveal a nuanced picture of AI cognition, where trade-offs between resources and objectives lead to the emulation of biological systems.
arXiv Detail & Related papers (2024-03-14T13:53:05Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that realizes the Common Model of Cognition in this neuro-mimetic form.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating a cognitive functionality.
We consider a cognitive architecture that ensures the evolution of the agent based on a solution to the Symbol Emergence Problem.
arXiv Detail & Related papers (2022-07-02T12:41:32Z) - Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we adopt the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.