Foundations of Artificial Intelligence Frameworks: Notion and Limits of AGI
- URL: http://arxiv.org/abs/2511.18517v1
- Date: Sun, 23 Nov 2025 16:18:13 GMT
- Title: Foundations of Artificial Intelligence Frameworks: Notion and Limits of AGI
- Authors: Khanh Gia Bui
- Abstract summary: We argue that artificial general intelligence cannot emerge from current neural network paradigms, regardless of scale. We propose a framework distinguishing existential facilities (computational substrate) from architectural organization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Within the limited scope of this paper, we argue that artificial general intelligence cannot emerge from current neural network paradigms, regardless of scale, and that pursuing it through scale alone is unhealthy for the field at present. Drawing on philosophy (including the Chinese Room Argument and the Gödelian argument), neuroscientific ideas, computer science, theoretical considerations of artificial intelligence, and learning theory, as well as present-day developments, debates, critiques, and experiments, we argue conceptually that neural networks are architecturally insufficient for genuine understanding. They operate as static function approximators within a limited encoding framework - a 'sophisticated sponge' exhibiting complex behaviours without the structural richness that constitutes intelligence. We critique theoretical foundations the field has recently come to rely on: for example, the neural scaling laws (e.g., arXiv:2001.08361), a useful heuristic that has been made prominent through a mistaken interpretation, and the Universal Approximation Theorem, which addresses the wrong level of abstraction and only partially bears on the question of whether current architectures lack dynamic restructuring capabilities. We propose a framework distinguishing existential facilities (computational substrate) from architectural organization (interpretive structures), outline principles for what genuine machine intelligence would require, and sketch a conceptual method for structuralizing the richer framework on which the principle of a neural network system takes hold.
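For context on the scaling-law critique above, the empirical relation reported in arXiv:2001.08361 (not restated in this abstract) fits cross-entropy loss $L$ against non-embedding parameter count $N$ as a power law, with fitted constants reported there approximately as:

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,\quad N_c \approx 8.8 \times 10^{13}
```

Such a fit describes a predictable decrease in loss with scale; it makes no claim about understanding, which is the interpretive gap the abstract targets.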
Related papers
- Mind Meets Space: Rethinking Agentic Spatial Intelligence from a Neuroscience-inspired Perspective [53.556348738917166]
Recent advances in agentic AI have led to systems capable of autonomous task execution and language-based reasoning. Human spatial intelligence, rooted in integrated multisensory perception, spatial memory, and cognitive maps, enables flexible, context-aware decision-making in unstructured environments.
arXiv Detail & Related papers (2025-09-11T05:23:22Z)
- Nature's Insight: A Novel Framework and Comprehensive Analysis of Agentic Reasoning Through the Lens of Neuroscience [11.174550573411008]
We propose a novel neuroscience-inspired framework for agentic reasoning. We apply this framework to systematically classify and analyze existing AI reasoning methods, and propose new neural-inspired reasoning methods analogous to chain-of-thought prompting.
arXiv Detail & Related papers (2025-05-07T14:25:46Z)
- Function Alignment: A New Theory of Mind and Intelligence, Part I: Foundations [0.0]
This paper introduces function alignment, a novel theory of mind and intelligence. It explicitly models how meaning, interpretation, and analogy emerge from interactions among layered representations. It bridges disciplines often kept apart, linking computational architecture, psychological theory, and even contemplative traditions such as Zen.
arXiv Detail & Related papers (2025-03-27T02:59:01Z)
- Standard Neural Computation Alone Is Insufficient for Logical Intelligence [3.230778132936486]
We argue that standard neural layers must be fundamentally rethought to integrate logical reasoning. We advocate for Logical Neural Units (LNUs): modular components that embed differentiable approximations of logical operations.
arXiv Detail & Related papers (2025-02-04T09:07:45Z)
- Foundations and Frontiers of Graph Learning Theory [81.39078977407719]
Recent advancements in graph learning have revolutionized the way to understand and analyze data with complex structures.
Graph Neural Networks (GNNs), i.e., neural network architectures designed for learning graph representations, have become a popular paradigm.
This article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models.
arXiv Detail & Related papers (2024-07-03T14:07:41Z)
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture.
With the over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representations can be considered an effective alternative to traditional CNNs and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, one such architecture, which casts the Common Model of Cognition in this form.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Rejecting Cognitivism: Computational Phenomenology for Deep Learning [5.070542698701158]
We propose a non-representationalist framework for deep learning relying on a novel method: computational phenomenology.
We reject the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities.
arXiv Detail & Related papers (2023-02-16T20:05:06Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints, which are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- A Chain Graph Interpretation of Real-World Neural Networks [58.78692706974121]
We propose an alternative interpretation that identifies NNs as chain graphs (CGs) and feed-forward as an approximate inference procedure.
The CG interpretation specifies the nature of each NN component within the rich theoretical framework of probabilistic graphical models.
We demonstrate with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques.
arXiv Detail & Related papers (2020-06-30T14:46:08Z)
- A Mathematical Framework for Consciousness in Neural Networks [0.0]
This paper presents a novel mathematical framework for bridging the explanatory gap between consciousness and its physical correlates. We do not claim that qualia are singularities or that singularities "explain" why qualia feel as they do. We establish a framework that recognizes qualia as phenomena inherently beyond reduction to complexity, computation, or information.
arXiv Detail & Related papers (2017-04-04T18:32:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.