On the Principles of Parsimony and Self-Consistency for the Emergence of
Intelligence
- URL: http://arxiv.org/abs/2207.04630v1
- Date: Mon, 11 Jul 2022 05:06:08 GMT
- Title: On the Principles of Parsimony and Self-Consistency for the Emergence of
Intelligence
- Authors: Yi Ma and Doris Tsao and Heung-Yeung Shum
- Abstract summary: We propose a theoretical framework that sheds light on understanding deep networks within a bigger picture of Intelligence in general.
We introduce two fundamental principles, Parsimony and Self-consistency, that we believe to be cornerstones for the emergence of Intelligence.
- Score: 10.951424145477633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ten years into the revival of deep networks and artificial intelligence, we
propose a theoretical framework that sheds light on understanding deep networks
within a bigger picture of Intelligence in general. We introduce two
fundamental principles, Parsimony and Self-consistency, that we believe to be
cornerstones for the emergence of Intelligence, artificial or natural. While
these two principles have rich classical roots, we argue that they can be
stated anew in entirely measurable and computable ways. More specifically, the
two principles lead to an effective and efficient computational framework,
compressive closed-loop transcription, that unifies and explains the evolution
of modern deep networks and many artificial intelligence practices. While we
mainly use modeling of visual data as an example, we believe the two principles
will unify understanding of broad families of autonomous intelligent systems
and provide a framework for understanding the brain.
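The paper's central construct, compressive closed-loop transcription, can be illustrated with a toy sketch: encode data to a parsimonious low-dimensional code, decode it back, then re-encode the reconstruction and check consistency in feature space rather than data space. The linear maps, dimensions, and pseudo-inverse decoder below are illustrative assumptions standing in for the deep networks the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not from the paper).
d_x, d_z = 8, 3

# A linear encoder F stands in for the deep encoder; the decoder is
# its pseudo-inverse, so the loop x -> z -> x_hat -> z_hat is
# self-consistent in feature space by construction.
F = rng.standard_normal((d_z, d_x))
G = np.linalg.pinv(F)

x = rng.standard_normal(d_x)
z = F @ x          # parsimony: compress x to a low-dimensional code
x_hat = G @ z      # transcribe the code back to data space
z_hat = F @ x_hat  # close the loop: re-encode the reconstruction

# Self-consistency is judged where it matters -- in feature space.
consistency_gap = float(np.linalg.norm(z - z_hat))
print(consistency_gap < 1e-9)       # loop is consistent in features
print(bool(np.allclose(x, x_hat)))  # compression is lossy in data space
```

The point of the sketch is that the feature-space gap vanishes even though the data-space reconstruction is lossy: consistency of the internal representation, not pixel-perfect reconstruction, closes the loop.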
Related papers
- Exploring Core and Periphery Precepts in Biological and Artificial Intelligence: An Outcome-Based Perspective [40.2058998065435]
We argue that the engineering of general intelligence requires a fresh set of overarching systems principles. We introduce the "core and periphery" principles, a novel conceptual framework rooted in abstract systems theory and the Law of Requisite Variety. We illustrate their applicability to both biological and artificial intelligence systems, bridging abstract theory with real-world implementations.
arXiv Detail & Related papers (2025-07-07T01:15:01Z)
- Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence [59.07578850674114]
Sound deductive reasoning is an indisputably desirable aspect of general intelligence. It is well-documented that even the most advanced frontier systems regularly and consistently falter on easily-solvable reasoning tasks. We argue that their unsound behavior is a consequence of the statistical learning approach powering their development.
arXiv Detail & Related papers (2025-06-30T14:37:50Z)
- Continuum-Interaction-Driven Intelligence: Human-Aligned Neural Architecture via Crystallized Reasoning and Fluid Generation [1.5800607910450124]
Current AI systems face challenges including hallucination, unpredictability, and misalignment with human decision-making.
This study proposes a dual-channel intelligent architecture that integrates probabilistic generation (LLMs) with white-box procedural reasoning (chain-of-thought) to construct interpretable, continuously learnable, and human-aligned AI systems.
arXiv Detail & Related papers (2025-04-12T18:15:49Z)
- The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence [0.0]
A traditional approach to assessing emerging intelligence in the theory of intelligent systems is based on the similarity, "imitation" of human-like actions and behaviors.
We argue that under some natural assumptions, developing intelligent systems will be able to form their own intents and objectives.
arXiv Detail & Related papers (2024-10-14T13:39:58Z)
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is challenging to achieve through advances to conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- Learning by Applying: A General Framework for Mathematical Reasoning via Enhancing Explicit Knowledge Learning [47.96987739801807]
We propose Learning by Applying (LeAp), a framework that enhances existing models (backbones) in a principled way through explicit knowledge learning.
In LeAp, we perform knowledge learning in a novel problem-knowledge-expression paradigm.
We show that LeAp improves all backbones' performances, learns accurate knowledge, and achieves a more interpretable reasoning process.
arXiv Detail & Related papers (2023-02-11T15:15:41Z)
- A Probabilistic-Logic based Commonsense Representation Framework for Modelling Inferences with Multiple Antecedents and Varying Likelihoods [5.87677276882675]
Commonsense knowledge-graphs (CKGs) are important resources towards building machines that can 'reason' on text or environmental inputs and make inferences beyond perception.
In this work, we study how commonsense knowledge can be better represented by -- (i) utilizing a probabilistic logic representation scheme to model composite inferential knowledge and represent conceptual beliefs with varying likelihoods, and (ii) incorporating a hierarchical conceptual ontology to identify salient concept-relevant relations and organize beliefs at different conceptual levels.
arXiv Detail & Related papers (2022-11-30T08:44:30Z)
- On the uncertainty principle of neural networks [36.098205818550554]
We show that neural networks are subject to an uncertainty relation, which manifests as a fundamental limitation in their ability to simultaneously achieve high accuracy and robustness against adversarial attacks.
Our findings reveal that the complementarity principle, a cornerstone of quantum physics, applies to neural networks, imposing fundamental limits on their capabilities in simultaneous learning of conjugate features.
arXiv Detail & Related papers (2022-05-03T13:48:12Z)
- Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges [50.22269760171131]
The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods.
This text is concerned with exposing pre-defined regularities through unified geometric principles.
It provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers.
arXiv Detail & Related papers (2021-04-27T21:09:51Z)
- The General Theory of General Intelligence: A Pragmatic Patternist Perspective [0.0]
The review covers underlying philosophies, formalizations of the concept of intelligence, and a proposed high-level architecture for AGI systems.
The specifics of human-like cognitive architecture are presented as manifestations of these general principles.
Lessons for practical implementation of advanced AGI in frameworks such as OpenCog Hyperon are briefly considered.
arXiv Detail & Related papers (2021-03-28T10:11:25Z)
- Computational principles of intelligence: learning and reasoning with neural networks [0.0]
This work proposes a novel framework of intelligence based on three principles.
First, the generative and mirroring nature of learned representations of inputs.
Second, a grounded, intrinsically motivated and iterative process for learning, problem solving and imagination.
Third, an ad hoc tuning of the reasoning mechanism over causal compositional representations using inhibition rules.
arXiv Detail & Related papers (2020-12-17T10:03:26Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [63.627760598441796]
We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
arXiv Detail & Related papers (2020-10-19T16:03:46Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A Mathematical Framework for Consciousness in Neural Networks [0.0]
This paper presents a novel mathematical framework for bridging the explanatory gap between consciousness and its physical correlates.
We do not claim that qualia are singularities or that singularities "explain" why qualia feel as they do.
We establish a framework that recognizes qualia as phenomena inherently beyond reduction to complexity, computation, or information.
arXiv Detail & Related papers (2017-04-04T18:32:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.