The General Theory of General Intelligence: A Pragmatic Patternist
Perspective
- URL: http://arxiv.org/abs/2103.15100v2
- Date: Thu, 1 Apr 2021 01:30:34 GMT
- Title: The General Theory of General Intelligence: A Pragmatic Patternist
Perspective
- Authors: Ben Goertzel
- Abstract summary: The review covers underlying philosophies, formalizations of the concept of intelligence, and a proposed high-level architecture for AGI systems.
The specifics of human-like cognitive architecture are presented as manifestations of these general principles.
Lessons for practical implementation of advanced AGI in frameworks such as OpenCog Hyperon are briefly considered.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A multi-decade exploration into the theoretical foundations of artificial and
natural general intelligence, which has been expressed in a series of books and
papers and used to guide a series of practical and research-prototype software
systems, is reviewed at a moderate level of detail. The review covers
underlying philosophies (patternist philosophy of mind, foundational
phenomenological and logical ontology), formalizations of the concept of
intelligence, and a proposed high level architecture for AGI systems partly
driven by these formalizations and philosophies. The implementation of specific
cognitive processes such as logical reasoning, program learning, clustering and
attention allocation in the context and language of this high level
architecture is considered, as is the importance of a common (e.g. typed
metagraph based) knowledge representation for enabling "cognitive synergy"
between the various processes. The specifics of human-like cognitive
architecture are presented as manifestations of these general principles, and
key aspects of machine consciousness and machine ethics are also treated in
this context. Lessons for practical implementation of advanced AGI in
frameworks such as OpenCog Hyperon are briefly considered.
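The abstract's notion of a typed-metagraph knowledge representation, shared by reasoning, learning, clustering, and attention processes, can be sketched in miniature. The names below (`Atom`, `TypedMetagraph`, the example atom types) are illustrative assumptions, not the actual OpenCog Hyperon API; the key metagraph property shown is that an edge may itself link other edges.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    """A vertex or (meta)edge. In a metagraph, an edge's targets may
    themselves be edges, not only vertices."""
    atom_type: str      # e.g. "Concept", "Inheritance"
    name: str = ""
    targets: tuple = () # other Atoms this atom links

class TypedMetagraph:
    def __init__(self):
        self.atoms = set()

    def add(self, atom_type, name="", targets=()):
        atom = Atom(atom_type, name, tuple(targets))
        self.atoms.add(atom)
        return atom

    def by_type(self, atom_type):
        """Querying one shared store by type is what lets distinct
        cognitive processes operate over the same knowledge."""
        return [a for a in self.atoms if a.atom_type == atom_type]

g = TypedMetagraph()
cat = g.add("Concept", "cat")
animal = g.add("Concept", "animal")
inh = g.add("Inheritance", targets=(cat, animal))
# An edge over an edge: an annotation attached to the inheritance link.
g.add("Evaluation", "high-confidence", targets=(inh,))
```

Because every process reads and writes the same typed store, a result produced by one process (here, the inheritance link) is immediately available as input to another — one minimal reading of "cognitive synergy".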
Related papers
- What Machine Learning Tells Us About the Mathematical Structure of Concepts [0.0]
The study highlights how each framework provides a distinct mathematical perspective for modeling concepts.
This work emphasizes the importance of interdisciplinary dialogue, aiming to enrich our understanding of the complex relationship between human cognition and artificial intelligence.
arXiv Detail & Related papers (2024-08-28T03:30:22Z) - On the Element-Wise Representation and Reasoning in Zero-Shot Image Recognition: A Systematic Survey [82.49623756124357]
Zero-shot image recognition (ZSIR) aims at empowering models to recognize and reason in unseen domains.
This paper presents a broad review of recent advances in element-wise ZSIR.
We first attempt to integrate the three basic ZSIR tasks of object recognition, compositional recognition, and foundation model-based open-world recognition into a unified element-wise perspective.
arXiv Detail & Related papers (2024-08-09T05:49:21Z) - Foundations and Frontiers of Graph Learning Theory [81.39078977407719]
Recent advancements in graph learning have revolutionized the way to understand and analyze data with complex structures.
Graph Neural Networks (GNNs), i.e. neural network architectures designed for learning graph representations, have become a popular paradigm.
This article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models.
arXiv Detail & Related papers (2024-07-03T14:07:41Z) - An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
arXiv Detail & Related papers (2023-12-08T09:32:26Z) - Provable Compositional Generalization for Object-Centric Learning [55.658215686626484]
Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception.
We show that autoencoders that satisfy structural assumptions on the decoder and enforce encoder-decoder consistency will learn object-centric representations that provably generalize compositionally.
arXiv Detail & Related papers (2023-10-09T01:18:07Z) - A Probabilistic-Logic based Commonsense Representation Framework for
Modelling Inferences with Multiple Antecedents and Varying Likelihoods [5.87677276882675]
Commonsense knowledge-graphs (CKGs) are important resources towards building machines that can 'reason' over text or environmental inputs and make inferences beyond perception.
In this work, we study how commonsense knowledge can be better represented by -- (i) utilizing a probabilistic logic representation scheme to model composite inferential knowledge and represent conceptual beliefs with varying likelihoods, and (ii) incorporating a hierarchical conceptual ontology to identify salient concept-relevant relations and organize beliefs at different conceptual levels.
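The idea of composite inferences with multiple antecedents of varying likelihoods can be illustrated with a toy scoring function. The function name and the product combination rule (which assumes independent antecedents) are assumptions for illustration, not the paper's actual formalism.

```python
def infer(rule_weight, antecedent_probs):
    """Score a conclusion by combining the rule's own weight with the
    likelihoods of its antecedents (assumed independent here)."""
    p = rule_weight
    for q in antecedent_probs:
        p *= q
    return p

# "If it is raining (0.9) and the window is open (0.6),
#  then the floor is wet" -- with rule weight 0.8.
conf = infer(0.8, [0.9, 0.6])
```

A hierarchical ontology, as the paper proposes, would additionally decide at which conceptual level such a rule's antecedents are stated.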
arXiv Detail & Related papers (2022-11-30T08:44:30Z) - Mesarovician Abstract Learning Systems [0.0]
Current approaches to learning hold notions of problem domain and problem task as fundamental precepts.
Mesarovician abstract systems theory is used as a super-structure for learning.
arXiv Detail & Related papers (2021-11-29T18:17:32Z) - Interpretable Reinforcement Learning Inspired by Piaget's Theory of
Cognitive Development [1.7778609937758327]
This paper entertains the idea that theories such as the language of thought hypothesis (LOTH), script theory, and Piaget's theory of cognitive development provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
arXiv Detail & Related papers (2021-02-01T00:29:01Z) - Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
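A lattice-theoretic view of concepts can be illustrated with the simplest possible instance: concepts as feature sets ordered by inclusion, where the meet is the shared (more general) concept and the join is the combined (more specific) one. This is a toy sketch of the general idea, not the report's actual construction.

```python
def meet(a, b):
    """Greatest lower bound: the features two concepts share."""
    return a & b

def join(a, b):
    """Least upper bound: the features of both concepts combined."""
    return a | b

bird = frozenset({"animal", "flies", "lays_eggs"})
bat  = frozenset({"animal", "flies", "mammal"})

common = meet(bird, bat)  # the generalization covering both
```

Representation learning enters when the feature sets are induced from raw data rather than hand-specified, which is the marriage the report describes.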
arXiv Detail & Related papers (2021-01-13T15:22:01Z) - Expressiveness and machine processability of Knowledge Organization
Systems (KOS): An analysis of concepts and relations [0.0]
Both the expressiveness and the machine processability of a Knowledge Organization System are largely determined by its structural rules.
Ontologies explicitly define diverse types of relations, and are by their nature machine-processable.
arXiv Detail & Related papers (2020-03-11T12:35:52Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.