Exploring Core and Periphery Precepts in Biological and Artificial Intelligence: An Outcome-Based Perspective
- URL: http://arxiv.org/abs/2507.04594v1
- Date: Mon, 07 Jul 2025 01:15:01 GMT
- Title: Exploring Core and Periphery Precepts in Biological and Artificial Intelligence: An Outcome-Based Perspective
- Authors: Niloofar Shadab, Tyler Cody, Alejandro Salado, Taylan G. Topcu, Mohammad Shadab, Peter Beling
- Abstract summary: We argue that the engineering of general intelligence requires a fresh set of overarching systems principles. We introduce the "core and periphery" principles, a novel conceptual framework rooted in abstract systems theory and the Law of Requisite Variety. We illustrate their applicability to both biological and artificial intelligence systems, bridging abstract theory with real-world implementations.
- Score: 40.2058998065435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Engineering methodologies predominantly revolve around established principles of decomposition and recomposition. These principles involve partitioning inputs and outputs at the component level, ensuring that the properties of individual components are preserved upon composition. However, this view does not transfer well to intelligent systems, particularly when addressing the scaling of intelligence as a system property. Our prior research contends that the engineering of general intelligence necessitates a fresh set of overarching systems principles. As a result, we introduced the "core and periphery" principles, a novel conceptual framework rooted in abstract systems theory and the Law of Requisite Variety. In this paper, we assert that these abstract concepts hold practical significance. Through empirical evidence, we illustrate their applicability to both biological and artificial intelligence systems, bridging abstract theory with real-world implementations. Then, we expand on our previous theoretical framework by mathematically defining core-dominant vs periphery-dominant systems.
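As background for the Law of Requisite Variety invoked in the abstract, Ashby's classical inequality can be sketched in its entropy form. This is standard cybernetics notation ($H(E)$ for outcome entropy, $H(D)$ for disturbance entropy, $H(R)$ for regulator entropy), not notation taken from the paper itself:

```latex
% Ashby's Law of Requisite Variety, entropy form:
% the residual uncertainty H(E) in the essential (outcome) variables
% is bounded below by the disturbance variety the regulator cannot absorb.
H(E) \;\geq\; H(D) + H(R \mid D) - H(R)
% When the regulator acts as a deterministic function of the disturbance,
% H(R|D) = 0 and the bound reduces to H(E) >= H(D) - H(R):
% only variety in the regulator can destroy variety in the disturbance.
```

Under this reading, a "core and periphery" decomposition concerns where in the system the requisite regulatory variety resides, rather than how component inputs and outputs compose.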
Related papers
- A Systems-Theoretical Formalization of Closed Systems [47.99822253865054]
There is a lack of formalism for some key foundational concepts in systems engineering.
One of the most recently acknowledged deficits is the inadequacy of systems engineering practices for intelligent systems.
arXiv Detail & Related papers (2023-11-16T19:01:01Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parsing framework that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
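The fuzzy-logic relaxation described above can be illustrated with a minimal sketch. This is a generic example of grounding a logical rule via differentiable t-norms, not code from LOGICSEG; the hierarchy rule and probability values are hypothetical:

```python
# Illustrative fuzzy-logic relaxation: a symbolic rule such as
# "cat -> animal" is turned into a differentiable truth value over
# the network's soft predictions, so violations can be penalized
# during training alongside the usual segmentation loss.

def t_norm_and(a: float, b: float) -> float:
    """Product t-norm: fuzzy conjunction of truth values in [0, 1]."""
    return a * b

def s_norm_or(a: float, b: float) -> float:
    """Probabilistic sum: fuzzy disjunction of truth values in [0, 1]."""
    return a + b - a * b

def implies(a: float, b: float) -> float:
    """Relaxed implication: A -> B is rewritten as (not A) or B."""
    return s_norm_or(1.0 - a, b)

def logic_loss(rule_truth: float) -> float:
    """Penalty that is zero when the rule is fully satisfied."""
    return 1.0 - rule_truth

# Hypothetical pixel predictions: p(cat) = 0.2, p(animal) = 0.9.
# The class-hierarchy rule "cat -> animal" is nearly satisfied,
# so the logic loss is small.
p_cat, p_animal = 0.2, 0.9
loss = logic_loss(implies(p_cat, p_animal))
```

Because every operator is smooth in the inputs, the loss backpropagates through the predictions, which is what makes logic-induced network training possible.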
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts in different abstract levels without supervision.
We show that object representations containing the discovered physical concepts variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z) - Core and Periphery as Closed-System Precepts for Engineering General Intelligence [62.997667081978825]
It is unclear if an AI system's inputs will be independent of its outputs, and, therefore, if AI systems can be treated as traditional components.
This paper posits that engineering general intelligence requires new general systems precepts, termed the core and periphery.
arXiv Detail & Related papers (2022-08-04T18:20:25Z) - On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence [10.951424145477633]
We propose a theoretical framework that sheds light on understanding deep networks within a bigger picture of Intelligence in general.
We introduce two fundamental principles, Parsimony and Self-consistency, that we believe to be cornerstones for the emergence of Intelligence.
arXiv Detail & Related papers (2022-07-11T05:06:08Z) - Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z) - Mesarovician Abstract Learning Systems [0.0]
Current approaches to learning hold notions of problem domain and problem task as fundamental precepts.
Mesarovician abstract systems theory is used as a super-structure for learning.
arXiv Detail & Related papers (2021-11-29T18:17:32Z) - The General Theory of General Intelligence: A Pragmatic Patternist Perspective [0.0]
The review covers underlying philosophies, formalizations of the concept of intelligence, and a proposed high-level architecture for AGI systems.
The specifics of human-like cognitive architecture are presented as manifestations of these general principles.
Lessons for practical implementation of advanced AGI in frameworks such as OpenCog Hyperon are briefly considered.
arXiv Detail & Related papers (2021-03-28T10:11:25Z) - Interpretable Reinforcement Learning Inspired by Piaget's Theory of Cognitive Development [1.7778609937758327]
This paper entertains the idea that theories such as the language of thought hypothesis (LOTH), script theory, and Piaget's cognitive development theory provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
arXiv Detail & Related papers (2021-02-01T00:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.