Mesarovician Abstract Learning Systems
- URL: http://arxiv.org/abs/2111.14766v1
- Date: Mon, 29 Nov 2021 18:17:32 GMT
- Title: Mesarovician Abstract Learning Systems
- Authors: Tyler Cody
- Abstract summary: Current approaches to learning hold notions of problem domain and problem task as fundamental precepts.
Mesarovician abstract systems theory is used as a super-structure for learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The solution methods used to realize artificial general intelligence (AGI)
may not contain the formalism needed to adequately model and characterize AGI.
In particular, current approaches to learning hold notions of problem domain
and problem task as fundamental precepts, but it is hardly apparent that an AGI
encountered in the wild will be discernible into a set of domain-task pairings.
Nor is it apparent that the outcomes of AGI in a system can be well expressed
in terms of domain and task, or as consequences thereof. Thus, there is both a
practical and theoretical use for meta-theories of learning which do not
express themselves explicitly in terms of solution methods. General systems
theory offers such a meta-theory. Herein, Mesarovician abstract systems theory
is used as a super-structure for learning. Abstract learning systems are
formulated. Subsequent elaboration stratifies the assumptions of learning
systems into a hierarchy and considers the hierarchy that such stratification
projects onto learning theory. The presented Mesarovician abstract learning
systems theory calls back to the founding motivations of artificial
intelligence research by focusing on the thinking participants directly, in
this case, learning systems, in contrast to the contemporary focus on the
problems thinking participants solve.
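For readers new to the formalism: Mesarovic and Takahara define a general system as a relation on abstract sets. The LaTeX sketch below states that standard definition; the learning-specific decomposition at the end is an illustrative assumption, not the paper's exact construction.

% Mesarovician general system: a relation on abstract component sets;
% in input-output form, a relation between an input set X and an
% output set Y (Mesarovic & Takahara).
\[
  S \subseteq \prod_{i \in I} V_i, \qquad S \subseteq X \times Y.
\]
% Illustrative assumption only: a learning system cast as a relation
% coupling a data set D, a hypothesis set H, and performance values P.
\[
  \mathcal{L} \subseteq D \times H \times P.
\]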
Related papers
- A Mechanistic Explanatory Strategy for XAI [0.0]
This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems.
According to the mechanistic approach, the explanation of opaque AI systems involves identifying mechanisms that drive decision-making.
This research suggests that a systematic approach to studying model organization can reveal elements that simpler (or "more modest") explainability techniques might miss.
arXiv Detail & Related papers (2024-11-02T18:30:32Z)
- Categorical Foundations of Explainable AI: A Unifying Theory [8.637435154170916]
This paper presents the first mathematically rigorous definitions of key XAI notions and processes, using the well-founded formalism of category theory.
We show that our categorical framework allows us to: (i) model existing learning schemes and architectures, (ii) formally define the term "explanation", (iii) establish a theoretical basis for XAI, and (iv) analyze commonly overlooked aspects of explanation methods.
arXiv Detail & Related papers (2023-04-27T11:10:16Z)
- Core and Periphery as Closed-System Precepts for Engineering General Intelligence [62.997667081978825]
It is unclear whether an AI system's inputs will be independent of its outputs and, therefore, whether AI systems can be treated as traditional components.
This paper posits that engineering general intelligence requires new general systems precepts, termed the core and periphery.
arXiv Detail & Related papers (2022-08-04T18:20:25Z)
- Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z)
- The General Theory of General Intelligence: A Pragmatic Patternist Perspective [0.0]
The review covers underlying philosophies, formalizations of the concept of intelligence, and a proposed high-level architecture for AGI systems.
The specifics of human-like cognitive architecture are presented as manifestations of these general principles.
Lessons for practical implementation of advanced AGI in frameworks such as OpenCog Hyperon are briefly considered.
arXiv Detail & Related papers (2021-03-28T10:11:25Z)
- Interpretable Reinforcement Learning Inspired by Piaget's Theory of Cognitive Development [1.7778609937758327]
This paper entertains the idea that theories such as the language of thought hypothesis (LOTH), script theory, and Piaget's theory of cognitive development provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
arXiv Detail & Related papers (2021-02-01T00:29:01Z)
- Self-organizing Democratized Learning: Towards Large-scale Distributed Learning Systems [71.14339738190202]
Democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems.
Inspired by the Dem-AI philosophy, a novel distributed learning approach is proposed in this paper.
The proposed algorithms achieve better generalization performance of the agents' learning models than conventional federated learning (FL) algorithms.
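As a reading aid, the hierarchical self-organization described above suggests a two-level model-averaging scheme: average within each specialized group, then across groups. The Python sketch below is a minimal, hypothetical illustration in that spirit; the function names, sample-count weighting, and two-level structure are assumptions, not the Dem-AI algorithms.

import numpy as np

def weighted_average(models, weights):
    # Weighted average of same-shaped parameter vectors.
    w = np.asarray(weights, dtype=float)
    return sum(wi * m for wi, m in zip(w / w.sum(), models))

def hierarchical_average(groups):
    # groups: a list of groups; each group is a list of (params, n_samples).
    group_models, group_sizes = [], []
    for group in groups:
        params = [p for p, _ in group]
        sizes = [n for _, n in group]
        group_models.append(weighted_average(params, sizes))  # intra-group step
        group_sizes.append(sum(sizes))
    return weighted_average(group_models, group_sizes)  # inter-group step

# Two specialized groups of agents holding 3-parameter models.
g1 = [(np.array([1.0, 0.0, 2.0]), 100), (np.array([0.8, 0.2, 1.8]), 50)]
g2 = [(np.array([0.1, 1.0, 0.5]), 200)]
print(hierarchical_average([g1, g2]))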
arXiv Detail & Related papers (2020-07-07T08:34:48Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning progress of deep neural networks.
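As an illustration of that idea, one common way to let a knowledge base guide a neural network is to add a soft penalty for predictions that violate its rules. The sketch below is a hypothetical example of such a constraint-regularized loss, not the architecture proposed in the paper; the rule format and the weighting factor lam are assumptions.

import torch
import torch.nn.functional as F

def knowledge_penalty(probs, implies):
    # Soft penalty for violating rules "label a implies label b":
    # the violation max(0, p_a - p_b) is averaged over the batch.
    return sum(torch.relu(probs[:, a] - probs[:, b]).mean() for a, b in implies)

def neuro_symbolic_loss(logits, targets, implies, lam=0.1):
    # Data-driven term (multi-label cross-entropy) plus a knowledge-driven
    # term that nudges predictions toward consistency with the rules.
    data_loss = F.binary_cross_entropy_with_logits(logits, targets)
    return data_loss + lam * knowledge_penalty(torch.sigmoid(logits), implies)

# Example: rule "label 0 implies label 2" on a batch of 4 examples, 3 labels.
logits = torch.randn(4, 3)
targets = torch.randint(0, 2, (4, 3)).float()
print(neuro_symbolic_loss(logits, targets, implies=[(0, 2)]))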
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)