Formalization of the principles of brain Programming (Brain Principles
Programming)
- URL: http://arxiv.org/abs/2206.03487v1
- Date: Fri, 13 May 2022 13:16:34 GMT
- Title: Formalization of the principles of brain Programming (Brain Principles
Programming)
- Authors: E.E. Vityaev, A.G. Kolonin, A.A. Molchanov
- Abstract summary: The monograph "Strong artificial intelligence. On the Approaches to Superintelligence" contains an overview of artificial general intelligence (AGI).
Brain Principles Programming (BPP) is the formalization of universal mechanisms (principles) of the brain's work with information.
The paper uses mathematical models and algorithms of the following theories: P.K. Anokhin's Theory of Functional Brain Systems, Eleanor Rosch's prototype theory of categorization, Bob Rehder's theory of causal models, and "natural" classification.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The monograph "Strong artificial intelligence. On the Approaches to
Superintelligence" contains an overview of artificial general intelligence
(AGI). As an anthropomorphic research area, it includes Brain Principles
Programming (BPP) -- the formalization of universal mechanisms (principles) of
the brain's work with information, which are implemented at all levels of the
organization of nervous tissue. The monograph formalizes these principles in
terms of category theory. However, this formalization is not sufficient to
develop algorithms for working with information. In this paper, to describe
and model BPP, we propose to apply previously developed mathematical models
and algorithms that model cognitive functions and are based on well-known
physiological, psychological, and other natural science theories. The paper
uses mathematical models and algorithms of the following theories: P.K.
Anokhin's Theory of Functional Brain Systems, Eleanor Rosch's prototype theory
of categorization, Bob Rehder's theory of causal models, and "natural"
classification. As a result, a formalization of BPP is obtained, and computer
experiments demonstrating the operation of the algorithms are presented.
Related papers
- Topological Representational Similarity Analysis in Brains and Beyond [15.417809900388262]
This thesis introduces Topological RSA (tRSA), a novel framework combining geometric and topological properties of neural representations.
tRSA applies nonlinear monotonic transforms to representational dissimilarities, emphasizing local topology while retaining intermediate-scale geometry.
The resulting geo-topological matrices enable model comparisons robust to noise and individual idiosyncrasies.
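The monotonic-transform idea described above can be sketched in a few lines; the specific `tanh` squashing and its `scale` parameter are illustrative assumptions, not the exact geo-topological transform family from the thesis.

```python
import numpy as np

def geo_topo_transform(rdm, scale=1.0):
    """Apply a nonlinear monotonic transform to a representational
    dissimilarity matrix (RDM): rank order of dissimilarities is
    preserved, but large distances saturate, emphasizing local
    topology over global geometry. The tanh form is an assumption
    chosen for illustration."""
    rdm = np.asarray(rdm, dtype=float)
    return np.tanh(rdm / scale)  # monotonic, bounded in [0, 1) for rdm >= 0
```

Any strictly increasing, saturating function would serve the same illustrative purpose; the point is that rank order (and hence local neighborhood structure) survives while large-scale metric detail is compressed.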
arXiv Detail & Related papers (2024-08-21T19:02:00Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in neural terms.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN is a proposed graph convolutional network that draws on knowledge from large-scale language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z)
- Brain Principles Programming [0.3867363075280543]
Brain Principles Programming, BPP, is the formalization of universal mechanisms (principles) of the brain's work with information.
The paper uses mathematical models and algorithms of the following theories.
arXiv Detail & Related papers (2022-02-13T13:41:44Z)
- A Mathematical Approach to Constraining Neural Abstraction and the Mechanisms Needed to Scale to Higher-Order Cognition [0.0]
Artificial intelligence has made great strides in the last decade but still falls short of the human brain, the best-known example of intelligence.
Not much is known of the neural processes that allow the brain to make the leap to achieve so much from so little.
This paper proposes a mathematical approach using graph theory and spectral graph theory to hypothesize how to constrain neural clusters of information.
arXiv Detail & Related papers (2021-08-12T02:13:22Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
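An SCM of the kind described above can be sketched minimally as follows; the variables and mechanisms (`X := U_X`, `Y := 2X + U_Y`) are illustrative toy choices, not taken from the paper.

```python
import random

def sample_scm(seed=None):
    """Observational sample: each endogenous variable is computed by a
    deterministic mechanism from its parents plus an exogenous noise
    source (a source of random variation)."""
    rng = random.Random(seed)
    u_x = rng.gauss(0.0, 1.0)  # exogenous noise for X
    u_y = rng.gauss(0.0, 1.0)  # exogenous noise for Y
    x = u_x                    # mechanism f_X: X := U_X
    y = 2.0 * x + u_y          # mechanism f_Y: Y := 2X + U_Y
    return {"X": x, "Y": y}

def sample_intervened(x_value, seed=None):
    """Interventional sample under do(X = x_value): the mechanism f_X is
    replaced by the constant x_value, while f_Y is left intact."""
    rng = random.Random(seed)
    rng.gauss(0.0, 1.0)        # consume U_X's draw (X's mechanism is cut)
    u_y = rng.gauss(0.0, 1.0)
    return {"X": x_value, "Y": 2.0 * x_value + u_y}
```

The distinction between the two samplers mirrors the observational and interventional levels of the causal hierarchy that the paper's theorem concerns: an intervention surgically replaces one mechanism while preserving the others.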
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Applications of the Free Energy Principle to Machine Learning and Neuroscience [0.0]
We explore and apply methods inspired by the free energy principle to two important areas in machine learning and neuroscience.
We focus on predictive coding, a neurobiologically plausible process theory derived from the free energy principle.
Secondly, we study active inference, a neurobiologically grounded account of action through variational message passing.
Finally, we investigate biologically plausible methods of credit assignment in the brain.
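The predictive-coding idea mentioned above can be illustrated with a toy inference loop: a latent estimate `mu` is refined by gradient descent on the squared prediction error between an observation `y` and the prediction `g(mu)`. The linear mapping `g`, the names, and the learning rate are illustrative assumptions, not the thesis's notation.

```python
def g(mu):
    """An assumed fixed linear generative mapping from latent to observation."""
    return 2.0 * mu

def infer(y, mu0=0.0, lr=0.05, steps=200):
    """Refine the latent estimate mu by descending the squared
    prediction error E = (y - g(mu))**2, the core predictive-coding
    update in this toy setting."""
    mu = mu0
    for _ in range(steps):
        eps = y - g(mu)          # prediction error
        grad = -2.0 * eps * 2.0  # dE/dmu, with g'(mu) = 2
        mu -= lr * grad          # gradient-descent step on the error
    return mu
```

For this linear `g`, the loop converges to the latent value whose prediction matches the observation (here `mu = y / 2`), which is the fixed point at which the prediction error vanishes.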
arXiv Detail & Related papers (2021-06-30T22:53:03Z)
- Interpretable Reinforcement Learning Inspired by Piaget's Theory of Cognitive Development [1.7778609937758327]
This paper entertains the idea that theories such as the language of thought hypothesis (LOTH), script theory, and Piaget's theory of cognitive development provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
arXiv Detail & Related papers (2021-02-01T00:29:01Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.