Computational principles of intelligence: learning and reasoning with
neural networks
- URL: http://arxiv.org/abs/2012.09477v1
- Date: Thu, 17 Dec 2020 10:03:26 GMT
- Title: Computational principles of intelligence: learning and reasoning with
neural networks
- Authors: Abel Torres Montoya
- Abstract summary: This work proposes a novel framework of intelligence based on three principles.
First, the generative and mirroring nature of learned representations of inputs.
Second, a grounded, intrinsically motivated and iterative process for learning, problem solving and imagination.
Third, an ad hoc tuning of the reasoning mechanism over causal compositional representations using inhibition rules.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite significant achievements and current interest in machine learning and
artificial intelligence, the quest for a theory of intelligence that allows
general and efficient problem solving has made little progress. This work
aims to contribute in this direction by proposing a novel framework of
intelligence based on three principles. First, the generative and mirroring
nature of learned representations of inputs. Second, a grounded, intrinsically
motivated and iterative process for learning, problem solving and imagination.
Third, an ad hoc tuning of the reasoning mechanism over causal compositional
representations using inhibition rules. Together, these principles create a
systems approach offering interpretability, continuous learning, common sense
and more. This framework is being developed from three perspectives: as a
general problem-solving method, as a human-oriented tool and, finally, as a
model of information processing in the brain.
Related papers
- Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Learning principle and mathematical realization of the learning mechanism in
the brain [0.0]
We call this the learning principle: all learning is equivalent to estimating the probability of input data.
We show that conventional supervised learning is equivalent to estimating conditional probabilities, and use this to make supervised learning more effective and more general.
We propose a new method of defining estimated probability values using differentiation, and show that unsupervised learning can be performed on an arbitrary dataset without any prior knowledge.
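The stated equivalence between supervised learning and conditional probability estimation can be illustrated by the standard maximum-likelihood view (a generic sketch in conventional notation, not the paper's own formalism):

```latex
% Training a model p_theta(y | x) by minimizing average cross-entropy
% over N labeled examples is the same as maximizing the likelihood
% of the labels given the inputs:
\hat{\theta}
  = \arg\min_{\theta} \; -\frac{1}{N} \sum_{i=1}^{N} \log p_{\theta}(y_i \mid x_i)
  = \arg\max_{\theta} \; \prod_{i=1}^{N} p_{\theta}(y_i \mid x_i)
```

so a network trained this way is, in effect, an estimator of the conditional probability \(p(y \mid x)\).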
arXiv Detail & Related papers (2023-11-22T12:08:01Z)
- Non-equilibrium physics: from spin glasses to machine and neural learning [0.0]
Disordered many-body systems exhibit a wide range of emergent phenomena across different scales.
We aim to characterize such emergent intelligence in disordered systems through statistical physics.
We uncover relationships between learning mechanisms and physical dynamics that could serve as guiding principles for designing intelligent systems.
arXiv Detail & Related papers (2023-08-03T04:56:47Z)
- Learning by Applying: A General Framework for Mathematical Reasoning via
Enhancing Explicit Knowledge Learning [47.96987739801807]
We propose Learning by Applying (LeAp), a framework to enhance existing models (backbones) in a principled way through explicit knowledge learning.
In LeAp, we perform knowledge learning in a novel problem-knowledge-expression paradigm.
We show that LeAp improves all backbones' performances, learns accurate knowledge, and achieves a more interpretable reasoning process.
arXiv Detail & Related papers (2023-02-11T15:15:41Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Towards Benchmarking Explainable Artificial Intelligence Methods [0.0]
We use philosophy of science theories as an analytical lens to reveal what can, and more importantly cannot, be expected from methods that aim to explain decisions made by a neural network.
In a case study, we investigate the performance of a selection of explainability methods over two mundane domains, animals and headgear.
We lay bare that the usefulness of these methods relies on human domain knowledge and our ability to understand, generalise and reason.
arXiv Detail & Related papers (2022-08-25T14:28:30Z)
- A World-Self Model Towards Understanding Intelligence [0.0]
We compare human and artificial intelligence, and propose that a certain aspect of human intelligence is the key to connecting perception and cognition.
We present the broader idea of "concept", the principles and mathematical frameworks of the new World-Self Model (WSM) of intelligence, and finally a unified general framework of intelligence based on WSM.
arXiv Detail & Related papers (2022-03-25T16:42:23Z)
- HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem
Solving [104.79156980475686]
Humans learn compositional and causal abstraction, i.e., knowledge, in response to the structure of naturalistic tasks.
We argue that an agent should represent its knowledge at three levels of generalization: perceptual, conceptual, and algorithmic.
We introduce a benchmark centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving.
arXiv Detail & Related papers (2021-02-22T20:37:01Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.