Recursion, evolution and conscious self
- URL: http://arxiv.org/abs/2001.11825v4
- Date: Mon, 24 Apr 2023 14:59:21 GMT
- Title: Recursion, evolution and conscious self
- Authors: A.D. Arvanitakis
- Abstract summary: We study a learning theory that is roughly automatic, in that it requires only a minimum of initial programming.
The conclusions agree with scientific findings in both biology and neuroscience.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce and study a learning theory that is roughly automatic, in that
it requires only a minimum of initial programming, and is based on the
potential computational phenomenon of self-reference (i.e., the potential
ability of an algorithm to take its own program as input).
The conclusions agree with scientific findings in both biology and
neuroscience, and provide a wealth of explanations both for evolution (in
conjunction with Darwinism) and for the functionality and learning
capabilities of the human brain, most importantly as we perceive them in
ourselves.
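The self-reference the abstract appeals to can be made concrete with a quine-style construction, in the spirit of Kleene's recursion theorem. The sketch below is our own illustration, not the paper's construction: a hypothetical function `run_on_self` that builds its own source text and hands it to an arbitrary transformation, so the program literally receives its own listing as input.

```python
# Sketch (illustrative, not from the paper): a program that treats its own
# text as data, mimicking an algorithm that takes its own program as input.
def run_on_self(transform):
    # 'src' is a template that contains itself via %r substitution (the
    # classic quine trick); 'src % src' reproduces the function's own source.
    src = 'def run_on_self(transform):\n    src = %r\n    return transform(src %% src)\n'
    return transform(src % src)

# Example: apply a transformation (here, length) to the program's own source.
length = run_on_self(len)
```

Any callable can be passed as `transform`; for instance, `run_on_self(lambda s: s)` simply returns the program's own listing as a string.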
Related papers
- Machine learning and information theory concepts towards an AI
Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z) - A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z) - Evolution-Bootstrapped Simulation: Artificial or Human Intelligence:
Which Came First? [0.9790236766474201]
In a world driven by evolution by natural selection, would neural networks or humans be likely to evolve first?
We find neural networks to be significantly simpler than humans.
Neural networks do not require any complex human-made equipment in order to exist.
arXiv Detail & Related papers (2024-01-06T21:06:58Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian
Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Applications of the Free Energy Principle to Machine Learning and
Neuroscience [0.0]
We explore and apply methods inspired by the free energy principle to two important areas in machine learning and neuroscience.
We focus on predictive coding, a neurobiologically plausible process theory derived from the free energy principle.
Secondly, we study active inference, a neurobiologically grounded account of action through variational message passing.
Finally, we investigate biologically plausible methods of credit assignment in the brain.
arXiv Detail & Related papers (2021-06-30T22:53:03Z) - Applying Deutsch's concept of good explanations to artificial
intelligence and neuroscience -- an initial exploration [0.0]
We investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning.
We examine what role hard-to-vary explanations play in intelligence by looking at the human brain.
arXiv Detail & Related papers (2020-12-16T23:23:22Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Learning a Deep Generative Model like a Program: the Free Category Prior [2.088583843514496]
We show how our formalism allows neural networks to serve as primitives in probabilistic programs.
arXiv Detail & Related papers (2020-11-22T17:16:17Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.