Evolution-Bootstrapped Simulation: Artificial or Human Intelligence: Which Came First?
- URL: http://arxiv.org/abs/2402.00030v1
- Date: Sat, 6 Jan 2024 21:06:58 GMT
- Title: Evolution-Bootstrapped Simulation: Artificial or Human Intelligence: Which Came First?
- Authors: Paul Alexander Bilokon
- Abstract summary: In a world driven by evolution by natural selection, would neural networks or humans be likely to evolve first?
We find neural networks to be significantly simpler than humans.
It is unnecessary for any complex human-made equipment to exist for there to be neural networks.
- Score: 0.9790236766474201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans have created artificial intelligence (AI), not the other way around.
This statement is deceptively obvious. In this note, we decided to challenge
this statement as a small, lighthearted Gedankenexperiment. We ask a simple
question: in a world driven by evolution by natural selection, would neural
networks or humans be likely to evolve first? We compare the
Solomonoff--Kolmogorov--Chaitin complexity of the two and find neural networks
(even LLMs) to be significantly simpler than humans. Further, we claim that it
is unnecessary for any complex human-made equipment to exist for there to be
neural networks. Neural networks may have evolved as naturally occurring
objects before humans did as a form of chemical reaction-based or enzyme-based
computation. Now that we know that neural networks can pass the Turing test and
suspect that they may be capable of superintelligence, we ask whether the
natural evolution of neural networks could lead from pure evolution by natural
selection to what we call evolution-bootstrapped simulation. The evolution of
neural networks does not involve irreducible complexity; would easily allow
irreducible complexity to exist in the evolution-bootstrapped simulation; is a
falsifiable scientific hypothesis; and is independent of / orthogonal to the
issue of intelligent design.
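Kolmogorov complexity itself is uncomputable, but the comparison the abstract draws can be illustrated with a standard proxy: the length of a compressed description is an upper bound on an object's Solomonoff--Kolmogorov--Chaitin complexity. The sketch below uses `zlib` for this; the `nn_spec` and `human_spec` byte strings are hypothetical stand-ins for illustration, not actual network weights or genome data.

```python
import zlib

def compression_complexity(data: bytes) -> int:
    """Upper bound on Solomonoff--Kolmogorov--Chaitin complexity:
    the length of a zlib-compressed description of the object."""
    return len(zlib.compress(data, 9))

# Hypothetical stand-ins for illustration only.
nn_spec = b"layer:dense;units:16;act:relu;" * 4   # short, highly regular description
human_spec = bytes(range(256)) * 16               # longer, less regular byte sequence

print(compression_complexity(nn_spec))     # small: the description is repetitive
print(compression_complexity(human_spec))
```

Compression gives only an upper bound, but the direction of the comparison is the point: whichever description compresses to fewer bits has lower estimated complexity, which is the sense in which the paper argues neural networks (even LLMs) are simpler than humans.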
Related papers
- A theory of neural emulators [0.0]
A central goal in neuroscience is to provide explanations for how animal nervous systems can generate actions and cognitive states such as consciousness.
We propose emulator theory (ET) and neural emulators as circuit- and scale-independent predictive models of biological brain activity.
arXiv Detail & Related papers (2024-05-22T07:12:03Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- The Nature of Intelligence [0.0]
The essence of intelligence commonly represented by both humans and AI is unknown.
We show that the nature of intelligence is a series of mathematically functional processes that minimize system entropy.
This essay should be a starting point for a deeper understanding of the universe and us as human beings.
arXiv Detail & Related papers (2023-07-20T23:11:59Z)
- Genes in Intelligent Agents [45.93363823594323]
Animals are born with some intelligence encoded in their genes, but machines lack such intelligence and learn from scratch.
Inspired by the genes of animals, we define the "genes" of machines, named "learngenes", and propose Genetic Reinforcement Learning (GRL).
GRL is a computational framework that simulates the evolution of organisms in reinforcement learning (RL) and leverages the learngenes to learn and evolve intelligent agents.
arXiv Detail & Related papers (2023-06-17T01:24:11Z)
- Towards the Neuroevolution of Low-level Artificial General Intelligence [5.2611228017034435]
We argue that the search for Artificial General Intelligence (AGI) should start from a much lower level than human-level intelligence.
Our hypothesis is that learning occurs through sensory feedback when an agent acts in an environment.
We evaluate a method to evolve a biologically-inspired artificial neural network that learns from environment reactions.
arXiv Detail & Related papers (2022-07-27T15:30:50Z)
- Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Embodied Intelligence via Learning and Evolution [92.26791530545479]
We show that environmental complexity fosters the evolution of morphological intelligence.
We also show that evolution rapidly selects morphologies that learn faster.
Our experiments suggest a mechanistic basis for both the Baldwin effect and the emergence of morphological intelligence.
arXiv Detail & Related papers (2021-02-03T18:58:31Z)
- Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience -- an initial exploration [0.0]
We investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning.
We look at what role hard-to-vary explanations play in intelligence by looking at the human brain.
arXiv Detail & Related papers (2020-12-16T23:23:22Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Recursion, evolution and conscious self [0.0]
We study a learning theory that is largely automatic, that is, it requires only a minimum of initial programming.
The conclusions agree with scientific findings in both biology and neuroscience.
arXiv Detail & Related papers (2020-01-14T11:04:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.