Towards a Theory of Evolution as Multilevel Learning
- URL: http://arxiv.org/abs/2110.14602v1
- Date: Wed, 27 Oct 2021 17:21:16 GMT
- Title: Towards a Theory of Evolution as Multilevel Learning
- Authors: Vitaly Vanchurin, Yuri I. Wolf, Mikhail I. Katsnelson, Eugene V.
Koonin
- Abstract summary: We apply the theory of learning to physically renormalizable systems in an attempt to develop a theory of biological evolution, including the origin of life, as multilevel learning.
We formulate seven fundamental principles of evolution that appear to be necessary and sufficient to render a universe observable.
We show that these principles entail the major features of biological evolution, including replication and natural selection.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We apply the theory of learning to physically renormalizable systems in an
attempt to develop a theory of biological evolution, including the origin of
life, as multilevel learning. We formulate seven fundamental principles of
evolution that appear to be necessary and sufficient to render a universe
observable and show that they entail the major features of biological
evolution, including replication and natural selection. These principles also
follow naturally from the theory of learning. We formulate the theory of
evolution using the mathematical framework of neural networks, which provides
for detailed analysis of evolutionary phenomena. To demonstrate the potential
of the proposed theoretical framework, we derive a generalized version of the
Central Dogma of molecular biology by analyzing the flow of information during
learning (back-propagation) and predicting (forward-propagation) the
environment by evolving organisms. The more complex evolutionary phenomena,
such as major transitions in evolution, in particular, the origin of life, have
to be analyzed in the thermodynamic limit, which is described in detail in the
accompanying paper.
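The mapping the abstract draws between prediction (forward-propagation) and learning (back-propagation) can be illustrated with a toy network. The following is a minimal sketch, not taken from the paper: a two-layer network plays the role of an "organism", its forward pass predicts an environmental signal, and its backward pass carries the error information that slowly adjusts the trainable ("heritable") variables. All names, sizes, and the quadratic loss are illustrative assumptions.
```python
# Minimal sketch (assumed, not from the paper): forward pass = predicting the
# environment, backward pass = the slower information flow that updates the
# trainable ("heritable") variables. Sizes, names, and the quadratic loss are
# illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Trainable (slow) variables of a tiny two-layer network.
W1 = rng.normal(scale=0.1, size=(8, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(4, 1))   # hidden -> output weights

def forward(x):
    """Forward-propagation: fast prediction of an environmental signal."""
    h = np.tanh(x @ W1)      # transient internal state
    return h, h @ W2         # prediction

def backward(x, y, lr=0.05):
    """Back-propagation: push prediction error back into the slow variables."""
    global W1, W2
    h, y_hat = forward(x)
    err = y_hat - y                        # prediction error (quadratic loss)
    grad_W2 = h.T @ err / len(x)
    grad_h = (err @ W2.T) * (1.0 - h**2)   # tanh derivative
    grad_W1 = x.T @ grad_h / len(x)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return float(np.mean(err**2))

# Toy "environment": a fixed nonlinear mapping the network learns to predict.
x_env = rng.normal(size=(64, 8))
y_env = np.sin(x_env @ rng.normal(size=(8, 1)))

for _ in range(500):
    loss = backward(x_env, y_env)
print(f"final prediction error: {loss:.4f}")
```
In this sketch, only the backward pass changes W1 and W2; the generalized Central Dogma discussed in the abstract concerns the corresponding asymmetry in how information flows between fast (predictive) and slow (heritable) variables, though the paper's actual formalization is richer than this toy example.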
Related papers
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z) - LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery [141.39722070734737]
We propose to enhance the knowledge-driven, abstract reasoning abilities of Large Language Models with the computational strength of simulations.
We introduce Scientific Generative Agent (SGA), a bilevel optimization framework.
We conduct experiments to demonstrate our framework's efficacy in law discovery and molecular design.
arXiv Detail & Related papers (2024-05-16T03:04:10Z) - The Origin and Evolution of Information Handling [0.6963971634605796]
We explain how information control emerged ab initio and how primitive control mechanisms in life might have evolved, becoming increasingly refined.
By describing precisely the primordial transitions in chemistry-based computation, our framework is capable of explaining the above-mentioned gaps.
Being compatible with the free energy principle, the computational enactivist theoretical framework we have developed can describe phenomena ranging from the origin of life to high-level cognition.
arXiv Detail & Related papers (2024-04-05T19:35:38Z) - When large language models meet evolutionary algorithms [48.213640761641926]
Pre-trained large language models (LLMs) have powerful capabilities for generating creative natural text.
Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems.
Motivated by the common collective and directional nature of text generation and evolution, this paper illustrates the parallels between LLMs and EAs.
arXiv Detail & Related papers (2024-01-19T05:58:30Z) - Role of Morphogenetic Competency on Evolution [0.0]
In Evolutionary Computation, the inverse relationship (the impact of intelligence on evolution) is approached from the perspective of organism-level behaviour.
We focus on the intelligence of a minimal model of a system navigating anatomical morphospace.
We evolve populations of artificial embryos using a standard genetic algorithm in silico.
arXiv Detail & Related papers (2023-10-13T11:58:18Z) - The Evolution theory of Learning: From Natural Selection to
Reinforcement Learning [0.0]
Reinforcement learning is a powerful tool used in artificial intelligence to develop intelligent agents that learn from their environment.
In recent years, researchers have explored the connections between these two seemingly distinct fields, and have found compelling evidence that they are more closely related than previously thought.
This paper examines these connections and their implications, highlighting the potential for reinforcement learning principles to enhance our understanding of evolution and the role of feedback in evolutionary systems.
arXiv Detail & Related papers (2023-06-16T16:44:14Z) - Towards Predicting Equilibrium Distributions for Molecular Systems with
Deep Learning [60.02391969049972]
We introduce a novel deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems.
DiG employs deep neural networks to transform a simple distribution towards the equilibrium distribution, conditioned on a descriptor of a molecular system.
arXiv Detail & Related papers (2023-06-08T17:12:08Z) - Evolved Open-Endedness in Cultural Evolution: A New Dimension in
Open-Ended Evolution Research [0.0]
We argue that cultural evolution should be seen as another real-world example of an open-ended evolutionary system.
We provide an overview of culture as an evolutionary system and highlight the interesting case of human cultural evolution as an open-ended evolutionary system.
arXiv Detail & Related papers (2022-03-24T12:55:23Z) - Complex Evolutional Pattern Learning for Temporal Knowledge Graph
Reasoning [60.94357727688448]
Temporal knowledge graph (TKG) reasoning aims to predict potential future facts given historical KG sequences.
These evolutional patterns are complex in two respects: length diversity and time variability.
We propose a new model, called Complex Evolutional Network (CEN), which uses a length-aware Convolutional Neural Network (CNN) to handle evolutional patterns of different lengths.
arXiv Detail & Related papers (2022-03-15T11:02:55Z) - Thermodynamics of Evolution and the Origin of Life [0.0]
We outline a phenomenological theory of evolution and origin of life by combining the formalism of classical thermodynamics with a statistical description of learning.
We show that, within this thermodynamic framework, major transitions in evolution, such as the transition from an ensemble of molecules to an ensemble of organisms (that is, the origin of life), can be modeled.
arXiv Detail & Related papers (2021-10-28T12:27:33Z) - Embodied Intelligence via Learning and Evolution [92.26791530545479]
We show that environmental complexity fosters the evolution of morphological intelligence.
We also show that evolution rapidly selects morphologies that learn faster.
Our experiments suggest a mechanistic basis for both the Baldwin effect and the emergence of morphological intelligence.
arXiv Detail & Related papers (2021-02-03T18:58:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.