Mimicking Evolution with Reinforcement Learning
- URL: http://arxiv.org/abs/2004.00048v2
- Date: Wed, 6 May 2020 16:08:43 GMT
- Title: Mimicking Evolution with Reinforcement Learning
- Authors: João P. Abrantes, Arnaldo J. Abrantes, Frans A. Oliehoek
- Abstract summary: We argue that the path to developing artificial human-like intelligence will pass through mimicking the evolutionary process in a nature-like simulation.
This work proposes Evolution via Evolutionary Reward (EvER), which allows learning to single-handedly drive the search for policies of increasing evolutionary fitness.
- Score: 10.35437633064506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evolution gave rise to human and animal intelligence here on Earth. We argue
that the path to developing artificial human-like intelligence will pass
through mimicking the evolutionary process in a nature-like simulation. In
Nature, there are two processes driving the development of the brain: evolution
and learning. Evolution acts slowly, across generations, and amongst other
things, it defines what agents learn by changing their internal reward
function. Learning acts fast, across one's lifetime, and it quickly updates
agents' policies to maximise pleasure and minimise pain. The reward function is
slowly aligned with the fitness function by evolution; however, as agents
evolve, the environment and its fitness function also change, increasing the
misalignment between reward and fitness. It is extremely computationally
expensive to replicate these two processes in simulation. This work proposes
Evolution via Evolutionary Reward (EvER), which allows learning to
single-handedly drive the search for policies of increasing evolutionary
fitness by ensuring the alignment of the reward function with the fitness
function. In this search, EvER makes use of the whole state-action trajectories
that agents go through during their lifetimes. In contrast, current evolutionary
algorithms discard this information and consequently limit their potential
efficiency at tackling sequential decision problems. We test our algorithm in
two simple bio-inspired environments and show that, compared with a
state-of-the-art evolutionary algorithm, it generates agents that are more
capable of surviving and passing on their genes.
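To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation) contrasting a classic evolutionary algorithm, which compresses each lifetime into a single scalar fitness, with an EvER-style update that feeds the same fitness signal back through every state-action pair of the lifetime trajectory via a REINFORCE-style policy gradient. The toy environment, the reproduction rule, and all names (rollout, ea_score, ever_style_update) are illustrative assumptions.

```python
# Hypothetical sketch, not the paper's EvER implementation: contrast a classic
# evolutionary algorithm, which keeps only a scalar fitness per lifetime, with
# an EvER-style update that credits every (state, action) step of the lifetime
# trajectory with a reward aligned to that same fitness signal.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 4, 2

def rollout(policy_logits, horizon=20):
    """Simulate one lifetime in a toy world; 'fitness' counts illustrative
    reproduction events (taking action 1 in state 3)."""
    traj, fitness, s = [], 0, 0
    for _ in range(horizon):
        z = policy_logits[s] - policy_logits[s].max()
        p = np.exp(z); p /= p.sum()                  # softmax policy
        a = rng.choice(N_ACTIONS, p=p)
        traj.append((s, a))
        fitness += int(s == 3 and a == 1)
        s = (s + a + 1) % N_STATES                   # toy deterministic dynamics
    return traj, fitness

def ea_score(policy_logits):
    """Classic EA view: the trajectory is discarded, only a scalar survives."""
    _, fitness = rollout(policy_logits)
    return fitness

def ever_style_update(policy_logits, lr=0.1):
    """EvER-style view: reuse the whole lifetime trajectory, crediting every
    step with a fitness-aligned reward via a REINFORCE-style gradient."""
    traj, fitness = rollout(policy_logits)
    for s, a in traj:
        z = policy_logits[s] - policy_logits[s].max()
        p = np.exp(z); p /= p.sum()
        grad = -p
        grad[a] += 1.0                               # d/dlogits of log pi(a|s)
        policy_logits[s] += lr * fitness * grad
    return policy_logits, fitness

policy = np.zeros((N_STATES, N_ACTIONS))
print("EA-style scalar fitness before learning:", ea_score(policy))
for _ in range(200):
    policy, fit = ever_style_update(policy)
print("lifetime fitness after EvER-style updates:", fit)
```

In this sketch the per-step credit is simply the raw lifetime fitness, with no baseline or discounting; the paper's actual reward construction and its bio-inspired environments are more elaborate and are described in the abstract above and in the paper itself.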
Related papers
- Evolutionary Automata and Deep Evolutionary Computation [0.38073142980732994]
An evolutionary automaton is an automaton that evolves by performing evolutionary computation, possibly over an infinite number of generations.
This also hints at the power of natural evolution, which is self-evolving through interactive feedback with the environment.
arXiv Detail & Related papers (2024-11-22T15:31:50Z) - DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary
Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z) - Role of Morphogenetic Competency on Evolution [0.0]
In Evolutionary Computation, the inverse relationship (the impact of intelligence on evolution) is approached from the perspective of organism-level behaviour.
We focus on the intelligence of a minimal model of a system navigating anatomical morphospace.
We evolve populations of artificial embryos using a standard genetic algorithm in silico.
arXiv Detail & Related papers (2023-10-13T11:58:18Z) - Lamarck's Revenge: Inheritance of Learned Traits Can Make Robot
Evolution Better [2.884244918665901]
We investigate the question "What if the 18th-century biologist Lamarck was not completely wrong and individual traits learned during a lifetime could be passed on to offspring through inheritance?"
Within this framework, we compare a Lamarckian system, where learned bits of the brain are inheritable, with a Darwinian system, where they are not.
arXiv Detail & Related papers (2023-09-22T15:29:15Z) - Learning Goal-based Movement via Motivational-based Models in Cognitive
Mobile Robots [58.720142291102135]
Humans have needs motivating their behavior according to intensity and context.
We also create preferences associated with each action's perceived pleasure, which is susceptible to changes over time.
This makes decision-making more complex, requiring learning to balance needs and preferences according to the context.
arXiv Detail & Related papers (2023-02-20T04:52:24Z) - The Introspective Agent: Interdependence of Strategy, Physiology, and
Sensing for Embodied Agents [51.94554095091305]
We argue for an introspective agent, which considers its own abilities in the context of its environment.
Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.
arXiv Detail & Related papers (2022-01-02T20:14:01Z) - Epigenetic opportunities for Evolutionary Computation [0.0]
Evolutionary Computation is a group of biologically inspired algorithms used to solve complex optimisation problems.
It can be split into Evolutionary Algorithms, which take inspiration from genetic inheritance, and Swarm Intelligence algorithms, which take inspiration from cultural inheritance.
This paper breaks down successful bio-inspired algorithms under a contemporary biological framework based on the Extended Evolutionary Synthesis.
arXiv Detail & Related papers (2021-08-10T09:44:53Z) - Task-Agnostic Morphology Evolution [94.97384298872286]
Current approaches that co-adapt morphology and behavior use a specific task's reward as a signal for morphology optimization.
This often requires expensive policy optimization and results in task-dependent morphologies that are not built to generalize.
We propose a new approach, Task-Agnostic Morphology Evolution (TAME), to alleviate both of these issues.
arXiv Detail & Related papers (2021-02-25T18:59:21Z) - Embodied Intelligence via Learning and Evolution [92.26791530545479]
We show that environmental complexity fosters the evolution of morphological intelligence.
We also show that evolution rapidly selects morphologies that learn faster.
Our experiments suggest a mechanistic basis for both the Baldwin effect and the emergence of morphological intelligence.
arXiv Detail & Related papers (2021-02-03T18:58:31Z) - Evolving the Behavior of Machines: From Micro to Macroevolution [4.061135251278186]
Evolution has inspired computer scientists since the advent of computing.
This has led to tools that can evolve complex neural networks for machines.
The modern view of artificial evolution is moving the field away from microevolution and toward macroevolution.
arXiv Detail & Related papers (2020-12-21T21:35:15Z) - Novelty Search makes Evolvability Inevitable [62.997667081978825]
We show that Novelty Search implicitly creates a pressure for high evolvability even in bounded behavior spaces.
We show that, throughout the search, the dynamic evaluation of novelty rewards individuals that are very mobile in the behavior space (see the brief sketch after this list).
arXiv Detail & Related papers (2020-05-13T09:32:07Z)
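As background for the Novelty Search entry above, the commonly used novelty score from Lehman and Stanley's formulation is the mean distance of an individual's behavior descriptor to its k nearest neighbours among the current population and an archive of past behaviors. The sketch below illustrates only that score; the 2-D behavior space and all names are assumptions, not taken from the paper listed above.

```python
# Minimal sketch of the standard Novelty Search score: the novelty of a
# behavior descriptor is its mean distance to the k nearest neighbours among
# the current population and an archive of previously seen behaviors.
# Names and the 2-D behavior space are illustrative assumptions.
import numpy as np

def novelty(descriptor, population, archive, k=5):
    # The descriptor is assumed not to be in the pool already, otherwise its
    # zero self-distance would dilute the score.
    pool = np.vstack([population, archive])
    dists = np.linalg.norm(pool - descriptor, axis=1)
    dists.sort()
    return dists[:k].mean()

population = np.random.rand(20, 2)   # 2-D behavior descriptors of the population
archive = np.random.rand(10, 2)      # archive of past behaviors
print(novelty(np.array([0.5, 0.5]), population, archive))
```

Selecting for this score rather than for a task objective is what creates the pressure, described in the entry above, towards individuals that keep moving through the behavior space.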
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.