Novelty Search makes Evolvability Inevitable
- URL: http://arxiv.org/abs/2005.06224v1
- Date: Wed, 13 May 2020 09:32:07 GMT
- Title: Novelty Search makes Evolvability Inevitable
- Authors: Stephane Doncieux (ISIR), Giuseppe Paolo (ISIR), Alban Laflaquière,
Alexandre Coninx (ISIR)
- Abstract summary: We show that Novelty Search implicitly creates a pressure for high evolvability even in bounded behavior spaces.
We show that, throughout the search, the dynamic evaluation of novelty rewards individuals which are very mobile in the behavior space.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evolvability is an important feature that impacts the ability of evolutionary
processes to find interesting novel solutions and to deal with changing
conditions of the problem to solve. The estimation of evolvability is not
straightforward and is generally too expensive to be directly used as selective
pressure in the evolutionary process. Indirectly promoting evolvability as a
side effect of other selection pressures that are easier and faster to compute
would thus be advantageous. In an unbounded behavior space, it has already been shown
that evolvable individuals naturally appear and tend to be selected as they are
more likely to invade empty behavior niches. Evolvability is thus a natural
byproduct of the search in this context. However, practical agents and
environments often impose limits on the reachable behavior space. How do these
boundaries impact evolvability? In this context, can evolvability still be
promoted without explicitly rewarding it? We show that Novelty Search
implicitly creates a pressure for high evolvability even in bounded behavior
spaces, and explore the reasons for this behavior. More precisely, we show
that, throughout the search, the dynamic evaluation of novelty rewards
individuals which are very mobile in the behavior space, which in turn promotes
evolvability.
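The mechanism the abstract describes, novelty computed dynamically as distance to nearest neighbors in behavior space, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 1-D bounded behavior space, the archive policy, the population size, and the neighborhood size k are all assumptions:

```python
import random

def novelty(b, neighbors, k=5):
    """Novelty of behavior descriptor b: mean distance to its k nearest
    neighbors among the archive and the current population (1-D here)."""
    dists = sorted(abs(b - x) for x in neighbors)
    if dists and dists[0] == 0.0:
        dists = dists[1:]  # drop b's zero distance to itself
    return sum(dists[:k]) / max(1, min(k, len(dists)))

def novelty_search(behavior, mutate, pop_size=20, generations=50, bound=1.0):
    """Minimal Novelty Search loop in a bounded behavior space [0, bound].
    Selection rewards behavioral novelty, never task fitness."""
    population = [random.uniform(0.0, bound) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        behaviors = [behavior(g) for g in population]
        # Dynamic evaluation: novelty is measured against the archive *and*
        # the current population, so a behavior's score changes over time.
        scores = [novelty(b, archive + behaviors) for b in behaviors]
        ranked = sorted(zip(scores, population, behaviors), key=lambda t: -t[0])
        archive.extend(b for _, _, b in ranked[:2])  # archive the most novel
        # Keep the most novel half, refill by mutation, clamp to the bounds.
        parents = [g for _, g, _ in ranked[: pop_size // 2]]
        offspring = [min(bound, max(0.0, mutate(random.choice(parents))))
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return population, archive
```

With an identity behavior mapping and a Gaussian mutation, e.g. `novelty_search(lambda g: g, lambda g: g + random.gauss(0, 0.05))`, selection keeps pushing lineages toward unoccupied regions of the bounded space; individuals whose offspring move far in behavior space keep scoring high, which is the implicit pressure for evolvability the paper analyzes.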
Related papers
- Evolving choice hysteresis in reinforcement learning: comparing the adaptive value of positivity bias and gradual perseveration [0.0]
We show that positivity bias is evolutionary stable in many situations, while the emergence of gradual perseveration is less systematic and robust.
Our results illustrate that biases can be adaptive and selected by evolution, in an environment-specific manner.
arXiv Detail & Related papers (2024-10-25T09:47:31Z)
- Agent Alignment in Evolving Social Norms [65.45423591744434]
We propose an evolutionary framework for agent evolution and alignment, named EvolutionaryAgent.
In an environment where social norms continuously evolve, agents better adapted to the current social norms will have a higher probability of survival and proliferation.
We show that EvolutionaryAgent can align progressively better with the evolving social norms while maintaining its proficiency in general tasks.
arXiv Detail & Related papers (2024-01-09T15:44:44Z)
- Role of Morphogenetic Competency on Evolution [0.0]
In Evolutionary Computation, the inverse relationship (the impact of intelligence on evolution) is approached from the perspective of organism-level behaviour.
We focus on the intelligence of a minimal model of a system navigating anatomical morphospace.
We evolve populations of artificial embryos using a standard genetic algorithm in silico.
arXiv Detail & Related papers (2023-10-13T11:58:18Z)
- On Evolvability and Behavior Landscapes in Neuroevolutionary Divergent Search [0.0]
Evolvability refers to the ability of an individual genotype to produce offspring with mutually diverse phenotypes.
Recent research has demonstrated that divergent search methods promote evolvability by implicitly creating selective pressure for it.
This paper provides a novel perspective on the relationship between neuroevolutionary divergent search and evolvability.
arXiv Detail & Related papers (2023-06-16T13:46:55Z)
- When to be critical? Performance and evolvability in different regimes of neural Ising agents [18.536813548129878]
It has long been hypothesized that operating close to the critical state is beneficial for natural, artificial and their evolutionary systems.
We put this hypothesis to test in a system of evolving foraging agents controlled by neural networks.
Surprisingly, we find that all populations that discover solutions, evolve to be subcritical.
arXiv Detail & Related papers (2023-03-28T17:57:57Z)
- The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents [51.94554095091305]
We argue for an introspective agent, which considers its own abilities in the context of its environment.
Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.
arXiv Detail & Related papers (2022-01-02T20:14:01Z)
- Adaptive Rational Activations to Boost Deep Reinforcement Learning [68.10769262901003]
We motivate why rationals are suitable for adaptable activation functions and why their inclusion into neural networks is crucial.
We demonstrate that equipping popular algorithms with (recurrent-)rational activations leads to consistent improvements on Atari games.
arXiv Detail & Related papers (2021-02-18T14:53:12Z)
- Embodied Intelligence via Learning and Evolution [92.26791530545479]
We show that environmental complexity fosters the evolution of morphological intelligence.
We also show that evolution rapidly selects morphologies that learn faster.
Our experiments suggest a mechanistic basis for both the Baldwin effect and the emergence of morphological intelligence.
arXiv Detail & Related papers (2021-02-03T18:58:31Z)
- Ecological Reinforcement Learning [76.9893572776141]
We study the kinds of environment properties that can make learning under such conditions easier.
Understanding how properties of the environment impact the performance of reinforcement learning agents can help us structure our tasks in ways that make learning tractable.
arXiv Detail & Related papers (2020-06-22T17:55:03Z)
- Mimicking Evolution with Reinforcement Learning [10.35437633064506]
We argue that the path to developing artificial human-like-intelligence will pass through mimicking the evolutionary process in a nature-like simulation.
This work proposes Evolution via Evolutionary Reward (EvER), which lets learning single-handedly drive the search for policies of increasing evolutionary fitness.
arXiv Detail & Related papers (2020-03-31T18:16:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.