Selecting for Selection: Learning To Balance Adaptive and Diversifying
Pressures in Evolutionary Search
- URL: http://arxiv.org/abs/2106.09153v1
- Date: Wed, 16 Jun 2021 22:11:27 GMT
- Title: Selecting for Selection: Learning To Balance Adaptive and Diversifying
Pressures in Evolutionary Search
- Authors: Kevin Frans, L.B. Soros, Olaf Witkowski
- Abstract summary: This paper introduces Sel4Sel, an algorithm that searches for high-performing neural-network-based selection functions through a meta-evolutionary loop.
Analysis of the strongest Sel4Sel networks reveals a general tendency to favor highly novel individuals early on, with a gradual shift towards fitness-based selection as deceptive local optima are bypassed.
- Score: 0.5156484100374058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by natural evolution, evolutionary search algorithms have proven
remarkably capable due to their dual abilities to explore broadly through
diverse populations and to converge under adaptive pressures. A large part of this
behavior comes from the selection function of an evolutionary algorithm, which
is a metric for deciding which individuals survive to the next generation. In
deceptive or hard-to-search fitness landscapes, greedy selection often fails,
thus it is critical that selection functions strike the correct balance between
gradient-exploiting adaptation and exploratory diversification. This paper
introduces Sel4Sel, or Selecting for Selection, an algorithm that searches for
high-performing neural-network-based selection functions through a
meta-evolutionary loop. Results on three distinct bitstring domains indicate
that Sel4Sel networks consistently match or exceed the performance of both
fitness-based selection and benchmarks explicitly designed to encourage
diversity. Analysis of the strongest Sel4Sel networks reveals a general
tendency to favor highly novel individuals early on, with a gradual shift
towards fitness-based selection as deceptive local optima are bypassed.
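The meta-evolutionary loop described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the learned selection function is reduced here to a weighted blend of fitness and novelty rather than a neural network, and the deceptive bitstring landscape and all parameters are invented for illustration.

```python
import random

random.seed(0)
GENOME_LEN = 20

def fitness(bits):
    # Toy deceptive landscape: all-ones is optimal, but the gradient
    # points towards all-zeros, a local optimum that traps greedy selection.
    ones = sum(bits)
    return GENOME_LEN if ones == GENOME_LEN else GENOME_LEN - 1 - ones

def novelty(bits, population):
    # Mean Hamming distance to the population.
    return sum(sum(a != b for a, b in zip(bits, other))
               for other in population) / len(population)

def inner_ea(weights, generations=20, pop_size=12):
    """Run a bitstring EA whose selection score is a learned blend of
    fitness and novelty; return the best fitness reached."""
    w_fit, w_nov = weights
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    best = max(fitness(p) for p in pop)
    for _ in range(generations):
        # The evolved selection function decides who survives.
        scored = sorted(pop, key=lambda b: w_fit * fitness(b) + w_nov * novelty(b, pop),
                        reverse=True)
        survivors = scored[: pop_size // 2]
        children = [[bit ^ (random.random() < 0.05) for bit in parent]
                    for parent in survivors]
        pop = survivors + children
        best = max(best, max(fitness(p) for p in pop))
    return best

def meta_evolve(meta_generations=10, meta_pop=8):
    """Outer loop: evolve the (w_fit, w_nov) selection weights themselves,
    scoring each candidate by the best fitness its inner EA achieves."""
    candidates = [(random.uniform(0, 1), random.uniform(0, 1))
                  for _ in range(meta_pop)]
    for _ in range(meta_generations):
        ranked = sorted(candidates, key=inner_ea, reverse=True)
        parents = ranked[: meta_pop // 2]
        candidates = parents + [(w1 + random.gauss(0, 0.1), w2 + random.gauss(0, 0.1))
                                for w1, w2 in parents]
    return max(candidates, key=inner_ea)
```

The nested structure is the essential point: the outer loop never sees the domain directly, only how well each candidate selection function steers the inner search.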
Related papers
- Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model trained on a joint objective of sequential reconstruction, variational, and performance evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space that maps feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z)
- On Evolvability and Behavior Landscapes in Neuroevolutionary Divergent Search [0.0]
Evolvability refers to the ability of an individual genotype to produce offspring with mutually diverse phenotypes.
Recent research has demonstrated that divergent search methods promote evolvability by implicitly creating selective pressure for it.
This paper provides a novel perspective on the relationship between neuroevolutionary divergent search and evolvability.
arXiv Detail & Related papers (2023-06-16T13:46:55Z)
- Phylogeny-informed fitness estimation [58.720142291102135]
We propose phylogeny-informed fitness estimation, which exploits a population's phylogeny to estimate fitness evaluations.
Our results indicate that phylogeny-informed fitness estimation can mitigate the drawbacks of down-sampled lexicase.
This work serves as an initial step toward improving evolutionary algorithms by exploiting runtime phylogenetic analysis.
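The core idea of borrowing fitness information from relatives can be sketched in a minimal form: estimate an unevaluated individual's fitness from its nearest evaluated ancestor in the phylogeny. This is a deliberate simplification; the paper's estimators and their integration with down-sampled lexicase are more involved, and all names here are illustrative.

```python
def estimate_fitness(individual, parents, evaluated):
    """Ancestor-based estimation: walk up the phylogeny and return the
    nearest evaluated ancestor's fitness, or None if no ancestor was
    ever evaluated."""
    node = individual
    while node is not None:
        if node in evaluated:
            return evaluated[node]
        node = parents.get(node)  # one step up the phylogeny
    return None

# Hypothetical phylogeny: c descends from b, which descends from a.
parents = {"c": "b", "b": "a"}
evaluated = {"a": 0.7}
assert estimate_fitness("b", parents, evaluated) == 0.7
```

The trade-off is the usual one: each skipped evaluation saves compute but substitutes a relative's score, which drifts from the true fitness as mutational distance grows.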
arXiv Detail & Related papers (2023-06-06T19:05:01Z)
- When to be critical? Performance and evolvability in different regimes of neural Ising agents [18.536813548129878]
It has long been hypothesized that operating close to the critical state is beneficial for natural and artificial systems, as well as for their evolution.
We put this hypothesis to test in a system of evolving foraging agents controlled by neural networks.
Surprisingly, we find that all populations that discover solutions, evolve to be subcritical.
arXiv Detail & Related papers (2023-03-28T17:57:57Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it with a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
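GSEMO itself is a simple loop: keep an archive of mutually non-dominated solutions, mutate a random archive member, and insert the child if no member weakly dominates it. A minimal sketch on the classic bi-objective LOTZ (LeadingOnes/TrailingZeros) bitstring benchmark; the benchmark and parameters are standard illustrations, not taken from the paper.

```python
import random

random.seed(1)
N = 12

def lotz(bits):
    # Bi-objective benchmark: maximize leading ones and trailing zeros.
    lead = next((i for i, b in enumerate(bits) if b == 0), N)
    trail = next((i for i, b in enumerate(reversed(bits)) if b == 1), N)
    return (lead, trail)

def dominates(a, b):
    # a dominates b: at least as good in both objectives, better in one.
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def gsemo(iters=4000):
    """GSEMO: archive of mutually non-dominated bitstrings, standard
    bit-flip mutation with rate 1/N."""
    x = tuple(random.randint(0, 1) for _ in range(N))
    archive = {x: lotz(x)}
    for _ in range(iters):
        parent = random.choice(list(archive))
        child = tuple(b ^ (random.random() < 1 / N) for b in parent)
        cf = lotz(child)
        # Skip the child if any archive member weakly dominates it.
        if any(dominates(v, cf) or v == cf for v in archive.values()):
            continue
        # Otherwise drop newly dominated members and insert the child.
        archive = {k: v for k, v in archive.items() if not dominates(cf, v)}
        archive[child] = cf
    return archive
```

On LOTZ the Pareto front consists of the strings of the form `1^i 0^(N-i)`, and GSEMO's expected time to cover it is known to be polynomial in $N$.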
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- Epigenetic evolution of deep convolutional models [81.21462458089142]
We build upon a previously proposed neuroevolution framework to evolve deep convolutional models.
We propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer.
The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator.
arXiv Detail & Related papers (2021-04-12T12:45:16Z)
- Population-Based Evolution Optimizes a Meta-Learning Objective [0.6091702876917279]
We propose that meta-learning and adaptive evolvability optimize for high performance after a set of learning iterations.
We demonstrate this claim with a simple evolutionary algorithm, Population-Based Meta Learning.
arXiv Detail & Related papers (2021-03-11T03:45:43Z)
- Complexity-based speciation and genotype representation for neuroevolution [81.21462458089142]
This paper introduces a speciation principle for neuroevolution where evolving networks are grouped into species based on the number of hidden neurons.
The proposed speciation principle is employed in several techniques designed to promote and preserve diversity within species and in the ecosystem as a whole.
arXiv Detail & Related papers (2020-10-11T06:26:56Z)
- AdaLead: A simple and robust adaptive greedy search algorithm for sequence design [55.41644538483948]
We develop an easy-to-direct, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state of the art approaches in a variety of biologically motivated sequence design challenges.
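The adaptive-threshold greedy idea can be sketched roughly as follows. This is a loose illustration, not the authors' implementation: the toy fitness oracle, alphabet, and parameters are invented, and AdaLead's recombination and model-guided rollouts are omitted.

```python
import random

random.seed(2)
ALPHABET = "ACGT"
SEQ_LEN = 10

def fitness(seq):
    # Toy stand-in oracle: count of 'A' characters.
    # In sequence design this would be a learned surrogate model.
    return seq.count("A")

def mutate(seq, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def adaptive_greedy(pop_size=20, rounds=10, kappa=0.25):
    """Adaptive greedy: each round, keep only sequences within a fraction
    kappa of the round's best fitness, then expand them by mutation."""
    pop = {"".join(random.choice(ALPHABET) for _ in range(SEQ_LEN))
           for _ in range(pop_size)}
    for _ in range(rounds):
        best = max(fitness(s) for s in pop)
        threshold = (1 - kappa) * best  # adaptive selection threshold
        parents = [s for s in pop if fitness(s) >= threshold]
        children = {mutate(p) for p in parents for _ in range(3)}
        # Trim back to pop_size, keeping the fittest.
        pop = set(sorted(set(parents) | children, key=fitness,
                         reverse=True)[:pop_size])
    return max(pop, key=fitness)
```

Because the threshold tracks the current best rather than a fixed cutoff, the breadth of the search adapts automatically: flat landscapes keep many parents, sharp peaks narrow the beam.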
arXiv Detail & Related papers (2020-10-05T16:40:38Z)
- Using Neural Networks and Diversifying Differential Evolution for Dynamic Optimisation [11.228244128564512]
We investigate whether neural networks are competitive and whether integrating them can improve the results.
The results show that the significance of the improvement from integrating the neural network with diversity mechanisms depends on the type and frequency of changes.
arXiv Detail & Related papers (2020-08-10T10:07:43Z)
- A Study of Fitness Landscapes for Neuroevolution [4.930887920982693]
We use fitness landscapes to study the dynamics of meta-heuristics.
We also use them to infer useful information about the predictive ability of the method.
The results show that these measures are appropriate for estimating both the optimization power and the generalization ability of the considered neuroevolution configurations.
arXiv Detail & Related papers (2020-01-30T11:53:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.