Self-Referential Quality Diversity Through Differential Map-Elites
- URL: http://arxiv.org/abs/2107.04964v1
- Date: Sun, 11 Jul 2021 04:31:10 GMT
- Title: Self-Referential Quality Diversity Through Differential Map-Elites
- Authors: Tae Jong Choi and Julian Togelius
- Abstract summary: Differential MAP-Elites is a novel algorithm that combines the illumination capacity of CVT-MAP-Elites with the continuous-space optimization capacity of Differential Evolution.
The basic Differential MAP-Elites algorithm, introduced for the first time here, is relatively simple in that it simply combines the operators from Differential Evolution with the map structure of CVT-MAP-Elites.
- Score: 5.2508303190856624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential MAP-Elites is a novel algorithm that combines the illumination
capacity of CVT-MAP-Elites with the continuous-space optimization capacity of
Differential Evolution. The algorithm is motivated by observations that
illumination algorithms, and quality-diversity algorithms in general, offer
qualitatively new capabilities and applications for evolutionary computation
yet are in their original versions relatively unsophisticated optimizers. The
basic Differential MAP-Elites algorithm, introduced for the first time here, is
relatively simple in that it simply combines the operators from Differential
Evolution with the map structure of CVT-MAP-Elites. Experiments based on 25
numerical optimization problems suggest that Differential MAP-Elites clearly
outperforms CVT-MAP-Elites, finding better-quality and more diverse solutions.
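As a rough illustration of that combination, the sketch below pairs DE/rand/1 mutation and binomial crossover with a CVT-style archive. It is a minimal reading of the abstract, not the authors' implementation: the `evaluate` callback, the random centroids standing in for a true centroidal Voronoi tessellation, and the F/CR defaults are all assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def differential_map_elites(evaluate, dim, desc_dim, bounds=(-5.0, 5.0),
                            n_cells=256, iterations=10_000, F=0.5, CR=0.9,
                            rng=np.random.default_rng(0)):
    # Map structure: random centroids stand in for a proper centroidal
    # Voronoi tessellation of a [0, 1]^desc_dim descriptor space.
    centroids = rng.random((n_cells, desc_dim))
    tree = cKDTree(centroids)
    elites = {}  # cell index -> (fitness, solution)

    def insert(x):
        fitness, descriptor = evaluate(x)  # user-supplied, an assumption
        cell = int(tree.query(descriptor)[1])
        if cell not in elites or fitness > elites[cell][0]:
            elites[cell] = (fitness, x)

    # Seed the map with uniform random solutions.
    for _ in range(n_cells):
        insert(rng.uniform(bounds[0], bounds[1], size=dim))

    for _ in range(iterations):
        keys = list(elites)
        # DE/rand/1 mutation over three distinct elites drawn from the map.
        a, b, c = (elites[k][1] for k in rng.choice(keys, 3, replace=False))
        mutant = np.clip(a + F * (b - c), bounds[0], bounds[1])
        # Binomial crossover against a further randomly chosen elite.
        parent = elites[keys[rng.integers(len(keys))]][1]
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True  # keep at least one mutant coordinate
        insert(np.where(mask, mutant, parent))
    return elites
```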
Related papers
- Evolving Benchmark Functions to Compare Evolutionary Algorithms via Genetic Programming [3.838204385427238]
We use Genetic Programming (GP) to compose new optimization benchmark functions.
We show that the benchmarks generated by GP are able to differentiate algorithms better than human-made benchmark functions.
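To make the idea concrete, here is a toy sketch of the representation such an approach could use: random expression trees that compose existing benchmarks into new ones. The base functions, operators, and depth limit are illustrative assumptions; the paper's GP additionally evolves these trees toward functions that separate algorithms well.

```python
import random
import numpy as np

# Illustrative base benchmarks and composition operators (assumptions,
# not the paper's primitive set).
BASE = {
    "sphere":    lambda x: np.sum(x ** 2),
    "rastrigin": lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10),
}
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def random_tree(depth=3):
    # Leaves are base benchmarks; internal nodes compose two subtrees.
    if depth == 0 or random.random() < 0.3:
        return random.choice(list(BASE))
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if isinstance(tree, str):
        return BASE[tree](x)
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

tree = random_tree()
print(tree, evaluate(tree, np.zeros(5)))
```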
arXiv Detail & Related papers (2024-03-21T05:42:17Z)
- Synergizing Quality-Diversity with Descriptor-Conditioned Reinforcement Learning [4.851070356054758]
Quality-Diversity algorithms are evolutionary methods designed to generate a set of diverse and high-fitness solutions.
As a genetic algorithm, MAP-Elites relies on random mutations, which can become inefficient in high-dimensional search spaces.
We introduce DCRL-MAP-Elites, an extension of DCG-MAP-Elites that utilizes the descriptor-conditioned actor as a generative model.
arXiv Detail & Related papers (2023-12-10T19:53:15Z)
- ELRA: Exponential learning rate adaption gradient descent optimization method [83.88591755871734]
We present a novel, fast (exponential rate), ab initio (hyper-free) gradient based adaption.
The main idea of the method is to adapt the step size $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly with $n$.
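The abstract does not spell out the update rule, so the following is only a generic illustration of exponential step-size adaption, growing $\alpha$ while successive gradients align and shrinking it when they oppose; the actual ELRA rule may differ.

```python
import numpy as np

def adaptive_gd(grad, x0, alpha=1e-3, grow=1.3, shrink=0.5, steps=200):
    x, g_prev = np.asarray(x0, float), None
    for _ in range(steps):
        g = grad(x)
        if g_prev is not None:
            # Exponential adaption: grow alpha while gradients agree,
            # shrink it after an overshoot flips their direction.
            alpha *= grow if np.dot(g, g_prev) > 0 else shrink
        x = x - alpha * g
        g_prev = g
    return x

# Example: minimize f(x) = ||x||^2, whose gradient is 2x.
print(adaptive_gd(lambda x: 2 * x, np.ones(10)))
```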
arXiv Detail & Related papers (2023-09-12T14:36:13Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth parts.
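As a reference point for this linearization idea, the classic Frank-Wolfe step on a convex compact set linearizes the smooth objective and calls a linear minimization oracle; the simplex domain and step size below are textbook choices, not the paper's more general fully composite scheme.

```python
import numpy as np

def frank_wolfe(grad, x0, steps=100):
    x = np.asarray(x0, float)
    for t in range(steps):
        g = grad(x)                   # gradient of the smooth part only
        s = np.zeros_like(x)          # linear minimization oracle over the
        s[np.argmin(g)] = 1.0         # simplex: the best vertex
        x += 2.0 / (t + 2) * (s - x)  # standard step size 2/(t+2)
    return x

# Example: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.2, 0.5, 0.3])
print(frank_wolfe(lambda x: 2 * (x - c), np.ones(3) / 3))
```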
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
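For orientation, the standard baseline for the same entropy-regularized objective is the Sinkhorn iteration sketched below; the paper's extragradient method is a different algorithm with stronger accuracy guarantees.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=500):
    K = np.exp(-C / eps)              # Gibbs kernel from the cost matrix
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)            # alternate the two scaling updates
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]   # entropy-regularized transport plan
    return np.sum(P * C)              # transport cost under that plan

# Example: two uniform distributions on a line, squared-distance cost.
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
mu = nu = np.ones(50) / 50
print(sinkhorn(mu, nu, C))
```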
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Gaussian process regression and conditional Karhunen-Loève models for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z)
- Polygonal Unadjusted Langevin Algorithms: Creating stable and efficient adaptive algorithms for neural networks [0.0]
We present a new class of Langevin-based algorithms, which overcomes many of the known shortcomings of popular adaptive optimizers.
In particular, we provide a nonasymptotic analysis and full theoretical guarantees for the convergence properties of an algorithm of this novel class, which we named TH$\varepsilon$O POULA (or, simply, TheoPouLa).
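For context, the vanilla unadjusted Langevin update that this class builds on is sketched below; TheoPouLa's polygonal (taming) modification of the gradient term is deliberately omitted here.

```python
import numpy as np

def ula(grad, x0, lam=1e-3, steps=1000, rng=np.random.default_rng(0)):
    x = np.asarray(x0, float)
    for _ in range(steps):
        noise = rng.standard_normal(x.shape)
        # Gradient step plus Gaussian noise scaled to the step size.
        x = x - lam * grad(x) + np.sqrt(2 * lam) * noise
    return x

# Example: sample approximately from exp(-||x||^2 / 2), i.e. grad = x.
print(ula(lambda x: x, np.zeros(5)))
```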
arXiv Detail & Related papers (2021-05-28T15:58:48Z)
- Multi-Emitter MAP-Elites: Improving quality, diversity and convergence speed with heterogeneous sets of emitters [1.827510863075184]
We introduce Multi-Emitter MAP-Elites (ME-MAP-Elites), an algorithm that directly extends CMA-ME and improves its quality, diversity and data efficiency.
A bandit algorithm dynamically finds the best selection of emitters depending on the current situation.
We evaluate the performance of ME-MAP-Elites on six tasks, ranging from standard optimisation problems (in 100 dimensions) to complex locomotion tasks in robotics.
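The bandit component can be illustrated with a plain UCB1 rule that rewards an emitter whenever its offspring improves the archive; the three placeholder emitters and their success rates below are made up for the demonstration, not the CMA-ME emitters the paper uses.

```python
import numpy as np

def select_emitter(counts, rewards, t):
    # UCB1: empirical mean reward plus an exploration bonus per emitter.
    means = rewards / np.maximum(counts, 1)
    bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
    return int(np.argmax(means + bonus))

counts, rewards = np.zeros(3), np.zeros(3)
rng = np.random.default_rng(0)
for t in range(1000):
    e = select_emitter(counts, rewards, t)
    improved = rng.random() < [0.1, 0.3, 0.6][e]  # stand-in archive signal
    counts[e] += 1
    rewards[e] += float(improved)
print(counts)  # the bandit concentrates on the most productive emitter
```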
arXiv Detail & Related papers (2020-07-10T12:45:02Z)
- Discovering Representations for Black-box Optimization [73.59962178534361]
We show that black-box optimization encodings can be automatically learned, rather than hand designed.
We show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than the standard MAP-Elites.
arXiv Detail & Related papers (2020-03-09T20:06:20Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
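A minimal sketch-and-solve example with a Gaussian projection shows the embedding idea; the paper's optimal methods instead use such embeddings inside iterative solvers whose complexity has no condition number dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10_000, 50, 400            # tall system, sketch size m >> d
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian embedding
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_sketch - x_exact))       # close to the exact solution
```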
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
- On the Performance of Metaheuristics: A Different Perspective [0.0]
We study some basic evolutionary and swarm-intelligence metaheuristics, i.e., Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Teaching-Learning-Based Optimization (TLBO) and Cuckoo Optimization Algorithm (COA).
A large number of experiments have been conducted on 20 different optimization benchmark functions with different characteristics, and the results reveal some fundamental conclusions as well as a ranking order among these metaheuristics.
arXiv Detail & Related papers (2020-01-24T09:34:10Z)