Fast and stable MAP-Elites in noisy domains using deep grids
- URL: http://arxiv.org/abs/2006.14253v1
- Date: Thu, 25 Jun 2020 08:47:23 GMT
- Title: Fast and stable MAP-Elites in noisy domains using deep grids
- Authors: Manon Flageat, Antoine Cully
- Abstract summary: Deep-Grid MAP-Elites is a variant of the MAP-Elites algorithm that uses an archive of similar previously encountered solutions to approximate the performance of a solution.
We show that this simple approach is significantly more resilient to noise on the behavioural descriptors, while achieving competitive performances in terms of fitness optimisation.
- Score: 1.827510863075184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quality-Diversity optimisation algorithms enable the evolution of collections
of both high-performing and diverse solutions. These collections make it
possible to quickly adapt and switch from one solution to another when the
current one is not working as expected. Quality-Diversity optimisation
therefore finds many applications in real-world problems such as robotic
control. However, QD algorithms, like most optimisation algorithms, are very
sensitive to uncertainty, not only in the fitness function but also in the
behavioural descriptors. Yet, such uncertainties are frequent in real-world
applications. The few works that have explored this issue in the specific case
of QD algorithms, inspired by the Evolutionary Computation literature, mainly
focus on using sampling to approximate the "true" performance of a solution.
However, sampling approaches require a high number of evaluations, which in
many applications, such as robotics, can quickly become impractical. In this
work, we propose Deep-Grid MAP-Elites, a variant of
the MAP-Elites algorithm that uses an archive of similar previously encountered
solutions to approximate the performance of a solution. We compare our approach
to previously explored ones on three noisy tasks: a standard optimisation task,
the control of a redundant arm and a simulated Hexapod robot. The experimental
results show that this simple approach is significantly more resilient to noise
on the behavioural descriptors, while achieving competitive performances in
terms of fitness optimisation, and being more sample-efficient than other
existing approaches.
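To make the mechanism described above concrete, here is a minimal, illustrative sketch of a deep-grid archive: each cell of the behavioural-descriptor grid keeps a small sub-population of previously encountered solutions, and a cell's performance is approximated from its occupants rather than by re-evaluating a single solution many times. The grid shape, cell depth, random-replacement insertion and within-cell fitness-proportional selection used here are assumptions made for the sake of the example, not the authors' reference implementation.

```python
import random
from collections import defaultdict

# Illustrative deep-grid archive (a sketch; constants and update rules are assumptions).
GRID_SHAPE = (16, 16)   # resolution of the behavioural-descriptor grid (assumed)
CELL_DEPTH = 50         # number of solutions retained per cell (assumed)

archive = defaultdict(list)  # cell index -> list of (genotype, noisy_fitness)

def cell_index(descriptor):
    """Map a behavioural descriptor in [0, 1]^2 to a grid cell."""
    return tuple(min(int(d * n), n - 1) for d, n in zip(descriptor, GRID_SHAPE))

def insert(genotype, noisy_fitness, descriptor):
    """Add a solution to its cell; if the cell is full, overwrite a random occupant."""
    cell = archive[cell_index(descriptor)]
    if len(cell) < CELL_DEPTH:
        cell.append((genotype, noisy_fitness))
    else:
        cell[random.randrange(CELL_DEPTH)] = (genotype, noisy_fitness)

def select_parent():
    """Pick a random non-empty cell, then a parent fitness-proportionally within it."""
    cell = random.choice([c for c in archive.values() if c])
    weights = [max(f, 1e-9) for _, f in cell]  # assumes non-negative fitness values
    return random.choices(cell, weights=weights, k=1)[0][0]

def cell_performance(descriptor):
    """Approximate a cell's performance by averaging its occupants' noisy fitnesses."""
    cell = archive[cell_index(descriptor)]
    return sum(f for _, f in cell) / len(cell) if cell else None
```

Each candidate here is evaluated only once; the smoothing of noise comes from aggregating over a cell's occupants, which is what the abstract credits for the gain in sample efficiency over explicit re-sampling approaches.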
Related papers
- Benchmarking Optimizers for Qumode State Preparation with Variational Quantum Algorithms [10.941053143198092]
There has been a growing interest in qumodes due to advancements in the field and their potential applications.
This paper provides performance benchmarks for the optimizers and parameters used in state preparation with Variational Quantum Algorithms.
arXiv Detail & Related papers (2024-05-07T17:15:58Z)
- Quality-Diversity Algorithms Can Provably Be Helpful for Optimization [24.694984679399315]
Quality-Diversity (QD) algorithms aim to find a set of high-performing, yet diverse solutions.
This paper tries to shed some light on the optimization ability of QD algorithms via rigorous running time analysis.
arXiv Detail & Related papers (2024-01-19T07:40:24Z)
- Batch Bayesian Optimization for Replicable Experimental Design [56.64902148159355]
Many real-world design problems evaluate multiple experimental conditions in parallel and replicate each condition multiple times due to large and heteroscedastic observation noise.
We propose the Batch Thompson Sampling for Replicable Experimental Design framework, which encompasses three algorithms.
We show the effectiveness of our algorithms in two practical real-world applications: precision agriculture and AutoML.
arXiv Detail & Related papers (2023-11-02T12:46:03Z)
- Don't Bet on Luck Alone: Enhancing Behavioral Reproducibility of Quality-Diversity Solutions in Uncertain Domains [2.639902239625779]
We introduce the Archive Reproducibility Improvement Algorithm (ARIA).
ARIA is a plug-and-play approach that improves the quality of solutions present in an archive.
We show that our algorithm enhances the quality and descriptor space coverage of any given archive by at least 50%.
arXiv Detail & Related papers (2023-04-07T14:45:14Z)
- Enhancing MAP-Elites with Multiple Parallel Evolution Strategies [8.585387103144825]
We propose MEMES, a novel Quality-Diversity (QD) algorithm based on Evolution Strategies (ES).
MEMES maintains multiple (up to 100) simultaneous ES processes, each with its own independent objective and reset mechanism designed for QD optimisation.
We show that MEMES outperforms both gradient-based and mutation-based QD algorithms on black-box optimisation and QD-Reinforcement-Learning tasks.
arXiv Detail & Related papers (2023-03-10T18:55:02Z)
- A Simple Evolutionary Algorithm for Multi-modal Multi-objective Optimization [0.0]
We introduce a steady-state evolutionary algorithm for solving multi-modal, multi-objective optimization problems (MMOPs).
We report its performance on 21 MMOPs from various test suites that are widely used for benchmarking using a low computational budget of 1000 function evaluations.
arXiv Detail & Related papers (2022-01-18T03:31:11Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- AP-Loss for Accurate One-Stage Object Detection [49.13608882885456]
One-stage object detectors are trained by optimizing classification-loss and localization-loss simultaneously.
The former suffers greatly from the extreme foreground-background imbalance caused by the large number of anchors.
This paper proposes a novel framework to replace the classification task in one-stage detectors with a ranking task.
arXiv Detail & Related papers (2020-08-17T13:22:01Z)
- Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity [58.70807593332932]
We study the oracle complexity of gradient-based methods for stochastic approximation problems.
We focus on instance-dependent complexity instead of worst case complexity.
Our proposed algorithm and its analysis provide a theoretical justification for the success of moment estimation.
arXiv Detail & Related papers (2020-06-08T09:25:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.