Quality-Diversity Optimization: a novel branch of stochastic
optimization
- URL: http://arxiv.org/abs/2012.04322v2
- Date: Thu, 17 Dec 2020 00:50:04 GMT
- Title: Quality-Diversity Optimization: a novel branch of stochastic
optimization
- Authors: Konstantinos Chatzilygeroudis, Antoine Cully, Vassilis Vassiliades and
Jean-Baptiste Mouret
- Abstract summary: Multimodal optimization algorithms search for the highest peaks in the search space, of which there can be more than one.
Quality-Diversity algorithms are a recent addition to the evolutionary computation toolbox that do not search only for a single set of local optima, but instead try to illuminate the search space.
- Score: 5.677685109155078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional optimization algorithms search for a single global optimum that
maximizes (or minimizes) the objective function. Multimodal optimization
algorithms search for the highest peaks in the search space, of which there can
be more than one. Quality-Diversity algorithms are a recent addition to the
evolutionary computation toolbox that do not search only for a single set of
local optima, but instead try to illuminate the search space. In effect, they
provide a holistic view of how high-performing solutions are distributed
throughout a search space. The main differences from multimodal optimization
algorithms are that (1) Quality-Diversity typically works in the behavioral
space (or feature space), not in the genotypic (or parameter) space, and
(2) Quality-Diversity attempts to fill the whole behavior space, even if the
niche is not a peak in the fitness landscape. In this chapter, we provide a
gentle introduction to Quality-Diversity optimization, discuss the main
representative algorithms and the main topics currently under consideration in
the community. Throughout the chapter, we also discuss several successful
applications of Quality-Diversity algorithms, including deep learning,
robotics, and reinforcement learning.
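To make the niche-filling idea concrete, below is a minimal sketch of a MAP-Elites-style loop, MAP-Elites being one of the representative QD algorithms the chapter discusses. The fitness and behavior-descriptor functions are illustrative placeholders, not taken from the paper.

```python
import random

# Placeholder problem: 2-D genotype in [-1, 1]^2 (illustrative assumption).
def fitness(x):
    return -(x[0] ** 2 + x[1] ** 2)  # quality: higher is better

def behavior(x):
    return (x[0], x[1])  # behavior descriptor; here simply the genotype

def to_cell(desc, bins=10):
    # Discretize the behavior descriptor into a grid cell of the archive.
    return tuple(min(bins - 1, max(0, int((d + 1.0) / 2.0 * bins))) for d in desc)

def map_elites(iterations=10_000, bins=10, sigma=0.1):
    archive = {}  # cell -> (genotype, fitness): one elite per niche
    for _ in range(iterations):
        if archive:
            # Select a random elite and mutate it (Gaussian variation).
            parent, _ = random.choice(list(archive.values()))
            child = [g + random.gauss(0.0, sigma) for g in parent]
        else:
            child = [random.uniform(-1.0, 1.0) for _ in range(2)]
        f = fitness(child)
        cell = to_cell(behavior(child), bins)
        # Fill the niche if empty, or replace a worse elite: this is what
        # "illuminates" the behavior space rather than chasing one optimum.
        if cell not in archive or f > archive[cell][1]:
            archive[cell] = (child, f)
    return archive

elites = map_elites()
print(f"{len(elites)} of 100 niches illuminated")
```

The archive keeps one elite per behavior cell, so improving coverage of the behavior space and improving quality within each niche happen in the same loop.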
Related papers
- Multi-objective Evolution of Heuristic Using Large Language Model [29.337470185034555]
Heuristics are commonly used to tackle diverse search and optimization problems.
Recent works have incorporated large language models (LLMs) into automatic search, leveraging their powerful language and coding capabilities.
We propose to model search as a multi-objective optimization problem and consider introducing other practical criteria beyond optimal performance.
arXiv Detail & Related papers (2024-09-25T12:32:41Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Neural Tangent Kernel-derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
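The NTK-derived Gaussian process in ZOPO is beyond a short snippet, but the zeroth-order ingredient it builds on can be sketched: estimating a descent direction from function evaluations alone. The quadratic objective below is a hypothetical stand-in for a prompt-quality score.

```python
import random

def zeroth_order_step(f, x, mu=1e-2, lr=1e-1, samples=8):
    """One zeroth-order update: approximate the gradient of f at x from
    random finite differences (no analytic gradients needed)."""
    d = len(x)
    grad = [0.0] * d
    for _ in range(samples):
        u = [random.gauss(0.0, 1.0) for _ in range(d)]  # random direction
        # Two-point estimate of the directional derivative along u.
        fp = f([xi + mu * ui for xi, ui in zip(x, u)])
        fm = f([xi - mu * ui for xi, ui in zip(x, u)])
        scale = (fp - fm) / (2.0 * mu * samples)
        grad = [g + scale * ui for g, ui in zip(grad, u)]
    return [xi - lr * gi for xi, gi in zip(x, grad)]  # descent step

# Hypothetical smooth objective standing in for a prompt-quality score.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x = [0.0, 0.0]
for _ in range(200):
    x = zeroth_order_step(f, x)
print(x)  # should approach [1.0, -2.0]
```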
arXiv Detail & Related papers (2024-03-05T14:18:15Z) - Quality-Diversity Algorithms Can Provably Be Helpful for Optimization [24.694984679399315]
Quality-Diversity (QD) algorithms aim to find a set of high-performing, yet diverse solutions.
This paper tries to shed some light on the optimization ability of QD algorithms via rigorous running time analysis.
arXiv Detail & Related papers (2024-01-19T07:40:24Z)
- Rank-Based Learning and Local Model Based Evolutionary Algorithm for High-Dimensional Expensive Multi-Objective Problems [1.0499611180329806]
The proposed algorithm consists of three parts: rank-based learning, hyper-volume-based non-dominated search, and local search in the relatively sparse objective space.
The experimental results of benchmark problems and a real-world application on geothermal reservoir heat extraction optimization demonstrate that the proposed algorithm shows superior performance.
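As a generic illustration of the non-dominated search ingredient mentioned above (not the paper's rank-based learning procedure), the following textbook routine extracts the non-dominated front from a set of objective vectors, assuming minimization:

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, better in one.
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def non_dominated_front(points):
    # Keep every point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(non_dominated_front(objs))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (2.5, 2.5)]
```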
arXiv Detail & Related papers (2023-04-19T06:25:04Z)
- Don't Bet on Luck Alone: Enhancing Behavioral Reproducibility of Quality-Diversity Solutions in Uncertain Domains [2.639902239625779]
We introduce the Archive Reproducibility Improvement Algorithm (ARIA).
ARIA is a plug-and-play approach that improves the quality of solutions present in an archive.
We show that our algorithm enhances the quality and descriptor space coverage of any given archive by at least 50%.
arXiv Detail & Related papers (2023-04-07T14:45:14Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
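The super-tree construction is specific to the paper, but the rejection of equivalent forms can be sketched generically: sample random expression trees and discard candidates whose canonical form has already been seen. The operator set and the canonicalization below are illustrative assumptions.

```python
import random

OPS = ["+", "*", "-"]          # assumed toy operator set
LEAVES = ["g", "m", "lr"]      # e.g. gradient, momentum, learning rate

def random_expr(depth=2):
    # Sample a random expression tree over the toy vocabulary.
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def canonical(e):
    # Cheap equivalent-form detection: sort operands of commutative ops.
    if isinstance(e, str):
        return e
    op, a, b = e
    a, b = canonical(a), canonical(b)
    if op in ("+", "*") and b < a:   # commutativity: a+b == b+a
        a, b = b, a
    return f"({a}{op}{b})"

seen, unique = set(), []
while len(unique) < 10:
    key = canonical(random_expr())
    if key in seen:
        continue                     # rejection sampling: skip duplicates
    seen.add(key)
    unique.append(key)
print("\n".join(unique))
```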
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Fighting the curse of dimensionality: A machine learning approach to finding global optima [77.34726150561087]
This paper shows how to find global optima in structural optimization problems.
By exploiting certain cost functions, we either obtain the global optimum at best or obtain superior results at worst when compared to established optimization procedures.
arXiv Detail & Related papers (2021-10-28T09:50:29Z)
- AutoSpace: Neural Architecture Search with Less Human Interference [84.42680793945007]
Current neural architecture search (NAS) algorithms still require expert knowledge and effort to design a search space for network construction.
We propose a novel differentiable evolutionary framework named AutoSpace, which evolves the search space to an optimal one.
With the learned search space, the performance of recent NAS algorithms can be improved significantly compared with using previously manually designed spaces.
arXiv Detail & Related papers (2021-03-22T13:28:56Z)
- Community detection using fast low-cardinality semidefinite programming [94.4878715085334]
We propose a new low-cardinality algorithm that generalizes the local update to maximize a semidefinite relaxation derived from max-k-cut.
The proposed algorithm is scalable and outperforms state-of-the-art algorithms in real-world running time, with little additional cost.
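For background, the standard semidefinite relaxation of max-k-cut (due to Frieze and Jerrum) that such local updates target can be written as follows; this is the textbook form, not necessarily the paper's exact formulation:

```latex
\begin{aligned}
\max_{v_1,\dots,v_n} \quad & \frac{k-1}{k} \sum_{(i,j)\in E} w_{ij}\,\bigl(1 - v_i^\top v_j\bigr) \\
\text{s.t.} \quad & \|v_i\| = 1, \qquad i = 1,\dots,n, \\
& v_i^\top v_j \ge -\tfrac{1}{k-1}, \qquad i \neq j.
\end{aligned}
```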
arXiv Detail & Related papers (2020-12-04T15:46:30Z)
- Evolving Search Space for Neural Architecture Search [70.71153433676024]
We present a Neural Search-space Evolution (NSE) scheme that amplifies the results from the previous effort by maintaining an optimized search space subset.
We achieve 77.3% top-1 retrain accuracy on ImageNet with 333M FLOPs, yielding state-of-the-art performance.
When the latency constraint is adopted, our model also outperforms the previous best-performing mobile models, with 77.9% top-1 retrain accuracy.
arXiv Detail & Related papers (2020-11-22T01:11:19Z)
- BOP-Elites, a Bayesian Optimisation algorithm for Quality-Diversity search [0.0]
We propose the Bayesian optimisation of Elites (BOP-Elites) algorithm.
By considering user-defined regions of the feature space as 'niches', our task is to find the optimal solution in each niche.
The resulting algorithm is very effective in identifying the parts of the search space that belong to a niche in feature space, and finding the optimal solution in each niche.
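A rough sketch of the niche bookkeeping described here, with a plain Gaussian-process surrogate and an upper-confidence-bound acquisition standing in for the paper's actual acquisition function; the objective, the feature space, and the niche boundaries are all illustrative assumptions (a faithful implementation would direct the acquisition at each niche in turn).

```python
import random
from sklearn.gaussian_process import GaussianProcessRegressor

# Illustrative assumptions: a 1-D search space, a toy objective, and two
# user-defined feature regions ("niches") split at x = 0.
objective = lambda x: -(x - 0.5) ** 2          # quality to maximize
feature = lambda x: x                          # feature descriptor
niche_of = lambda d: 0 if d < 0.0 else 1       # user-defined regions

X = [[random.uniform(-1, 1)] for _ in range(5)]  # initial design
y = [objective(x[0]) for x in X]
elites = {}                                      # niche -> (x, value)

for _ in range(30):
    gp = GaussianProcessRegressor().fit(X, y)    # surrogate over observations
    cand = [[random.uniform(-1, 1)] for _ in range(200)]
    mu, std = gp.predict(cand, return_std=True)
    # UCB acquisition (stand-in): prefer high predicted value + uncertainty.
    best = max(range(len(cand)), key=lambda i: mu[i] + 1.0 * std[i])
    x = cand[best][0]
    v = objective(x)
    X.append([x]); y.append(v)
    n = niche_of(feature(x))
    if n not in elites or v > elites[n][1]:      # keep the best per niche
        elites[n] = (x, v)

print(elites)  # expect the niche-1 elite near x = 0.5
```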
arXiv Detail & Related papers (2020-05-08T23:49:13Z)