Towards Dynamic Algorithm Selection for Numerical Black-Box
Optimization: Investigating BBOB as a Use Case
- URL: http://arxiv.org/abs/2006.06586v1
- Date: Thu, 11 Jun 2020 16:36:11 GMT
- Title: Towards Dynamic Algorithm Selection for Numerical Black-Box
Optimization: Investigating BBOB as a Use Case
- Authors: Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck
- Abstract summary: We show that even single-switch dynamic algorithm selection (dynAS) can potentially result in significant performance gains.
We also discuss key challenges in dynAS and argue that the BBOB framework can become a useful tool in overcoming them.
- Score: 4.33419118449588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the most challenging problems in evolutionary computation is to select
from its family of diverse solvers one that performs well on a given problem.
This algorithm selection problem is complicated by the fact that different
phases of the optimization process require different search behavior. While
this can partly be controlled by the algorithm itself, there exist large
differences in performance between algorithms. It can therefore be beneficial to
swap the configuration or even the entire algorithm during the run. Long deemed
impractical, recent advances in Machine Learning and in exploratory landscape
analysis give hope that this dynamic algorithm configuration (dynAC) can
eventually be solved by automatically trained configuration schedules. With
this work we aim at promoting research on dynAC, by introducing a simpler
variant that focuses only on switching between different algorithms, not
configurations. Using the rich data from the Black Box Optimization
Benchmark (BBOB) platform, we show that even single-switch dynamic algorithm
selection (dynAS) can potentially result in significant performance gains. We
also discuss key challenges in dynAS and argue that the BBOB framework can
become a useful tool in overcoming them.
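To make the single-switch idea concrete, here is a minimal sketch of dynAS in Python. It is not the paper's actual setup, which draws on BBOB benchmark data; the algorithm pair (random search followed by L-BFGS-B), the 30/70 budget split, and the sphere test function are illustrative assumptions.

```python
# Minimal single-switch dynamic algorithm selection (dynAS) sketch.
# The algorithm pair, budget split, and test function are hypothetical
# stand-ins, not the paper's BBOB-based setup.
import numpy as np
from scipy.optimize import minimize


def sphere(x):
    """Simple unimodal test function; BBOB's f1 is a shifted variant."""
    return float(np.sum(np.asarray(x) ** 2))


def single_switch(f, dim=5, budget=1000, switch_frac=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # Phase 1: global exploration with pure random search.
    n_explore = int(budget * switch_frac)
    xs = rng.uniform(-5.0, 5.0, size=(n_explore, dim))
    fs = np.array([f(x) for x in xs])
    best = xs[np.argmin(fs)]
    # Switch point: hand the best-so-far solution to a locally
    # converging optimizer for the remaining budget.
    res = minimize(f, best, method="L-BFGS-B",
                   bounds=[(-5.0, 5.0)] * dim,
                   options={"maxfun": budget - n_explore})
    return min(float(res.fun), float(fs.min()))


print(single_switch(sphere))
```

The structure is the point: spend part of the budget on an algorithm that explores well, then hand its best-so-far point to one that converges well; dynAS asks when to switch and between which algorithms.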
Related papers
- Exploratory Landscape Analysis for Mixed-Variable Problems [0.7252027234425334]
We provide the means to compute exploratory landscape features for mixed-variable problems where the decision space is a mixture of continuous, binary, integer, and categorical variables.
To further highlight their merit for practical applications, we design and conduct an automated algorithm selection study.
Our trained algorithm selector is able to close the gap between the single best and the virtual best solver by 57.5% over all benchmark problems.
arXiv Detail & Related papers (2024-02-26T10:19:23Z)
- HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection [75.84584400866254]
We propose a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses.
HARRIS' decisions are based on a forest model whose trees are created from splits optimized on a hybrid ranking and regression loss function.
arXiv Detail & Related papers (2022-10-31T14:06:11Z)
- Switching between Numerical Black-box Optimization Algorithms with Warm-starting Policies [3.967483941966979]
We build on Vermetten et al. [GECCO 2020] to investigate promising switches between pairs of algorithms for numerical black-box optimization.
We show that with a single switch between two algorithms, we outperform the best static choice among the five algorithms.
We also show that for switching between BFGS and CMA-ES, proper warm-starting of the parameters is crucial to realizing the performance gains; a hedged code sketch of such a switch follows this list.
arXiv Detail & Related papers (2022-04-13T17:35:14Z)
- OptABC: an Optimal Hyperparameter Tuning Approach for Machine Learning Algorithms [1.6114012813668934]
OptABC is proposed to help the ABC algorithm converge faster toward a near-optimal solution.
OptABC integrates the artificial bee colony algorithm, K-means clustering, a greedy algorithm, and an opposition-based learning strategy.
Experimental results demonstrate the effectiveness of OptABC compared to existing approaches in the literature.
arXiv Detail & Related papers (2021-12-15T22:33:39Z)
- Algorithm Selection on a Meta Level [58.720142291102135]
We introduce the problem of meta algorithm selection, which essentially asks for the best way to combine a given set of algorithm selectors.
We present a general methodological framework for meta algorithm selection as well as several concrete learning methods as instantiations of this framework.
arXiv Detail & Related papers (2021-07-20T11:23:21Z)
- Generating Instances with Performance Differences for More Than Just Two Algorithms [2.061388741385401]
We propose fitness functions to evolve instances that show large performance differences for more than just two algorithms simultaneously.
As a proof of principle, we evolve instances of the multi-component Traveling Thief Problem (TTP) for three incomplete TTP solvers.
arXiv Detail & Related papers (2021-04-29T11:48:41Z)
- Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection [0.9281671380673306]
A key challenge in the application of evolutionary algorithms is the selection of an algorithm instance that best suits the problem at hand.
We analyze in this work how such prior performance data can be used to infer informed dynamic algorithm selection schemes for the solution of pseudo-Boolean optimization problems.
arXiv Detail & Related papers (2021-02-12T12:27:02Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta-algorithm selection can indeed prove beneficial in some cases.
arXiv Detail & Related papers (2020-11-17T17:27:33Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
- Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
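The warm-starting entry above (Switching between Numerical Black-box Optimization Algorithms with Warm-starting Policies) can be illustrated with the hedged sketch below. It assumes the scipy and third-party cma packages; the budget split and the sigma0 heuristic, scaled from how far BFGS travelled, are illustrative stand-ins for the warm-starting policies studied in that paper.

```python
# Hedged sketch: switch from BFGS to CMA-ES mid-run, warm-starting
# CMA-ES at BFGS's final iterate. The sigma0 heuristic below is an
# illustrative assumption, not the policy from the paper above.
import numpy as np
from scipy.optimize import minimize
import cma  # third-party package: pip install cma


def rastrigin(x):
    """Standard multimodal test function."""
    x = np.asarray(x)
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))


def bfgs_then_cmaes(f, dim=5, budget=2000, switch_frac=0.25, seed=1):
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(-5.0, 5.0, dim)
    # Phase 1: quasi-Newton local descent for part of the budget.
    res = minimize(f, x0, method="BFGS",
                   options={"maxiter": int(budget * switch_frac)})
    # Warm start: reuse BFGS's endpoint as CMA-ES's initial mean, and
    # scale the initial step size by how far BFGS moved (a crude
    # stand-in for a learned warm-starting policy).
    sigma0 = max(0.1, float(np.linalg.norm(res.x - x0)) / 10.0)
    remaining = max(budget - res.nfev, 10 * dim)
    es = cma.CMAEvolutionStrategy(res.x, sigma0,
                                  {"maxfevals": remaining,
                                   "verbose": -9, "seed": seed})
    es.optimize(f)
    return min(float(res.fun), float(es.result.fbest))


print(bfgs_then_cmaes(rastrigin))
```

Whether such a switch pays off depends heavily on how much of the first algorithm's state is carried over, which is exactly the warm-starting question that paper investigates.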