Black-Box Optimization Revisited: Improving Algorithm Selection Wizards
through Massive Benchmarking
- URL: http://arxiv.org/abs/2010.04542v3
- Date: Tue, 23 Feb 2021 10:36:09 GMT
- Title: Black-Box Optimization Revisited: Improving Algorithm Selection Wizards
through Massive Benchmarking
- Authors: Laurent Meunier, Herilalaina Rakotoarison, Pak Kan Wong, Baptiste
Roziere, Jeremy Rapin, Olivier Teytaud, Antoine Moreau, Carola Doerr
- Abstract summary: Existing studies in black-box optimization for machine learning suffer from low generalizability.
We propose a benchmark suite, OptimSuite, which covers a broad range of black-box optimization problems.
ABBO achieves competitive performance on all benchmark suites.
- Score: 8.874754363200614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing studies in black-box optimization for machine learning suffer from
low generalizability, caused by a typically selective choice of problem
instances used for training and testing different optimization algorithms.
Among other issues, this practice promotes overfitting and poor-performing user
guidelines. To address this shortcoming, we propose in this work a benchmark
suite, OptimSuite, which covers a broad range of black-box optimization
problems, ranging from academic benchmarks to real-world applications, from
discrete through numerical to mixed-integer problems, from small to very
large-scale problems, and from noisy through dynamic to static problems. We
demonstrate the advantages of such a broad collection by deriving from it
Automated Black Box Optimizer (ABBO), a general-purpose algorithm selection
wizard. Using three different types of algorithm selection techniques, ABBO
achieves competitive performance on all benchmark suites. It significantly
outperforms the previous state of the art on some of them, including YABBOB and
LSGO. ABBO relies on many high-quality base components. Its excellent
performance is obtained without any task-specific parametrization.
The OptimSuite benchmark collection, the ABBO wizard and its base solvers
have all been merged into the open-source Nevergrad platform, where they are
available for reproducible research.
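Since the abstract notes that the ABBO wizard and its base solvers have been merged into Nevergrad, a minimal usage sketch may be helpful. The sketch below uses Nevergrad's Python API with NGOpt as the wizard-style entry point; the exact registry name under which ABBO is exposed is not stated in the abstract, so treating NGOpt as its stand-in is an assumption.

    import numpy as np
    import nevergrad as ng

    # Toy objective: the sphere function (minimum 0 at the origin).
    def sphere(x: np.ndarray) -> float:
        return float(np.sum(x ** 2))

    # A 10-dimensional continuous search space with box constraints.
    param = ng.p.Array(shape=(10,)).set_bounds(-5.0, 5.0)

    # NGOpt is Nevergrad's wizard-style optimizer: it picks base solvers
    # automatically from the problem description (dimension, budget, parallelism).
    optimizer = ng.optimizers.NGOpt(parametrization=param, budget=500, num_workers=1)
    recommendation = optimizer.minimize(sphere)

    print(recommendation.value, sphere(recommendation.value))

Individual base solvers (e.g. ng.optimizers.registry["CMA"]) can be swapped in the same way, which is how a wizard's recommendations are typically compared against single algorithms.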
Related papers
- Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Experiments on red fluorescent proteins show that the proposed method reaches the baseline performance with 1.69x fewer queries.
arXiv Detail & Related papers (2024-06-21T05:57:08Z) - Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories; a minimal sketch of this computation is given below.
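The regret-to-go construction can be illustrated with a short sketch. The exact token definition used by RIBBO is not reproduced in this summary, so the following assumes minimization and a known reference value f_star, and defines the token at step t as the cumulative regret over the remaining steps of the history; it is an illustration, not the paper's implementation.

    import numpy as np

    def regret_to_go(history_values, f_star):
        # history_values: objective values f(x_1), ..., f(x_T) produced by a
        # behavior algorithm; f_star: assumed-known optimal objective value.
        # Returns r with r[t] = sum over s >= t of (f(x_s) - f_star), i.e. the
        # cumulative regret over the *future* part of the history.
        instantaneous_regret = np.asarray(history_values, dtype=float) - f_star
        return instantaneous_regret[::-1].cumsum()[::-1]

    # Toy history with optimum f_star = 0: tokens shrink as the run improves.
    print(regret_to_go([5.0, 3.0, 1.0, 0.5], f_star=0.0))  # [9.5 4.5 1.5 0.5]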
arXiv Detail & Related papers (2024-02-27T11:32:14Z) - Optimizing with Low Budgets: a Comparison on the Black-box Optimization
Benchmarking Suite and OpenAI Gym [2.511157007295545]
Black-box optimization (BBO) algorithms are popular in machine learning (ML).
We compare BBO tools developed for ML with more classical BBO algorithms on the COCO benchmarking suite and on OpenAI Gym.
Some algorithms from the BBO community perform surprisingly well on ML tasks.
arXiv Detail & Related papers (2023-09-29T18:33:10Z) - Comparing Algorithm Selection Approaches on Black-Box Optimization
Problems [2.3823600586675724]
Automated algorithm selection (AS) relies on machine learning (ML) techniques to recommend the best algorithm for a given problem instance.
There are no clear guidelines for choosing the most appropriate one from the variety of available ML techniques.
We compare four ML models on the task of predicting the best solver for black-box optimization problems across 7 different runtime budgets in 2 dimensions; a minimal sketch of such per-instance selection follows below.
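As an illustration of what ML-based per-instance algorithm selection looks like in practice, the sketch below trains a classifier to map instance features to the name of the best solver. The features, solver portfolio, model choice, and synthetic labels are placeholders added here for illustration; they are not the setup of the cited paper.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Hypothetical training data: each problem instance is described by a
    # feature vector (e.g. landscape features), labeled with the solver that
    # performed best on it under a given budget.
    X_train = rng.normal(size=(200, 8))
    portfolio = np.array(["CMA", "DE", "PSO", "OnePlusOne"])
    y_train = portfolio[rng.integers(0, len(portfolio), size=200)]  # placeholder labels

    selector = RandomForestClassifier(n_estimators=100, random_state=0)
    selector.fit(X_train, y_train)

    # At selection time: featurize the new instance and pick the recommended solver.
    new_instance = rng.normal(size=(1, 8))
    print("recommended solver:", selector.predict(new_instance)[0])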
arXiv Detail & Related papers (2023-06-30T12:06:38Z) - Evidence that PUBO outperforms QUBO when solving continuous optimization
problems with the QAOA [4.670374869377859]
A core step in solving optimization problems with quantum algorithms is the problem formulation.
Recent studies show that many problems can be solved more efficiently in their native Polynomial Unconstrained Binary Optimization (PUBO) form.
Our evaluation on suitable benchmark functions shows that PUBO formulations generally yield better results while requiring fewer qubits; a small worked example of the degree difference between PUBO and QUBO is given below.
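Why a PUBO formulation can need fewer qubits than a QUBO one is visible in a small worked example, added here for illustration (the encoding and objective below are assumptions, not taken from the cited paper). Encode a continuous variable with k binary variables,

    x = \sum_{i=1}^{k} 2^{-i} b_i, \qquad b_i \in \{0, 1\},

so that a cubic objective expands directly into a degree-3 polynomial over the binaries,

    f(x) = x^3 = \sum_{i,j,l=1}^{k} 2^{-(i+j+l)} b_i b_j b_l,

which is a valid PUBO objective as it stands. A QUBO formulation is restricted to degree 2, so the cubic monomials must be quadratized with auxiliary binary variables, increasing the number of qubits required.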
arXiv Detail & Related papers (2023-05-05T09:37:48Z) - Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z) - Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer with stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
We also explore methods to stabilize the amalgamation process by perturbing the target, in order to reduce its variance.
arXiv Detail & Related papers (2022-03-12T16:07:57Z) - On the Assessment of Benchmark Suites for Algorithm Comparison [7.501426386641256]
We show that most benchmark functions of the BBOB suite have high difficulty levels (relative to the optimization algorithms) and low discrimination.
We discuss potential uses of item response theory (IRT) in benchmarking, including its use to improve the design of benchmark suites.
arXiv Detail & Related papers (2021-04-15T11:20:11Z) - Versatile Black-Box Optimization [6.92253842648213]
We present Shiwa, an algorithm good at both discrete and continuous, noisy and noise-free, sequential and parallel, black-box optimization.
Our algorithm is experimentally compared to competitors on YABBOB, a BBOB-comparable testbed, and on some variants of it, and then validated on several real-world testbeds.
arXiv Detail & Related papers (2020-04-29T08:20:36Z) - Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z) - Stepwise Model Selection for Sequence Prediction via Deep Kernel
Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.