Comparing Algorithm Selection Approaches on Black-Box Optimization
Problems
- URL: http://arxiv.org/abs/2306.17585v1
- Date: Fri, 30 Jun 2023 12:06:38 GMT
- Title: Comparing Algorithm Selection Approaches on Black-Box Optimization
Problems
- Authors: Ana Kostovska, Anja Jankovic, Diederick Vermetten, Saša Džeroski, Tome Eftimov, Carola Doerr
- Abstract summary: Automated AS relies on machine learning (ML) techniques to recommend the best algorithm for a given problem instance.
There are, however, no clear guidelines for choosing the most appropriate technique from the variety of available ML methods.
We compare four ML models on the task of predicting the best solver for the BBOB black-box optimization problems across 7 runtime budgets in 2 dimensions.
- Score: 2.3823600586675724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Performance complementarity of solvers available to tackle black-box
optimization problems gives rise to the important task of algorithm selection
(AS). Automated AS approaches can help replace tedious and labor-intensive
manual selection, and have already shown promising performance in various
optimization domains. Automated AS relies on machine learning (ML) techniques
to recommend the best algorithm given the information about the problem
instance. Unfortunately, there are no clear guidelines for choosing the most
appropriate one from a variety of ML techniques. Tree-based models such as
Random Forest or XGBoost have consistently demonstrated outstanding performance
for automated AS. Transformers and other tabular deep learning models have also
been increasingly applied in this context.
We investigate in this work the impact of the choice of the ML technique on
AS performance. We compare four ML models on the task of predicting the best
solver for the BBOB problems for 7 different runtime budgets in 2 dimensions.
While our results confirm that a per-instance AS has indeed impressive
potential, we also show that the particular choice of the ML technique is of
much lesser importance.
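To make the per-instance AS setup concrete, below is a minimal sketch of how such a selector could be trained with a tree-based model. It assumes precomputed per-instance features and a label naming the best solver; the feature values and solver names here are toy placeholders, and the paper compares several ML models rather than prescribing this one.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-ins: one row of landscape-style features per problem instance,
# labeled with the solver that performed best on that instance.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))            # hypothetical instance features
y = rng.choice(["CMA-ES", "DE", "PSO"], 200)  # hypothetical best-solver labels

# Tree-based models such as Random Forest are a common choice for automated AS.
selector = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(selector, X, y, cv=5).mean())

# At selection time, predict the best solver for an unseen instance's features.
selector.fit(X, y)
print(selector.predict(rng.standard_normal((1, 10))))
```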
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
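As a rough illustration of the normalized-gradient-difference idea (a sketch of the general principle, not the authors' exact NGDiff update, and omitting the adaptive learning rate), one step could combine a retain-objective gradient and a forget-objective gradient after normalizing each, so neither term dominates purely by magnitude:

```python
import numpy as np

def ngdiff_step(theta, grad_retain, grad_forget, lr=1e-2, eps=1e-12):
    """Illustrative unlearning step: descend the retain objective while
    ascending the forget objective. Normalizing both gradients keeps the
    trade-off controlled instead of dominated by either magnitude."""
    direction = (grad_retain / (np.linalg.norm(grad_retain) + eps)
                 - grad_forget / (np.linalg.norm(grad_forget) + eps))
    return theta - lr * direction

theta = ngdiff_step(np.zeros(3),
                    grad_retain=np.array([1.0, 0.0, 0.0]),
                    grad_forget=np.array([0.0, 2.0, 0.0]))
print(theta)  # moves against the retain gradient, along the forget gradient
```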
arXiv Detail & Related papers (2024-10-29T14:41:44Z) - Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories.
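A minimal sketch of how such tokens could be computed from a history of objective values, assuming minimization and using cumulative regret over the remaining steps (the paper's exact definition and normalization may differ):

```python
import numpy as np

def regret_to_go(fvals, f_star=None):
    """For each step t, sum the instantaneous regret f(x_i) - f* over the
    future part of the history (i >= t). With f* unknown, the best value
    observed in the run is a common proxy."""
    fvals = np.asarray(fvals, dtype=float)
    if f_star is None:
        f_star = fvals.min()
    regrets = fvals - f_star
    # Reverse cumulative sum: regret accumulated from step t to the end.
    return np.cumsum(regrets[::-1])[::-1]

print(regret_to_go([5.0, 3.0, 4.0, 1.0, 1.5]))  # [9.5, 5.5, 3.5, 0.5, 0.5]
```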
arXiv Detail & Related papers (2024-02-27T11:32:14Z) - Large Language Models as Optimizers [106.52386531624532]
We propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language.
In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values.
We demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
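The loop this describes is easy to sketch. In the hypothetical version below, `llm_generate` is a placeholder for whatever LLM API is used and `score_fn` evaluates a candidate solution; the meta-prompt layout is a plausible simplification, not the paper's exact template:

```python
def llm_generate(prompt: str) -> str:
    """Placeholder for an actual LLM call; an assumption, not part of OPRO."""
    raise NotImplementedError

def opro_loop(score_fn, n_steps=20):
    history = []  # (solution_text, score) pairs
    for _ in range(n_steps):
        # Meta-prompt: previously generated solutions with their values,
        # sorted ascending so the best ones appear last.
        shown = sorted(history, key=lambda p: p[1])[-10:]
        prompt = "Propose a better solution.\n" + "\n".join(
            f"solution: {s!r} score: {v:.3f}" for s, v in shown)
        candidate = llm_generate(prompt)
        history.append((candidate, score_fn(candidate)))
    return max(history, key=lambda p: p[1])
```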
arXiv Detail & Related papers (2023-09-07T00:07:15Z) - DynamoRep: Trajectory-Based Population Dynamics for Classification of
Black-box Optimization Problems [0.755972004983746]
We propose a feature extraction method that describes the trajectories of optimization algorithms using simple statistics.
We demonstrate that the proposed DynamoRep features capture enough information to identify the problem class on which the optimization algorithm is running.
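In the spirit of that description (with the exact statistics assumed rather than taken from the paper), each generation of the optimizer's population can be summarized by simple per-coordinate and fitness statistics, concatenated into one trajectory descriptor:

```python
import numpy as np

def trajectory_features(populations, fitnesses):
    """populations: list of (pop_size, dim) arrays, one per generation.
    fitnesses: list of (pop_size,) arrays. Returns a flat vector of
    simple per-generation statistics describing the trajectory."""
    feats = []
    for X, f in zip(populations, fitnesses):
        for arr in (X, f.reshape(-1, 1)):
            feats.extend([arr.min(axis=0), arr.max(axis=0),
                          arr.mean(axis=0), arr.std(axis=0)])
    return np.concatenate(feats)

rng = np.random.default_rng(1)
pops = [rng.uniform(-5, 5, (20, 2)) for _ in range(5)]  # 5 generations in 2-D
fits = [rng.random(20) for _ in range(5)]
print(trajectory_features(pops, fits).shape)  # 5 generations x 4 stats x 3 columns
```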
arXiv Detail & Related papers (2023-06-08T06:57:07Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
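ZO sign-based gradient descent is compact enough to sketch directly, assuming a standard two-point random-direction gradient estimator (details such as the number of queries and step sizes vary across variants):

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.01, mu=1e-3, n_queries=10, n_steps=300, seed=0):
    """Minimize f using function evaluations only: estimate the gradient
    with two-point random finite differences, then step along its sign."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        g = np.zeros_like(x)
        for _ in range(n_queries):
            u = rng.standard_normal(x.shape)
            g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x -= lr * np.sign(g / n_queries)
    return x

print(zo_sign_gd(lambda x: np.sum(x**2), x0=[2.0, -1.5]))  # approaches the origin
```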
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Explainable Landscape Analysis in Automated Algorithm Performance
Prediction [0.0]
We investigate the expressiveness of problem landscape features utilized by different supervised machine learning models in automated algorithm performance prediction.
The experimental results point out that the selection of the supervised ML method is crucial, since different supervised ML regression models utilize the problem landscape features differently.
arXiv Detail & Related papers (2022-03-22T15:54:17Z) - Parallel Surrogate-assisted Optimization Using Mesh Adaptive Direct
Search [0.0]
We present a method that employs surrogate models and concurrent computing at the search step of the mesh adaptive direct search (MADS) algorithm.
We conduct numerical experiments to assess the performance of the modified MADS algorithm with respect to available CPU resources.
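One way to picture such a search step (a loose sketch under simplified assumptions, not the actual MADS machinery or the paper's surrogate choice): fit a cheap surrogate on past evaluations, rank a batch of trial points with it, and spend expensive true evaluations, potentially in parallel, only on the top-ranked candidates:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def surrogate_search_step(f, X_hist, y_hist, incumbent, mesh_size,
                          n_candidates=50, n_eval=4, seed=0):
    """Rank cheap trial points with a surrogate fitted on past evaluations,
    then spend true (expensive) evaluations only on the most promising ones.
    In the actual algorithm those n_eval evaluations run concurrently."""
    rng = np.random.default_rng(seed)
    incumbent = np.asarray(incumbent, dtype=float)
    surrogate = GaussianProcessRegressor().fit(X_hist, y_hist)
    trials = incumbent + mesh_size * rng.standard_normal(
        (n_candidates, incumbent.size))
    best = trials[np.argsort(surrogate.predict(trials))][:n_eval]
    return best, np.array([f(x) for x in best])
```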
arXiv Detail & Related papers (2021-07-26T18:28:56Z) - Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z) - Black-Box Optimization Revisited: Improving Algorithm Selection Wizards
through Massive Benchmarking [8.874754363200614]
Existing studies in black-box optimization for machine learning suffer from low generalizability.
We propose a benchmark suite, OptimSuite, which covers a broad range of black-box optimization problems.
ABBO achieves competitive performance on all benchmark suites.
arXiv Detail & Related papers (2020-10-08T14:17:30Z) - Discovering Representations for Black-box Optimization [73.59962178534361]
We show that black-box optimization encodings can be automatically learned, rather than hand designed.
We show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than the standard MAP-Elites.
arXiv Detail & Related papers (2020-03-09T20:06:20Z) - Stepwise Model Selection for Sequence Prediction via Deep Kernel
Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)