Improving Nevergrad's Algorithm Selection Wizard NGOpt through Automated
Algorithm Configuration
- URL: http://arxiv.org/abs/2209.04412v1
- Date: Fri, 9 Sep 2022 17:28:10 GMT
- Title: Improving Nevergrad's Algorithm Selection Wizard NGOpt through Automated
Algorithm Configuration
- Authors: Risto Trajanov, Ana Nikolikj, Gjorgjina Cenikj, Fabien Teytaud,
Mathurin Videau, Olivier Teytaud, Tome Eftimov, Manuel López-Ibáñez,
Carola Doerr
- Abstract summary: State-of-the-art algorithm selection wizards are complex and difficult to improve.
We propose the use of automated configuration methods for improving their performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithm selection wizards are effective and versatile tools that
automatically select an optimization algorithm given high-level information
about the problem and available computational resources, such as number and
type of decision variables, maximal number of evaluations, possibility to
parallelize evaluations, etc. State-of-the-art algorithm selection wizards are
complex and difficult to improve. We propose in this work the use of automated
configuration methods for improving their performance by finding better
configurations of the algorithms that compose them. In particular, we use
elitist iterated racing (irace) to find CMA configurations for specific
artificial benchmarks that replace the hand-crafted CMA configurations
currently used in the NGOpt wizard provided by the Nevergrad platform. We
discuss in detail the setup of irace for the purpose of generating
configurations that work well over the diverse set of problem instances within
each benchmark. Our approach improves the performance of the NGOpt wizard, even
on benchmark suites that were not part of the tuning by irace.
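The core idea of elitist iterated racing, which irace implements, can be illustrated with a small self-contained sketch. This is not irace itself, nor Nevergrad's CMA: the benchmark (a shifted sphere), the candidate "configurations" (step sizes of a toy (1+1)-style search), and all function names are my own illustrative assumptions. The essential mechanism is shown: candidates race across problem instances, the worst performer is eliminated each round, and a small set of elites always survives to the next race.

```python
import random
import statistics

def sphere(x, shift):
    """Toy benchmark instance: shifted sphere function."""
    return sum((xi - shift) ** 2 for xi in x)

def run_config(step_size, instance_shift, rng, dim=5, budget=60):
    """Evaluate one configuration: a fixed-step-size (1+1)-style search."""
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = sphere(best, instance_shift)
    for _ in range(budget):
        cand = [xi + rng.gauss(0, step_size) for xi in best]
        f = sphere(cand, instance_shift)
        if f < best_f:
            best, best_f = cand, f
    return best_f

def elitist_race(candidates, instances, seed=0, keep=2):
    """Tiny irace-like loop: race candidate step sizes across instances,
    dropping the worst performer each round while at least `keep`
    elites always survive to the next race."""
    rng = random.Random(seed)
    survivors = list(candidates)
    for shift in instances:
        # Mean performance of each surviving candidate on this instance.
        scores = {c: statistics.mean(
            run_config(c, shift, random.Random(rng.randrange(10**6)))
            for _ in range(5)) for c in survivors}
        # Keep the best ones; eliminate the single worst (elitism: never
        # go below `keep` survivors).
        survivors = sorted(survivors, key=scores.get)[
            :max(keep, len(survivors) - 1)]
    return survivors[0]

best = elitist_race(candidates=[0.01, 0.3, 1.0, 5.0],
                    instances=[0.0, 1.5, -2.0])
print(best)
```

The racing structure is the point here: unpromising configurations are discarded early, so the evaluation budget concentrates on the candidates that work well across the diverse instances, which is exactly the property the paper needs when tuning CMA configurations per benchmark.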
Related papers
- CATBench: A Compiler Autotuning Benchmarking Suite for Black-box Optimization [5.909352339240516]
We present CATBench, a comprehensive benchmarking suite that captures the complexities of compiler autotuning.
The benchmarks in CATBench span a range of machine learning-oriented computations, from tensor algebra to image processing and clustering.
We validate CATBench on several state-of-the-art algorithms, revealing their strengths and weaknesses.
arXiv Detail & Related papers (2024-06-24T20:15:04Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Non-Elitist Selection Can Improve the Performance of Irace [0.8258451067861933]
We study two alternative selection methods for tuning ant colony optimization algorithms for traveling salesperson problems and the quadratic assignment problem.
The experimental results show improvement on the tested benchmarks compared to the default selection of irace.
In addition, the obtained results indicate that non-elitist selection can obtain diverse algorithm configurations, which encourages us to explore a wider range of solutions to understand the behavior of algorithms.
arXiv Detail & Related papers (2022-03-17T10:34:30Z)
- Algorithm Selection on a Meta Level [58.720142291102135]
We introduce the problem of meta algorithm selection, which essentially asks for the best way to combine a given set of algorithm selectors.
We present a general methodological framework for meta algorithm selection as well as several concrete learning methods as instantiations of this framework.
arXiv Detail & Related papers (2021-07-20T11:23:21Z)
- Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection [0.9281671380673306]
A key challenge in the application of evolutionary algorithms is the selection of an algorithm instance that best suits the problem at hand.
We analyze in this work how such prior performance data can be used to infer informed dynamic algorithm selection schemes for the solution of pseudo-Boolean optimization problems.
arXiv Detail & Related papers (2021-02-12T12:27:02Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case [4.33419118449588]
We show that even single-switch dynamic algorithm selection (dynAS) can potentially result in significant performance gains.
We also discuss key challenges in dynAS, and argue that the BBOB-framework can become a useful tool in overcoming these.
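The single-switch idea in this entry can be sketched on a toy problem. Everything here is an illustrative assumption of mine, not the paper's setup: two hand-written algorithms (uniform random search for exploration, a small-step hill climber for exploitation) are run on a sphere function, switching exactly once at a chosen point in the budget.

```python
import random

def sphere(x):
    """Toy objective: sphere function, minimum at the origin."""
    return sum(xi * xi for xi in x)

def random_search_step(best, best_f, rng):
    """Global exploration: sample a fresh uniform point."""
    cand = [rng.uniform(-5, 5) for _ in range(len(best))]
    f = sphere(cand)
    return (cand, f) if f < best_f else (best, best_f)

def local_search_step(best, best_f, rng):
    """Local exploitation: small Gaussian perturbation of the incumbent."""
    cand = [xi + rng.gauss(0, 0.1) for xi in best]
    f = sphere(cand)
    return (cand, f) if f < best_f else (best, best_f)

def single_switch(budget, switch_at, seed=0, dim=5):
    """Run the explorer for the first part of the budget, then switch
    once to the exploiter (a single-switch dynamic selection policy)."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = sphere(best)
    for t in range(budget):
        step = random_search_step if t < switch_at else local_search_step
        best, best_f = step(best, best_f, rng)
    return best_f

static = single_switch(budget=200, switch_at=200)  # random search only
dynamic = single_switch(budget=200, switch_at=50)  # switch after 50 evals
print(static, dynamic)
```

On this toy problem the single switch typically beats running either algorithm alone, which is the effect the paper investigates at scale on the BBOB suite.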
arXiv Detail & Related papers (2020-06-11T16:36:11Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences.