Optimizing with Low Budgets: a Comparison on the Black-box Optimization
Benchmarking Suite and OpenAI Gym
- URL: http://arxiv.org/abs/2310.00077v3
- Date: Tue, 2 Jan 2024 22:54:55 GMT
- Title: Optimizing with Low Budgets: a Comparison on the Black-box Optimization
Benchmarking Suite and OpenAI Gym
- Authors: Elena Raponi, Nathanael Carraz Rakotonirina, Jérémy Rapin, Carola
Doerr, Olivier Teytaud
- Abstract summary: Bayesian optimization (BO) algorithms are popular in machine learning (ML).
We compare BBO tools for ML with more classical heuristics on the COCO benchmark suite and OpenAI Gym.
Some algorithms from the BBO community perform surprisingly well on ML tasks.
- Score: 2.511157007295545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing ubiquity of machine learning (ML) has led it to enter various
areas of computer science, including black-box optimization (BBO). Recent
research is particularly concerned with Bayesian optimization (BO). BO-based
algorithms are popular in the ML community, as they are used for hyperparameter
optimization and more generally for algorithm configuration. However, their
efficiency decreases as the dimensionality of the problem and the budget of
evaluations increase. Meanwhile, derivative-free optimization methods have
evolved independently in the optimization community. It is therefore important to
understand whether cross-fertilization is possible between the two communities,
ML and BBO, i.e., whether algorithms that are heavily used in ML also work well
in BBO and vice versa. Comparative experiments often involve rather small
benchmarks and show visible problems in the experimental setup, such as poor
initialization of baselines, overfitting due to problem-specific setting of
hyperparameters, and low statistical significance.
With this paper, we update and extend a comparative study presented by Hutter
et al. in 2013. We compare BBO tools for ML with more classical heuristics,
first on the well-known BBOB benchmark suite from the COCO environment and then
on Direct Policy Search for OpenAI Gym, a reinforcement learning benchmark. Our
results confirm that BO-based optimizers perform well on both benchmarks when
budgets are limited, albeit with a higher computational cost, while they are
often outperformed by algorithms from other families when the evaluation budget
becomes larger. We also show that some algorithms from the BBO community
perform surprisingly well on ML tasks.
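The low-budget comparison described above can be illustrated with a minimal sketch. This is not the paper's actual toolchain (which builds on the BBOB/COCO suite and established BO and heuristic solvers); it simply pits two classic derivative-free baselines, uniform random search and an elitist (1+1) evolution strategy with 1/5-th success-rule step-size adaptation, against each other on a sphere function under a tiny evaluation budget:

```python
import random


def sphere(x):
    # BBOB-style separable test function: f(x) = sum(x_i^2), optimum at 0.
    return sum(xi * xi for xi in x)


def random_search(f, dim, budget, rng):
    # Baseline: sample uniformly in [-5, 5]^dim, keep the best value seen.
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(best, f(x))
    return best


def one_plus_one_es(f, dim, budget, rng, sigma=1.0):
    # (1+1) evolution strategy with 1/5-th success rule step-size adaptation:
    # expand the step size on success, shrink it (more slowly) on failure.
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(budget - 1):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.5          # successful step: widen the search
        else:
            sigma *= 1.5 ** -0.25  # failed step: narrow the search
    return fx


if __name__ == "__main__":
    rng = random.Random(0)
    budget, dim = 100, 5
    print("random search:", random_search(sphere, dim, budget, rng))
    print("(1+1)-ES     :", one_plus_one_es(sphere, dim, budget, rng))
```

With budgets this small, the ranking between such baselines (and against BO-based optimizers, which the paper shows are strong in exactly this regime but cost more per iteration) can flip from problem to problem, which is why the paper argues for large, carefully initialized benchmarks rather than one-off comparisons.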
Related papers
- Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling [96.47086913559289]
Gradient-based algorithms are widely used in bilevel optimization.
We introduce a without-replacement sampling based algorithm which achieves a faster convergence rate.
We validate our algorithms over both synthetic and real-world applications.
arXiv Detail & Related papers (2024-11-07T17:05:31Z)
- $f$-PO: Generalizing Preference Optimization with $f$-divergence Minimization [91.43730624072226]
$f$-PO is a novel framework that generalizes and extends existing approaches.
We conduct experiments on state-of-the-art language models using benchmark datasets.
arXiv Detail & Related papers (2024-10-29T02:11:45Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Comparison of High-Dimensional Bayesian Optimization Algorithms on BBOB [0.40498500266986387]
We compare five state-of-the-art high-dimensional BO algorithms with vanilla BO and CMA-ES at increasing dimensionality, ranging from 10 to 60 variables.
Our results confirm the superiority of BO over CMA-ES for limited evaluation budgets.
arXiv Detail & Related papers (2023-03-02T01:14:15Z)
- Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO).
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z)
- $\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization [40.30019289383378]
We propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum.
In contrast to previous approaches, $\pi$BO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions.
We also demonstrate that $\pi$BO improves on the state-of-the-art performance for a popular deep learning task, with a 12.5$\times$ time-to-accuracy speedup over prominent BO approaches.
arXiv Detail & Related papers (2022-04-23T11:07:13Z)
- MBORE: Multi-objective Bayesian Optimisation by Density-Ratio Estimation [0.01652719262940403]
Optimisation problems often have multiple conflicting objectives that can be computationally and/or financially expensive to evaluate.
Mono-surrogate Bayesian optimisation (BO) is a popular model-based approach for optimising such black-box functions.
We extend previous work on BO by density-ratio estimation (BORE) to the multi-objective setting.
arXiv Detail & Related papers (2022-03-31T09:27:59Z)
- ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm, which adopts an Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
arXiv Detail & Related papers (2021-10-13T19:36:50Z)
- Revisiting Bayesian Optimization in the light of the COCO benchmark [1.4467794332678539]
This article reports a large investigation of the effects of common and less common design choices on the performance of (Gaussian process based) BO.
The code developed for this study makes the new version (v2.1.1) of the R package DiceOptim available on CRAN.
arXiv Detail & Related papers (2021-03-30T19:45:18Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic gradient estimator named stocBiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- Black-Box Optimization Revisited: Improving Algorithm Selection Wizards through Massive Benchmarking [8.874754363200614]
Existing studies in black-box optimization for machine learning suffer from low generalizability.
We propose a benchmark suite, OptimSuite, which covers a broad range of black-box optimization problems.
ABBO achieves competitive performance on all benchmark suites.
arXiv Detail & Related papers (2020-10-08T14:17:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.