Automated Benchmark-Driven Design and Explanation of Hyperparameter
Optimizers
- URL: http://arxiv.org/abs/2111.14756v1
- Date: Mon, 29 Nov 2021 18:02:56 GMT
- Title: Automated Benchmark-Driven Design and Explanation of Hyperparameter
Optimizers
- Authors: Julia Moosbauer, Martin Binder, Lennart Schneider, Florian Pfisterer,
Marc Becker, Michel Lang, Lars Kotthoff, Bernd Bischl
- Abstract summary: We present a principled approach to automated benchmark-driven algorithm design applied to multifidelity HPO (MF-HPO).
First, we formalize a rich space of MF-HPO candidates that includes, but is not limited to, common HPO algorithms, and then present a configurable framework covering this space.
We challenge whether the found design choices are necessary or could be replaced by more naive and simpler ones by performing an ablation analysis.
- Score: 3.729201909920989
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated hyperparameter optimization (HPO) has gained great popularity and
is an important ingredient of most automated machine learning frameworks. The
process of designing HPO algorithms, however, is still an unsystematic and
manual process: Limitations of prior work are identified and the improvements
proposed are -- even though guided by expert knowledge -- still somewhat
arbitrary. This rarely allows for gaining a holistic understanding of which
algorithmic components are driving performance, and carries the risk of
overlooking good algorithmic design choices. We present a principled approach
to automated benchmark-driven algorithm design applied to multifidelity HPO
(MF-HPO): First, we formalize a rich space of MF-HPO candidates that includes,
but is not limited to, common HPO algorithms, and then present a configurable
framework covering this space. To find the best candidate automatically and
systematically, we follow a programming-by-optimization approach and search
over the space of algorithm candidates via Bayesian optimization. We challenge
whether the found design choices are necessary or could be replaced by more
naive and simpler ones by performing an ablation analysis. We observe that
using a relatively simple configuration, in some ways simpler than established
methods, performs very well as long as some critical configuration parameters
have the right value.
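As a rough illustration of the programming-by-optimization idea described above, the sketch below treats the design choices of an MF-HPO algorithm as a searchable configuration space. The design-space entries, the benchmark stub, and the use of plain random search in place of the paper's Bayesian optimization over algorithm candidates are all assumptions of this illustration, not the authors' framework.

```python
import random

# Hypothetical design space of MF-HPO building blocks (illustrative only).
DESIGN_SPACE = {
    "proposal":       ["random", "surrogate_based"],
    "surrogate":      ["gp", "random_forest", "knn"],
    "fidelity_steps": [2, 3, 4],   # number of multifidelity stages
    "eta":            [2, 3, 4],   # keep the top 1/eta candidates per stage
    "batch_size":     [1, 4, 8],
}

def sample_design(rng):
    """Draw one candidate MF-HPO algorithm design."""
    return {name: rng.choice(values) for name, values in DESIGN_SPACE.items()}

def benchmark_score(design):
    """Placeholder: would run the configured MF-HPO candidate on a benchmark
    suite and return an aggregated performance measure (higher is better)."""
    return random.random()

def search_designs(n_iter=200, seed=0):
    """Benchmark-driven design search; random search stands in here for the
    Bayesian optimization over algorithm candidates used in the paper."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        design = sample_design(rng)
        score = benchmark_score(design)
        if score > best_score:
            best, best_score = design, score
    return best, best_score
```

An ablation analysis in this setting would then replace individual entries of the found design with simpler defaults and re-run the benchmark to see which choices actually drive performance.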
Related papers
- Learning Regions of Interest for Bayesian Optimization with Adaptive
Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
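A minimal sketch of the level-set idea: keep only candidate points whose upper confidence bound reaches the best lower confidence bound, so the acquisition step only searches that high-confidence region. The bare-bones RBF-kernel GP posterior and all names below are illustrative assumptions, not the BALLET implementation.

```python
import numpy as np

def rbf(a, b, length_scale=0.2):
    """Squared-exponential kernel for 1-d inputs."""
    return np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / length_scale ** 2)

def gp_posterior(X, y, X_cand, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP at candidates."""
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    K_s = rbf(X, X_cand)
    mu = K_s.T @ K_inv @ y
    var = np.diag(rbf(X_cand, X_cand) - K_s.T @ K_inv @ K_s)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def region_of_interest(X_cand, mu, sd, beta=2.0):
    """Superlevel-set filter: a candidate stays in the region of interest if
    its upper confidence bound reaches the best lower confidence bound."""
    ucb, lcb = mu + beta * sd, mu - beta * sd
    return X_cand[ucb >= lcb.max()]   # maximisation convention

# usage: observe a 1-d function at a few points, then filter a dense grid
X_obs = np.array([0.1, 0.4, 0.8])
y_obs = np.sin(6 * X_obs)
X_cand = np.linspace(0, 1, 200)
roi = region_of_interest(X_cand, *gp_posterior(X_obs, y_obs, X_cand))
```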
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
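The sketch below only conveys the flavour of searching a tree-structured space of update expressions with Monte Carlo rollouts plus rejection of duplicates; the toy grammar and the string-equality check standing in for proper equivalent-form detection are assumptions of this illustration.

```python
import random

# Toy grammar over optimizer update expressions: g = gradient, m = momentum.
TERMINALS = ["g", "m"]
OPERATORS = [("*", 2), ("+", 2), ("sign", 1)]

def rollout(depth=3, rng=random):
    """One Monte Carlo rollout: randomly expand the grammar into an expression."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMINALS)
    op, arity = rng.choice(OPERATORS)
    return "(" + op + " " + " ".join(rollout(depth - 1, rng) for _ in range(arity)) + ")"

def sample_candidates(n=30, max_tries=10_000):
    """Collect distinct candidate expressions, rejecting duplicates.
    (String equality is a crude stand-in for equivalent-form detection.)"""
    seen, kept = set(), []
    for _ in range(max_tries):
        expr = rollout()
        if expr in seen:   # rejection step
            continue
        seen.add(expr)
        kept.append(expr)
        if len(kept) == n:
            break
    return kept

print(sample_candidates(5))
```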
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
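One of the quality metrics commonly used to compare multi-objective HPO procedures is the dominated hypervolume; the snippet below computes it for the two-objective minimisation case with made-up objective values, purely to illustrate the metric.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Dominated hypervolume for a 2-objective minimisation problem.
    `points` are (n, 2) objective vectors (e.g. validation error vs. model
    size); `ref` is a reference point dominated by every solution.
    A larger hypervolume indicates a better multi-objective HPO result."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]          # sort by the first objective
    front, best_f2 = [], np.inf
    for p in pts:                             # keep only non-dominated points
        if p[1] < best_f2:
            front.append(p)
            best_f2 = p[1]
    hv, prev_f2 = 0.0, ref[1]                 # staircase decomposition
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# made-up Pareto front of three configurations; prints 0.39
print(hypervolume_2d([[0.2, 0.8], [0.5, 0.4], [0.9, 0.1]], ref=[1.0, 1.0]))
```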
arXiv Detail & Related papers (2021-11-23T10:22:30Z)
- Ranking Cost: Building An Efficient and Scalable Circuit Routing Planner with Evolution-Based Optimization [49.207538634692916]
We propose a new algorithm for circuit routing, named Ranking Cost, to form an efficient and trainable router.
In our method, we introduce a new set of variables called cost maps, which help the A* router find proper paths.
Our algorithm is trained in an end-to-end manner and does not use any artificial data or human demonstration.
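A rough sketch of the role such a cost map can play: a per-cell penalty is simply added to the step cost of an ordinary grid A* search, so changing the map changes which routes the router prefers. The grid, costs, and function names are assumptions of this illustration, not the paper's router.

```python
import heapq

def astar(grid_cost, start, goal):
    """A* on a grid; `grid_cost[r][c]` is an extra per-cell penalty playing
    the role of a learned cost map that biases the router."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    def h(p):                                   # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g_best = {start: 0.0}
    came_from = {start: None}
    frontier = [(h(start), start)]
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:                        # reconstruct the found path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                g = g_best[node] + 1.0 + grid_cost[nxt[0]][nxt[1]]  # step + cost map
                if g < g_best.get(nxt, float("inf")):
                    g_best[nxt], came_from[nxt] = g, node
                    heapq.heappush(frontier, (g + h(nxt), nxt))
    return None

# usage: a 4x4 grid whose cost map discourages the four middle cells
cost_map = [[0, 0, 0, 0],
            [0, 9, 9, 0],
            [0, 9, 9, 0],
            [0, 0, 0, 0]]
print(astar(cost_map, (0, 0), (3, 3)))
```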
arXiv Detail & Related papers (2021-10-08T07:22:45Z)
- Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges [5.139260825952818]
This paper reviews important HPO methods such as grid or random search, evolutionary algorithms, Bayesian optimization, Hyperband and racing.
It gives practical recommendations regarding important choices to be made when conducting HPO, including the HPO algorithms themselves, performance evaluation, how to combine HPO with ML pipelines, runtime improvements, and parallelization.
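As a concrete anchor for the multifidelity methods in that list, here is a bare successive-halving routine, the building block behind Hyperband; the toy objective in the usage lines is made up.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Successive halving: evaluate many configurations cheaply, keep the
    top 1/eta, and re-evaluate the survivors with eta times more budget.
    `evaluate(cfg, budget)` returns a validation loss (lower is better)."""
    budget = min_budget
    while len(configs) > 1:
        scores = {cfg: evaluate(cfg, budget) for cfg in configs}
        configs = sorted(configs, key=scores.get)[:max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# usage with a toy objective: the learning rate closest to 0.1 should win,
# and larger budgets reduce the evaluation noise
lrs = [10 ** random.uniform(-4, 0) for _ in range(27)]
best = successive_halving(lrs, lambda lr, b: abs(lr - 0.1) + random.random() / b)
print(best)
```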
arXiv Detail & Related papers (2021-07-13T04:55:47Z)
- Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection [0.9281671380673306]
A key challenge in the application of evolutionary algorithms is the selection of an algorithm instance that best suits the problem at hand.
We analyze in this work how such prior performance data can be used to infer informed dynamic algorithm selection schemes for the solution of pseudo-Boolean optimization problems.
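A toy version of turning prior benchmark data into an informed one-shot selection schedule; the performance table, algorithm labels, and the "best at each checkpoint" rule are simplifying assumptions, not the paper's procedure.

```python
import numpy as np

# Hypothetical prior benchmarking data: mean best-so-far fitness of each
# algorithm at a grid of budgets (rows: algorithms, columns: checkpoints).
budgets = np.array([100, 500, 1000, 5000])
perf = np.array([
    [0.55, 0.70, 0.78, 0.84],   # e.g. a hill-climber: strong early
    [0.40, 0.72, 0.85, 0.93],   # e.g. a GA variant: stronger later
])

def one_shot_schedule(perf, budgets):
    """Derive an informed dynamic selection schedule before the run starts:
    at every budget checkpoint, plan to switch to whichever algorithm the
    benchmark data favours there (simplified to 'best at that checkpoint')."""
    return {int(b): int(np.argmax(perf[:, j])) for j, b in enumerate(budgets)}

print(one_shot_schedule(perf, budgets))   # {100: 0, 500: 1, 1000: 1, 5000: 1}
```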
arXiv Detail & Related papers (2021-02-12T12:27:02Z)
- Towards Large Scale Automated Algorithm Design by Integrating Modular Benchmarking Frameworks [0.9281671380673306]
We present a first proof-of-concept use-case that demonstrates the efficiency of the algorithm framework ParadisEO with the automated algorithm configuration tool irace and the experimental platform IOHprofiler.
Key advantages of our pipeline are fast evaluation times, the possibility to generate rich data sets, and a standardized interface that can be used to benchmark very broad classes of sampling-based optimizations.
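The sketch below shows only the general shape of such a standardized logging interface around an objective function; it is a hypothetical stand-in and does not reproduce the actual ParadisEO, irace, or IOHprofiler APIs.

```python
import random
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LoggedProblem:
    """Hypothetical standardized wrapper: any sampling-based optimizer that
    only touches the problem through this wrapper automatically produces a
    per-evaluation data set that a benchmarking platform could consume."""
    objective: Callable[[list], float]
    history: List[float] = field(default_factory=list)

    def __call__(self, x):
        y = self.objective(x)
        self.history.append(y)   # rich per-run data for later analysis
        return y

# usage: benchmark plain random sampling on a sphere function
problem = LoggedProblem(objective=lambda x: sum(v * v for v in x))
for _ in range(100):
    problem([random.uniform(-5, 5) for _ in range(3)])
print(len(problem.history), min(problem.history))
```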
arXiv Detail & Related papers (2021-02-12T10:47:00Z)
- Cost-Efficient Online Hyperparameter Optimization [94.60924644778558]
We propose an online HPO algorithm that reaches human expert-level performance within a single run of the experiment, while incurring only modest computational overhead compared to regular training.
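To make the "single run" idea concrete, here is a toy online schedule that shrinks the learning rate whenever validation loss stagnates; the rule, thresholds, and toy training loop are assumptions for illustration and mimic only the spirit of online HPO, not this paper's algorithm.

```python
def online_lr(train_step, validate, lr=1.2, steps=60, patience=5, factor=0.5):
    """Adapt a hyperparameter (the learning rate) during a single training
    run: halve it whenever validation loss has not improved for `patience`
    consecutive checks."""
    best, stale = float("inf"), 0
    for _ in range(steps):
        train_step(lr)
        loss = validate()
        if loss < best - 1e-9:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                lr, stale = lr * factor, 0   # online hyperparameter update
    return lr, best

# toy "training": gradient descent on f(w) = w^2; lr = 1.2 diverges at first,
# so the online rule cuts it until training converges
state = {"w": 5.0}
print(online_lr(lambda lr: state.update(w=state["w"] - lr * 2 * state["w"]),
                lambda: state["w"] ** 2))
```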
arXiv Detail & Related papers (2021-01-17T04:55:30Z)
- Hyperparameter Optimization via Sequential Uniform Designs [4.56877715768796]
This paper reformulates HPO as a computer experiment and proposes a novel sequential uniform design (SeqUD) strategy with three-fold advantages.
The proposed SeqUD strategy outperforms benchmark HPO methods, and it can therefore be a promising and competitive alternative to existing AutoML tools.
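A rough sketch of the sequential zoom-in idea, using Latin-hypercube-style points as a stand-in for the augmented uniform designs of SeqUD; the bounds, shrink factor, and budget below are arbitrary illustrative choices.

```python
import numpy as np

def sequd_like(objective, bounds, n_per_stage=20, stages=5, shrink=0.5, seed=0):
    """Spread space-filling points over the current box, then recentre and
    shrink the box around the best point and repeat (a simplified take on
    sequential uniform designs)."""
    rng = np.random.default_rng(seed)
    lo0, hi0 = np.array(bounds, float).T
    lo, hi = lo0.copy(), hi0.copy()
    best_x, best_y = None, np.inf
    d = len(lo)
    for _ in range(stages):
        # Latin-hypercube-style stratified sample in the current box
        u = (np.argsort(rng.random((n_per_stage, d)), axis=0)
             + rng.random((n_per_stage, d))) / n_per_stage
        X = lo + u * (hi - lo)
        y = np.array([objective(x) for x in X])
        if y.min() < best_y:
            best_y, best_x = float(y.min()), X[y.argmin()]
        half = shrink * (hi - lo) / 2          # zoom in around the incumbent
        lo = np.maximum(best_x - half, lo0)
        hi = np.minimum(best_x + half, hi0)
    return best_x, best_y

# usage: minimise a shifted quadratic over [-5, 5]^2
print(sequd_like(lambda x: float(((x - 1.0) ** 2).sum()), [(-5, 5), (-5, 5)]))
```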
arXiv Detail & Related papers (2020-09-08T08:55:02Z)
- Hyper-Parameter Optimization: A Review of Algorithms and Applications [14.524227656147968]
This paper provides a review of the most essential topics on automated hyper-parameter optimization (HPO).
The research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks.
The paper concludes with problems that exist when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
arXiv Detail & Related papers (2020-03-12T10:12:22Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
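A compact sketch of the dyadic idea: score each candidate algorithm with a single model applied to the concatenation of instance features and algorithm features, so thousands of algorithms can be ranked without training one model per algorithm. The feature sizes, synthetic data, and the linear least-squares model are assumptions of this illustration.

```python
import numpy as np

def select_algorithm(inst_feat, algo_feats, model_w):
    """Score every candidate algorithm via the learned model applied to the
    concatenated [instance || algorithm] feature vector and return the index
    of the best-scoring one (the linear model is a placeholder regressor)."""
    scores = [np.concatenate([inst_feat, a]) @ model_w for a in algo_feats]
    return int(np.argmax(scores))

# fit the placeholder linear model on synthetic dyadic training data
rng = np.random.default_rng(0)
X = rng.random((500, 8))                   # 4 instance + 4 algorithm features
y = X @ rng.random(8) + 0.01 * rng.standard_normal(500)   # synthetic performances
w, *_ = np.linalg.lstsq(X, y, rcond=None)

algos = rng.random((1000, 4))              # a large fixed set of candidate algorithms
print(select_algorithm(rng.random(4), algos, w))
```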
arXiv Detail & Related papers (2020-01-29T09:40:58Z)