Theory-inspired Parameter Control Benchmarks for Dynamic Algorithm
Configuration
- URL: http://arxiv.org/abs/2202.03259v1
- Date: Mon, 7 Feb 2022 15:00:30 GMT
- Title: Theory-inspired Parameter Control Benchmarks for Dynamic Algorithm
Configuration
- Authors: André Biedenkapp, Nguyen Dang, Martin S. Krejca, Frank Hutter,
Carola Doerr
- Abstract summary: We extend the LeadingOnes benchmark by analyzing optimal control policies that can select the parameters only from a given portfolio of possible values.
This also allows us to compute optimal parameter portfolios of a given size.
We demonstrate the usefulness of our benchmarks by analyzing the behavior of the DDQN reinforcement learning approach for dynamic algorithm configuration.
- Score: 32.055812915031666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has long been observed that the performance of evolutionary algorithms and
other randomized search heuristics can benefit from a non-static choice of the
parameters that steer their optimization behavior. Mechanisms that identify
suitable configurations on the fly ("parameter control") or via a dedicated
training process ("dynamic algorithm configuration") are therefore an important
component of modern evolutionary computation frameworks. Several approaches to
address the dynamic parameter setting problem exist, but we barely understand
which ones to prefer for which applications. As in classical benchmarking,
problem collections with a known ground truth can offer very meaningful
insights in this context. Unfortunately, settings with well-understood control
policies are very rare.
One of the few exceptions for which we know which parameter settings minimize
the expected runtime is the LeadingOnes problem. We extend this benchmark by
analyzing optimal control policies that can select the parameters only from a
given portfolio of possible values. This also allows us to compute optimal
parameter portfolios of a given size. We demonstrate the usefulness of our
benchmarks by analyzing the behavior of the DDQN reinforcement learning
approach for dynamic algorithm configuration.
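To make the ground truth concrete: for (1+1)-type algorithms using standard bit mutation on LeadingOnes, the probability of leaving fitness level i with mutation rate p is p(1-p)^i, and the expected runtime of a fitness-dependent rate policy has a well-known closed form. The sketch below is a minimal illustration built on that standard formula, not the paper's code; the function names, candidate rates, and brute-force portfolio search are assumptions made for the example.

```python
import itertools

def expected_runtime(n, policy):
    # Expected runtime of a (1+1)-type EA with standard bit mutation on
    # LeadingOnes: fitness level i is left with probability
    # policy[i] * (1 - policy[i])**i and is visited with probability 1/2.
    return 0.5 * sum(1.0 / (policy[i] * (1.0 - policy[i]) ** i)
                     for i in range(n))

def optimal_policy_from_portfolio(n, portfolio):
    # For each fitness value i, the best portfolio value is the one that
    # maximizes the improvement probability p * (1 - p)**i.
    return [max(portfolio, key=lambda p: p * (1.0 - p) ** i)
            for i in range(n)]

def best_portfolio(n, candidates, k):
    # Brute-force search over all size-k subsets of a finite candidate set
    # (only feasible for small candidate sets).
    best = None
    for subset in itertools.combinations(candidates, k):
        runtime = expected_runtime(n, optimal_policy_from_portfolio(n, subset))
        if best is None or runtime < best[0]:
            best = (runtime, subset)
    return best

if __name__ == "__main__":
    n = 50
    candidates = [i / n for i in range(1, 11)]  # mutation rates 1/n .. 10/n
    print(best_portfolio(n, candidates, k=3))
```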
Related papers
- End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
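For readers unfamiliar with OWA objectives: an Ordered Weighted Average sorts the outcomes before weighting them, so placing larger weights on the worst outcomes gives a fairness-oriented aggregation. A minimal sketch follows; the weight vector and utilities are made up for illustration, and sorting conventions vary across papers.

```python
import numpy as np

def owa(values, weights):
    # Ordered Weighted Averaging: sort outcomes (worst first here) and take
    # a weighted sum; larger weights on worse outcomes emphasize fairness.
    v = np.sort(np.asarray(values, dtype=float))   # ascending: worst first
    w = np.asarray(weights, dtype=float)
    assert v.shape == w.shape and np.isclose(w.sum(), 1.0)
    return float(np.dot(w, v))

# Example: three stakeholders' utilities under two candidate decisions.
fair_weights = np.array([0.5, 0.3, 0.2])    # emphasize the worst-off
print(owa([0.9, 0.2, 0.7], fair_weights))   # penalized for the 0.2 outcome
print(owa([0.6, 0.6, 0.6], fair_weights))   # balanced outcome scores higher
```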
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - A Unified Gaussian Process for Branching and Nested Hyperparameter
Optimization [19.351804144005744]
In deep learning, tuning parameters with conditional dependencies is common in practice.
The new GP model accounts for the dependent structure among input variables through a new kernel function.
High prediction accuracy and better optimization efficiency are observed in a series of synthetic simulations and real data applications of neural networks.
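One way to picture a kernel over conditionally dependent hyperparameters is sketched below: always-active dimensions are compared directly, while nested dimensions are only compared when both configurations take the same branch. This is a generic toy construction for illustration, not the kernel proposed in the paper.

```python
import numpy as np

def branching_kernel(a, b, length=1.0, rho=0.5):
    # Toy kernel for configurations with a branching variable: global
    # dimensions always use an RBF; nested dimensions only contribute when
    # both configurations take the same branch, otherwise similarity is
    # discounted by rho.
    k_global = np.exp(-np.sum((a["global"] - b["global"]) ** 2) / (2 * length ** 2))
    if a["branch"] == b["branch"]:
        k_nested = np.exp(-np.sum((a["nested"] - b["nested"]) ** 2) / (2 * length ** 2))
        return k_global * k_nested
    return rho * k_global

x1 = {"global": np.array([0.2]), "branch": "adam", "nested": np.array([0.9])}
x2 = {"global": np.array([0.3]), "branch": "sgd",  "nested": np.array([0.1])}
print(branching_kernel(x1, x2))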
arXiv Detail & Related papers (2024-01-19T21:11:32Z) - Tree-Structured Parzen Estimator: Understanding Its Algorithm Components
and Their Roles for Better Empirical Performance [1.370633147306388]
Tree-structured Parzen estimator (TPE) is widely used in recent parameter tuning frameworks.
Despite its popularity, the roles of its control parameters and the intuition behind the algorithm have not been discussed so far.
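A minimal one-dimensional sketch of the TPE mechanism the paper dissects: split past trials into "good" and "bad" groups, fit a density to each, and propose the candidate with the best density ratio. Real TPE adds priors, tree-structured search spaces, and exactly the control parameters (such as the quantile gamma) whose roles the paper studies; all names and the toy objective below are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def tpe_suggest(observed_x, observed_y, gamma=0.25, n_candidates=64, seed=0):
    # Split trials by the gamma-quantile of the objective (lower is better),
    # fit a density to each group, return the candidate maximizing l(x)/g(x).
    rng = np.random.default_rng(seed)
    x = np.asarray(observed_x, dtype=float)
    y = np.asarray(observed_y, dtype=float)
    cut = np.quantile(y, gamma)
    good, bad = x[y <= cut], x[y > cut]
    l, g = gaussian_kde(good), gaussian_kde(bad)   # needs a few points per group
    # Crude stand-in for sampling from l(x): perturb the good observations.
    cands = rng.choice(good, size=n_candidates) \
        + rng.normal(0.0, good.std() + 1e-12, n_candidates)
    return float(cands[np.argmax(l(cands) / (g(cands) + 1e-12))])

# Toy usage: past evaluations of f(x) = (x - 2)^2 at random points.
xs = np.random.default_rng(1).uniform(-5.0, 5.0, 30)
print(tpe_suggest(xs, (xs - 2.0) ** 2))   # next suggestion, near x = 2
```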
arXiv Detail & Related papers (2023-04-21T17:02:38Z) - Using Automated Algorithm Configuration for Parameter Control [0.7742297876120562]
Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies to control parameters of algorithms in a data-driven fashion.
We propose a new DAC benchmark: controlling the key parameter $\lambda$ in the $(1+(\lambda,\lambda))$ Genetic Algorithm for solving OneMax problems.
Our approach is able to consistently outperform the default parameter control policy of the benchmark derived from previous theoretical work on sufficiently large problem sizes.
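The theory-derived default policy referred to above is, to a first approximation, a fitness-dependent choice of lambda of roughly sqrt(n / (n - f(x))) for the (1+(lambda,lambda)) GA on OneMax. The sketch below illustrates such a policy; it is an assumption-laden simplification, not the benchmark's actual implementation.

```python
import math

def theory_lambda(n, fitness):
    # Fitness-dependent lambda suggested by theory for the (1+(lambda,lambda))
    # GA on OneMax: roughly sqrt(n / (n - f(x))), rounded and capped.
    remaining = max(n - fitness, 1)
    return max(1, min(n, round(math.sqrt(n / remaining))))

n = 100
for f in (0, 50, 90, 99):
    print(f, theory_lambda(n, f))   # lambda grows as the optimum gets close
```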
arXiv Detail & Related papers (2023-02-23T20:57:47Z) - Online Continuous Hyperparameter Optimization for Generalized Linear Contextual Bandits [55.03293214439741]
In contextual bandits, an agent sequentially takes actions from a time-dependent action set based on past experience.
We propose the first online continuous hyperparameter tuning framework for contextual bandits.
We show that it achieves sublinear regret in theory and consistently outperforms all existing methods on both synthetic and real datasets.
arXiv Detail & Related papers (2023-02-18T23:31:20Z) - On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem.
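The sparse fine-tuning view can be pictured as applying a fixed binary mask to the gradient update so that only a small set of parameters moves while the rest stay frozen. The toy sketch below is only an illustration of that view; the mask choice and learning rate are arbitrary, and concrete PEFT methods differ (e.g. via low-rank or adapter structure).

```python
import numpy as np

def sparse_update(params, grads, mask, lr=1e-3):
    # Only parameters selected by the binary mask receive gradient updates;
    # everything else is frozen.
    return params - lr * mask * grads

params = np.random.randn(10)
grads = np.random.randn(10)
mask = np.zeros(10)
mask[:2] = 1.0                      # tune only 2 of 10 parameters
print(sparse_update(params, grads, mask))
```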
arXiv Detail & Related papers (2022-11-28T17:41:48Z) - A Comparative study of Hyper-Parameter Optimization Tools [2.6097538974670935]
We compare the performance of four Python libraries, namely Optuna, Hyperopt, Optunity, and Sequential Model-based Algorithm Configuration (SMAC).
We found that Optuna performs better on the CASH problem and the NeurIPS black-box optimization challenge.
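For context on what such a comparison exercises, here is a minimal Optuna usage example; the toy quadratic objective and trial budget are made up, and the other libraries expose comparable define-an-objective-and-optimize interfaces.

```python
import optuna

def objective(trial):
    # Toy objective: a 1-D quadratic with its minimum at x = 2.
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```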
arXiv Detail & Related papers (2022-01-17T14:49:36Z) - Parameter Tuning Strategies for Metaheuristic Methods Applied to
Discrete Optimization of Structural Design [0.0]
This paper presents several strategies to tune the parameters of metaheuristic methods for (discrete) design optimization of reinforced concrete (RC) structures.
A novel utility metric is proposed, based on the area under the average performance curve.
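One plausible reading of that utility metric is sketched below, assuming that lower objective values are better and that each tuning run records a best-so-far curve over the evaluation budget; the data and the negation are illustrative choices, not the paper's definition.

```python
import numpy as np

def utility(evals, runs):
    # Area under the average best-so-far curve over the tuning budget;
    # costs are minimized, so a smaller area is better and we negate it
    # to obtain a larger-is-better utility.
    avg_curve = np.mean(runs, axis=0)
    return -np.trapz(avg_curve, evals)

evals = np.array([10, 20, 40, 80])
runs = np.array([[5.0, 3.0, 2.5, 2.0],    # best-so-far cost, run 1
                 [6.0, 4.0, 3.0, 2.5]])   # best-so-far cost, run 2
print(utility(evals, runs))
```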
arXiv Detail & Related papers (2021-10-12T17:34:39Z) - On the Optimality of Batch Policy Optimization Algorithms [106.89498352537682]
Batch policy optimization considers leveraging existing data for policy construction before interacting with an environment.
We show that any confidence-adjusted index algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral.
We introduce a new weighted-minimax criterion that considers the inherent difficulty of optimal value prediction.
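A confidence-adjusted index can be pictured as the empirical mean plus a signed confidence width, with the sign controlling the degree of optimism. The sketch below is a generic illustration of that family, not the paper's exact index or width.

```python
import numpy as np

def confidence_adjusted_index(mean_rewards, counts, alpha, c=1.0):
    # alpha = +1 is optimistic (UCB-style), 0 neutral, -1 pessimistic
    # (LCB-style); the width shrinks with the number of observations.
    width = c / np.sqrt(np.maximum(counts, 1))
    return mean_rewards + alpha * width

means = np.array([0.5, 0.6, 0.4])
counts = np.array([100, 10, 1000])
best = int(np.argmax(confidence_adjusted_index(means, counts, alpha=-1)))
print(best)   # pessimism favors well-estimated actions
```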
arXiv Detail & Related papers (2021-04-06T05:23:20Z) - Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance than standard methods, at a fraction of the wallclock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
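The "sequence of variational algorithms" idea can be illustrated as optimizing one block of ansatz parameters at a time while freezing the rest. The following block-wise sketch is a generic stand-in with placeholder cost, blocks, and optimizer, not the actual PECT procedure.

```python
import numpy as np
from scipy.optimize import minimize

def sequential_block_optimization(cost, theta0, blocks, maxiter=100):
    # Optimize the cost over one parameter block at a time, keeping the
    # other parameters fixed, instead of optimizing all parameters jointly.
    theta = np.array(theta0, dtype=float)
    for block in blocks:                      # e.g. [[0, 1], [2, 3]]
        def sub_cost(sub):
            full = theta.copy()
            full[block] = sub
            return cost(full)
        res = minimize(sub_cost, theta[block], method="COBYLA",
                       options={"maxiter": maxiter})
        theta[block] = res.x
    return theta

# Example: minimize a toy cost over 4 parameters, two at a time.
cost = lambda t: float(np.sum((t - np.array([0.1, 0.2, 0.3, 0.4])) ** 2))
print(sequential_block_optimization(cost, np.zeros(4), blocks=[[0, 1], [2, 3]]))
```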
arXiv Detail & Related papers (2020-10-01T18:14:11Z)