Automated Dynamic Algorithm Configuration
- URL: http://arxiv.org/abs/2205.13881v1
- Date: Fri, 27 May 2022 10:30:25 GMT
- Title: Automated Dynamic Algorithm Configuration
- Authors: Steven Adriaensen, André Biedenkapp, Gresa Shala, Noor Awad, Theresa Eimer, Marius Lindauer and Frank Hutter
- Abstract summary: The performance of an algorithm often critically depends on its parameter configuration.
It has been shown that some algorithm parameters are best adjusted dynamically during execution.
A promising alternative is to automatically learn such dynamic parameter adaptation policies from data.
- Score: 39.39845379026921
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of an algorithm often critically depends on its parameter
configuration. While a variety of automated algorithm configuration methods
have been proposed to relieve users from the tedious and error-prone task of
manually tuning parameters, there is still a lot of untapped potential as the
learned configuration is static, i.e., parameter settings remain fixed
throughout the run. However, it has been shown that some algorithm parameters
are best adjusted dynamically during execution, e.g., to adapt to the current
part of the optimization landscape. Thus far, this is most commonly achieved
through hand-crafted heuristics. A promising recent alternative is to
automatically learn such dynamic parameter adaptation policies from data. In
this article, we give the first comprehensive account of this new field of
automated dynamic algorithm configuration (DAC), present a series of recent
advances, and provide a solid foundation for future research in this field.
Specifically, we (i) situate DAC in the broader historical context of AI
research; (ii) formalize DAC as a computational problem; (iii) identify the
methods used in prior art to tackle this problem; (iv) conduct empirical case
studies for using DAC in evolutionary optimization, AI planning, and machine
learning.
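As a hedged illustration of what a dynamic parameter adaptation policy is, the minimal Python sketch below controls a single parameter (a step size) of a toy iterative algorithm. The `TargetAlgorithm` class, its state features, and the hand-coded policy are hypothetical stand-ins, not the paper's formalization or API; in DAC the policy would be learned from data, e.g. with reinforcement learning.

```python
class TargetAlgorithm:
    """Hypothetical iterative algorithm whose step size we control."""
    def __init__(self):
        self.x, self.t = 10.0, 0

    def state(self):
        # Observable features describing the current algorithm state.
        return (self.t, abs(self.x))

    def step(self, step_size):
        # Execute one iteration under the chosen parameter value.
        self.x -= step_size * self.x
        self.t += 1
        return -abs(self.x)  # reward signal: negative distance to the optimum

def policy(state):
    # A dynamic configuration policy pi: state -> parameter value.
    # Hand-coded here; DAC would learn this mapping from data.
    t, dist = state
    return 0.5 if dist > 1.0 else 0.1

algo = TargetAlgorithm()
for _ in range(20):
    reward = algo.step(policy(algo.state()))
print("final distance to optimum:", abs(algo.x))
```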
Related papers
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
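A minimal sketch of the classic Kalman recursion that this fusion builds on, with a comment marking where a learned component could be substituted; the scalar model and all constants below are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Minimal 1-D Kalman filter. In DNN-aided variants of the kind surveyed,
# learned components can replace model-based quantities such as the gain.
def kalman_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.1):
    # Predict.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: the gain K is computed from the model here; an AI-aided
    # filter could instead produce K (or the posterior) with a neural network.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in np.sin(np.linspace(0, 3, 30)) + 0.1 * np.random.randn(30):
    x, P = kalman_step(x, P, z)
print("final estimate:", x)
```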
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- Using Automated Algorithm Configuration for Parameter Control [0.7742297876120562]
Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies that control the parameters of algorithms in a data-driven fashion.
We propose as a new DAC benchmark the control of the key parameter $\lambda$ in the $(1+(\lambda,\lambda))$ Genetic Algorithm for solving OneMax problems.
Our approach consistently outperforms the benchmark's default parameter control policy, which is derived from previous theoretical work, on sufficiently large problem sizes.
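For context, a hedged sketch of a fitness-dependent control policy of the kind derived in prior theoretical work on this benchmark, assuming the $\lambda \approx \sqrt{n/(n - f(x))}$ rule often cited for OneMax; a learned DAC policy would replace this closed form with a data-driven mapping.

```python
import math

# Assumed theory-derived control rule for OneMax with n bits and current
# fitness f(x): lambda ~ sqrt(n / (n - f(x))). Illustrative only.
def theory_lambda(n, fitness):
    if fitness >= n:
        return 1.0  # optimum reached; the parameter no longer matters
    return math.sqrt(n / (n - fitness))

n = 1000
for f in (0, 500, 900, 990, 999):
    print(f"f(x)={f:4d} -> lambda={theory_lambda(n, f):6.2f}")
```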
arXiv Detail & Related papers (2023-02-23T20:57:47Z)
- Multi-agent Dynamic Algorithm Configuration [29.065510165544865]
Automated algorithm configuration relieves users from tedious, trial-and-error tuning tasks.
In this paper, we propose multi-agent DAC (MA-DAC) for complex algorithms.
We show that MA-DAC achieves superior performance compared with other configuration tuning approaches.
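A hedged sketch of the multi-agent factorization idea: one agent per parameter, all acting on a shared state. The parameter names and the random placeholder agents below are hypothetical; MA-DAC trains such agents cooperatively rather than sampling at random.

```python
import random

# Each heterogeneous parameter of a complex algorithm gets its own agent;
# the joint configuration combines their individual choices.
PARAM_SPACES = {
    "mutation_rate": [0.01, 0.05, 0.1],
    "population_size": [20, 50, 100],
    "crossover_prob": [0.6, 0.8, 1.0],
}

def agent_act(param, shared_state):
    # Placeholder per-parameter agent (random); in MA-DAC these policies
    # are learned with cooperative multi-agent reinforcement learning.
    return random.choice(PARAM_SPACES[param])

shared_state = {"iteration": 0, "stagnation": 3}
config = {p: agent_act(p, shared_state) for p in PARAM_SPACES}
print(config)
```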
arXiv Detail & Related papers (2022-10-13T08:39:32Z)
- Theory-inspired Parameter Control Benchmarks for Dynamic Algorithm Configuration [32.055812915031666]
We extend an existing theory-derived benchmark by analyzing optimal control policies that can select the parameters only from a given portfolio of possible values.
We also show how to compute optimal parameter portfolios of a given size.
We demonstrate the usefulness of our benchmarks by analyzing the behavior of the DDQN reinforcement learning approach for dynamic algorithm configuration.
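A hedged sketch of the portfolio notion on synthetic data: given a table of expected progress per state and parameter value, a portfolio-restricted policy picks the best value the portfolio contains in each state, and the best portfolio of a given size maximizes the summed progress. All numbers below are illustrative.

```python
import itertools

# Synthetic expected progress of parameter value v in state s.
states = range(4)
values = [1, 2, 4, 8]
progress = {(s, v): 1.0 / (1 + abs(s - v)) for s in states for v in values}

def portfolio_score(portfolio):
    # The optimal portfolio-restricted policy greedily picks, per state,
    # the best value available in the portfolio.
    return sum(max(progress[(s, v)] for v in portfolio) for s in states)

best = max(itertools.combinations(values, 2), key=portfolio_score)
print("best size-2 portfolio:", best)
```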
arXiv Detail & Related papers (2022-02-07T15:00:30Z)
- Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection [0.9281671380673306]
A key challenge in the application of evolutionary algorithms is the selection of an algorithm instance that best suits the problem at hand.
We analyze in this work how prior benchmark performance data can be used to infer informed dynamic algorithm selection schemes for the solution of pseudo-Boolean optimization problems.
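A hedged sketch of one-shot dynamic selection from prior data: from a synthetic benchmark matrix of per-budget performances, fix before the run which algorithm to use in each budget phase. The algorithm names and data are illustrative placeholders.

```python
import numpy as np

# Synthetic prior data: mean best-so-far score per algorithm and budget.
rng = np.random.default_rng(0)
budgets = [100, 500, 1000]
algos = ["EA", "GHC", "CMA-ES"]  # hypothetical portfolio
perf = rng.random((len(algos), len(budgets)))  # higher = better

# One-shot: the switching schedule is decided before the run starts.
schedule = {b: algos[int(np.argmax(perf[:, j]))]
            for j, b in enumerate(budgets)}
print(schedule)
```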
arXiv Detail & Related papers (2021-02-12T12:27:02Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
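A hedged sketch of the predictor idea: a neural network maps boundary-condition features and degree of filling directly to a (flattened) density field. The synthetic data and scikit-learn model below are assumptions for illustration and do not reflect the paper's training scheme, which avoids pre-optimized data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: 4 boundary-condition features + fill degree
# as inputs, flattened 8x8 density grids in [0, 1] as outputs.
rng = np.random.default_rng(0)
X = rng.random((200, 5))
Y = rng.random((200, 64))

predictor = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
predictor.fit(X, Y)
design = predictor.predict(X[:1]).reshape(8, 8)  # one predicted design
print(design.shape)
```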
arXiv Detail & Related papers (2020-12-11T14:33:27Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for training the parameterized circuits (ansätze) used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
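A hedged sketch of that sequential scheme as plain block-coordinate descent: optimize one subset of parameters at a time while freezing the rest. The toy quadratic objective stands in for a quantum expectation value; this illustrates the structure, not PECT itself.

```python
import numpy as np

# Toy stand-in objective for the quantum expectation value.
rng = np.random.default_rng(1)
theta = rng.standard_normal(12)
cost = lambda t: float(np.sum(t ** 2))

# Optimize parameter blocks sequentially instead of all at once.
for block in np.array_split(np.arange(theta.size), 4):
    for _ in range(50):           # inner optimization of the active block
        grad = 2 * theta[block]   # gradient w.r.t. active parameters only
        theta[block] -= 0.1 * grad
    print(f"after block {block.tolist()}: cost={cost(theta):.4f}")
```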
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
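A hedged sketch of the survival-analysis ingredient: a hand-rolled Kaplan-Meier estimator over runtimes censored at a cutoff. Run2Survive builds richer distributional runtime models and a decision-theoretic selection rule on top of such data; this shows only the underlying idea, with made-up runtimes.

```python
import numpy as np

def kaplan_meier(times, observed):
    # Estimate P(runtime > t) from censored runtime observations.
    order = np.argsort(times)
    times, observed = np.asarray(times)[order], np.asarray(observed)[order]
    at_risk, surv, curve = len(times), 1.0, []
    for t, d in zip(times, observed):
        if d:                      # an actual finish (not censored)
            surv *= 1 - 1 / at_risk
        at_risk -= 1
        curve.append((t, surv))
    return curve

# Runtimes in seconds; False marks runs censored at the cutoff (100 s).
times    = [3, 10, 100, 42, 100, 7]
observed = [True, True, False, True, False, True]
for t, s in kaplan_meier(times, observed):
    print(f"P(runtime > {t:3d}s) ~ {s:.2f}")
```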
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Landscape-Aware Fixed-Budget Performance Regression and Algorithm Selection for Modular CMA-ES Variants [1.0965065178451106]
We show that it is possible to achieve high-quality performance predictions with off-the-shelf supervised learning approaches.
We test this approach on a portfolio of very similar algorithms, which we choose from the family of modular CMA-ES algorithms.
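A hedged sketch of the regression step with an off-the-shelf model: one random forest per algorithm variant maps landscape features to fixed-budget performance, and the variant with the best prediction is selected. Features and scores below are synthetic placeholders, not ELA data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic landscape features for 300 problem instances.
rng = np.random.default_rng(0)
features = rng.random((300, 10))

# One regression model per algorithm variant (three here, illustratively).
models = []
for variant in range(3):
    perf = rng.random(300)  # observed fixed-budget scores (lower = better)
    models.append(RandomForestRegressor(n_estimators=50).fit(features, perf))

# Predict performance on a new instance and pick the best variant.
new_instance = rng.random((1, 10))
preds = [m.predict(new_instance)[0] for m in models]
print("selected variant:", int(np.argmin(preds)))
```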
arXiv Detail & Related papers (2020-06-17T13:34:57Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm provably achieves the best-available convergence rate for objectives that do not satisfy the Polyak-Lojasiewicz (PL) condition while simultaneously outperforming existing algorithms on PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.