Fast Perturbative Algorithm Configurators
- URL: http://arxiv.org/abs/2007.03336v1
- Date: Tue, 7 Jul 2020 10:48:32 GMT
- Title: Fast Perturbative Algorithm Configurators
- Authors: George T. Hall, Pietro Simone Oliveto, Dirk Sudholt
- Abstract summary: We prove a linear lower bound on the expected time to optimise any parameter tuning problem for ParamRLS and ParamILS.
We propose a harmonic mutation operator that provably tunes single-parameter algorithms in polylogarithmic time for unimodal and approximately unimodal parameter spaces.
An experimental analysis confirms the superiority of the approach in practice for a number of configuration scenarios.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown that the ParamRLS and ParamILS algorithm configurators
can tune some simple randomised search heuristics for standard benchmark
functions in linear expected time in the size of the parameter space. In this
paper we prove a linear lower bound on the expected time to optimise any
parameter tuning problem for ParamRLS, ParamILS as well as for larger classes
of algorithm configurators. We propose a harmonic mutation operator for
perturbative algorithm configurators that provably tunes single-parameter
algorithms in polylogarithmic time for unimodal and approximately unimodal
(i.e., non-smooth, rugged with an underlying gradient towards the optimum)
parameter spaces. It is suitable as a general-purpose operator since even on
worst-case (e.g., deceptive) landscapes it is only by at most a logarithmic
factor slower than the default ones used by ParamRLS and ParamILS. An
experimental analysis confirms the superiority of the approach in practice for
a number of configuration scenarios, including ones involving more than one
parameter.
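The core idea is easy to state: instead of stepping to a uniformly chosen neighbouring parameter value, the harmonic operator draws a jump distance d with probability proportional to 1/d, so small steps dominate while occasional large jumps can still cross rugged regions. Intuitively, whatever the current distance to the optimal value, a jump of roughly that magnitude is drawn with probability on the order of 1/log(m) for a parameter range of size m, which is where the polylogarithmic tuning time comes from. A minimal sketch for a single integer parameter follows; the rejection-based boundary handling is our own assumption, not taken from the paper.

```python
import random

def harmonic_mutation(current, lo, hi, rng=random):
    """One harmonic mutation step on an integer parameter in [lo, hi] (hi > lo).

    Jump distances d in {1, ..., hi - lo} are drawn with probability
    proportional to 1/d; the direction is chosen uniformly.  Out-of-range
    moves are resampled, which is an assumption of this sketch.
    """
    max_dist = hi - lo
    weights = [1.0 / d for d in range(1, max_dist + 1)]
    while True:
        dist = rng.choices(range(1, max_dist + 1), weights=weights)[0]
        candidate = current + (dist if rng.random() < 0.5 else -dist)
        if lo <= candidate <= hi:
            return candidate
```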
Related papers
- Utilitarian Algorithm Configuration for Infinite Parameter Spaces [9.056433954813969]
Utilitarian algorithm configuration is a technique for automatically searching the parameter space of a given algorithm to optimize its performance.
We introduce COUP (Continuous, Optimistic Utilitarian Procrastination), which is designed to search infinite parameter spaces efficiently.
arXiv Detail & Related papers (2024-05-28T14:58:07Z)
- A Multi-objective Newton Optimization Algorithm for Hyper-Parameter Search [0.0]
The algorithm is applied to search the optimal probability threshold (a vector of eight parameters) for a multiclass object detection problem of a convolutional neural network.
The algorithm produces an overall higher true positive (TP) and lower false positive (FP) rates, as compared to using the default value of 0.5.
arXiv Detail & Related papers (2024-01-07T21:12:34Z)
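As a rough illustration of Newton-type hyper-parameter search as summarised in the entry above, here is a damped one-dimensional Newton iteration on a single decision threshold, with finite-difference derivatives since metrics such as TP/FP rates have no closed-form gradient. The function `f`, the damping factor, and the step sizes are our assumptions; the paper's multi-objective, eight-parameter setting is not reproduced.

```python
import numpy as np

def newton_threshold_search(f, t0, iters=20, h=1e-3, damping=0.5):
    """Maximise a scalar metric f over a threshold t in [0, 1] via damped Newton."""
    t = t0
    for _ in range(iters):
        g = (f(t + h) - f(t - h)) / (2 * h)          # first derivative estimate
        H = (f(t + h) - 2 * f(t) + f(t - h)) / h**2  # second derivative estimate
        if abs(H) < 1e-12:                           # flat curvature: stop
            break
        t -= damping * g / H                          # damped Newton step
        t = min(max(t, 0.0), 1.0)                     # keep the threshold valid
    return t
```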
- Theory-inspired Parameter Control Benchmarks for Dynamic Algorithm Configuration [32.055812915031666]
We show how to compute optimal parameter portfolios of a given size.
We extend this benchmark by analyzing optimal control policies that can select the parameters only from a given portfolio of possible values.
We demonstrate the usefulness of our benchmarks by analyzing the behavior of the DDQN reinforcement learning approach for dynamic algorithm configuration.
arXiv Detail & Related papers (2022-02-07T15:00:30Z)
- Continuation Path with Linear Convergence Rate [18.405645120971496]
Path-following algorithms are frequently used in composite optimization problems where a series of subproblems are solved sequentially.
We present a primal-dual analysis of the path-following algorithm and determine how accurately each subproblem should be solved to guarantee a linear convergence rate on a target problem.
Considering optimization with a sparsity-inducing penalty, we analyze the change of the active sets with respect to the regularization parameter.
arXiv Detail & Related papers (2021-12-09T18:42:13Z)
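The path-following idea in the entry above can be sketched concretely for a sparsity-inducing penalty: solve a sequence of Lasso subproblems for decreasing regularisation parameters, warm-starting each from the previous solution. The inner solver (plain ISTA) and the fixed iteration budget are our simplifications; the paper's contribution is precisely the analysis of how accurately each subproblem must be solved.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def continuation_lasso(X, y, lambdas, inner_iters=100):
    """Warm-started path following for 0.5*||X b - y||^2 + lam*||b||_1."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    path = []
    for lam in sorted(lambdas, reverse=True):  # anneal lambda towards the target
        for _ in range(inner_iters):           # ISTA on the current subproblem
            beta = soft_threshold(beta - step * X.T @ (X @ beta - y), step * lam)
        path.append((lam, beta.copy()))
    return path
```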
- Reducing the Variance of Gaussian Process Hyperparameter Optimization with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has an additional, previously unexplored benefit: it can simultaneously reduce variance at essentially negligible cost.
arXiv Detail & Related papers (2021-07-01T06:43:11Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
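A loose sketch of the idea summarised in the entry above: rather than optimising all ansatz parameters at once, run a sequence of optimisations, each over one block of parameters with the rest frozen. Here `objective` stands in for the measured expectation value of the circuit, and the block choice, optimiser, and omitted pruning step are simplifications of ours, not the paper's method.

```python
import numpy as np

def blockwise_training(objective, theta0, blocks, sweeps=3, lr=0.1, steps=25, eps=1e-4):
    """Optimise one block of parameters at a time via finite-difference gradient descent."""
    theta = np.array(theta0, dtype=float)
    for _ in range(sweeps):
        for block in blocks:                  # e.g. blocks = [[0, 1], [2, 3]]
            for _ in range(steps):
                grad = np.zeros_like(theta)
                for i in block:               # gradient w.r.t. the active block only
                    e = np.zeros_like(theta)
                    e[i] = eps
                    grad[i] = (objective(theta + e) - objective(theta - e)) / (2 * eps)
                theta -= lr * grad            # frozen coordinates have zero gradient
    return theta
```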
- Refined bounds for algorithm configuration: The knife-edge of dual class approximability [94.83809668933021]
We investigate how large a training set should be to ensure that a parameter's average metric performance over the training set is close to its expected future performance.
We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds.
We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science.
arXiv Detail & Related papers (2020-06-21T15:32:21Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
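For readers unfamiliar with the stationarity measure quoted in the entry above: the Moreau envelope is the standard near-stationarity certificate for weakly convex problems. A hedged reconstruction of the definition and the claimed rate (constants and exact conditions may differ from the paper):

```latex
\varphi_\lambda(x) \;=\; \min_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\lVert y - x \rVert^2 \Big\},
\qquad
\min_{s \le t}\; \mathbb{E}\,\lVert \nabla \varphi_\lambda(x_s) \rVert \;=\; \tilde{\mathcal{O}}\big(t^{-1/4}\big).
```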
- Analysis of the Performance of Algorithm Configurators for Search Heuristics with Global Mutation Operators [0.0]
ParamRLS can efficiently identify the optimal neighbourhood size to be used by local search.
We show that the simple ParamRLS-F can identify the optimal mutation rates even when using cutoff times that are considerably smaller than the expected optimisation time of the best parameter value for both problem classes.
arXiv Detail & Related papers (2020-04-09T12:42:30Z)
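Our reading of ParamRLS-F, the fitness-based configurator in the entry above, as a sketch: random local search over a one-dimensional parameter space, comparing neighbouring values by the best fitness the configured algorithm reaches within a cutoff budget. The hook `run_target(v, cutoff)` is hypothetical, and tie-breaking and the exact comparison protocol follow the cited papers only loosely.

```python
import random

def paramrls_f(run_target, values, cutoff, comparison_runs=5, iters=100, rng=random):
    """Local search over an ordered list of parameter values, cutoff-based comparison."""
    def score(v):
        # Noisy comparison: best fitness over several capped runs of the target.
        return max(run_target(v, cutoff) for _ in range(comparison_runs))
    current = rng.choice(values)
    for _ in range(iters):
        i = values.index(current)
        neighbours = [values[j] for j in (i - 1, i + 1) if 0 <= j < len(values)]
        challenger = rng.choice(neighbours)   # local (+/- 1 step) mutation
        if score(challenger) > score(current):
            current = challenger
    return current
```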
- Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization [73.38702974136102]
Various types of parameter restart schemes have been proposed for accelerated algorithms to improve their practical convergence rates.
In this paper, we propose a proximal gradient algorithm with momentum and flexible parameter restart for nonconvex, nonsmooth problems.
arXiv Detail & Related papers (2020-02-26T16:06:27Z)
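A generic instance of the momentum-plus-restart pattern the entry above refers to, using the standard gradient-based restart test: momentum is reset whenever the update direction disagrees with the direction of progress. The caller-supplied `grad` and `prox` and the fixed step size are our assumptions; the paper's specific APG-restart schemes and guarantees are not reproduced.

```python
import numpy as np

def apg_with_restart(grad, prox, x0, step, iters=500):
    """Accelerated proximal gradient (FISTA-style) with a gradient-based restart."""
    x_prev = x = np.asarray(x0, dtype=float)
    t_prev = t = 1.0
    for _ in range(iters):
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)    # momentum extrapolation
        x_next = prox(y - step * grad(y), step)         # proximal gradient step
        if np.dot(y - x_next, x_next - x) > 0:          # restart test
            t_prev = t = 1.0                            # reset momentum
        else:
            t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        x_prev, x = x, x_next
    return x
```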
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
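As a concrete example of a pivotal estimator of the kind the entry above studies: the square-root Lasso replaces the squared loss with its square root, which makes the theoretically optimal regularisation parameter independent of the noise level. This is our illustration of the general notion; the paper also treats smoothed and multitask variants.

```latex
\hat{\beta} \;\in\; \arg\min_{\beta}\;
\frac{\lVert y - X\beta \rVert_2}{\sqrt{n}} + \lambda \lVert \beta \rVert_1,
\qquad
\lambda \asymp \sqrt{\tfrac{\log d}{n}} \quad \text{(independent of the noise level } \sigma\text{)}.
```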
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.