Towards Feature-Based Performance Regression Using Trajectory Data
- URL: http://arxiv.org/abs/2102.05370v1
- Date: Wed, 10 Feb 2021 10:19:13 GMT
- Title: Towards Feature-Based Performance Regression Using Trajectory Data
- Authors: Anja Jankovic and Tome Eftimov and Carola Doerr
- Abstract summary: Black-box optimization is a very active area of research, with many new algorithms being developed every year.
The variety of algorithms poses a meta-problem: which algorithm to choose for a given problem at hand?
Past research has shown that per-instance algorithm selection based on exploratory landscape analysis can be an efficient means to tackle this meta-problem.
- Score: 0.9281671380673306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Black-box optimization is a very active area of research, with many new
algorithms being developed every year. This variety is needed, on the one hand,
since different algorithms are most suitable for different types of
optimization problems. But the variety also poses a meta-problem: which
algorithm to choose for a given problem at hand? Past research has shown that
per-instance algorithm selection based on exploratory landscape analysis (ELA)
can be an efficient means to tackle this meta-problem. Existing approaches,
however, require the approximation of problem features based on a significant
number of samples, which are typically selected through uniform sampling or
Latin Hypercube Designs. The evaluation of these points is costly, and the
benefit of an ELA-based algorithm selection over a default algorithm must
therefore be significant in order to pay off. One could hope to bypass the
evaluations for the feature approximations by using the samples that a default
algorithm would anyway perform, i.e., by using the points of the default
algorithm's trajectory.
We analyze in this paper how well such an approach can work. Concretely, we
test how accurately trajectory-based ELA approaches can predict the final
solution quality of the CMA-ES after a fixed budget of function evaluations. We
observe that the loss of trajectory-based predictions can be surprisingly small
compared to the classical global sampling approach, if the remaining budget for
which solution quality shall be predicted is not too large. Feature selection,
in contrast, did not show any advantage in our experiments and rather led to
worsened prediction accuracy. The inclusion of CMA-ES state variables has only
a moderate effect on the prediction accuracy.
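To make the pipeline concrete, here is a minimal sketch of the general idea rather than the authors' exact experimental setup: run CMA-ES on a toy problem, compute a handful of cheap landscape-style features from the points the run has already evaluated (standing in for the full ELA feature set), and train an off-the-shelf regressor to predict the quality reached after the full budget. The package choices (`cma`, `scikit-learn`), the toy problems, the hand-rolled features, and the budgets are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: predict the final best fitness of a fixed-budget CMA-ES run
# from features computed on the points evaluated during the initial part of
# that same run (no extra function evaluations for feature approximation).
# Assumes the third-party packages `cma`, `numpy`, and `scikit-learn`.
import cma
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

DIM, INIT_BUDGET, TOTAL_BUDGET = 5, 250, 1000

def random_problem(seed):
    """Toy randomly shifted/scaled ellipsoid standing in for a benchmark instance."""
    r = np.random.default_rng(seed)
    shift = r.uniform(-4, 4, DIM)
    scales = 10 ** r.uniform(0, 2, DIM)
    return lambda x: float(np.sum(scales * (np.asarray(x) - shift) ** 2))

def trajectory_features(X, y):
    """A few cheap surrogates for ELA features, computed on trajectory samples."""
    X, y = np.asarray(X), np.asarray(y)
    lin_fit = LinearRegression().fit(X, y)          # linear meta-model quality
    dist_to_best = np.linalg.norm(X - X[np.argmin(y)], axis=1)
    return [np.mean(y), np.std(y), np.min(y),
            lin_fit.score(X, y),                    # ~ R^2 of a linear meta-model
            np.corrcoef(dist_to_best, y)[0, 1]]     # fitness-distance correlation

features, targets = [], []
for seed in range(60):                              # 60 toy "problem instances"
    f = random_problem(seed)
    es = cma.CMAEvolutionStrategy(DIM * [0.0], 2.0,
                                  {'maxfevals': TOTAL_BUDGET,
                                   'verbose': -9, 'seed': seed + 1})
    X_traj, y_traj, snapshot = [], [], None
    while not es.stop():
        xs = es.ask()
        fs = [f(x) for x in xs]
        es.tell(xs, fs)
        X_traj.extend(xs)
        y_traj.extend(fs)
        if snapshot is None and len(y_traj) >= INIT_BUDGET:
            snapshot = trajectory_features(X_traj, y_traj)  # features from trajectory
    if snapshot is None:                            # run stopped before the initial budget
        snapshot = trajectory_features(X_traj, y_traj)
    features.append(snapshot)
    targets.append(np.log10(es.result.fbest + 1e-12))       # fixed-budget target

r2 = cross_val_score(RandomForestRegressor(random_state=0),
                     np.array(features), np.array(targets), cv=5, scoring="r2")
print("cross-validated R^2 of the trajectory-based regression:", r2.mean())
```

Because the features are computed from the trajectory of the default run itself, they cost no function evaluations beyond those the algorithm performs anyway, which is exactly the bypass described in the abstract.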
Related papers
- DynamoRep: Trajectory-Based Population Dynamics for Classification of
Black-box Optimization Problems [0.755972004983746]
We propose a feature extraction method that describes the trajectories of optimization algorithms using simple statistics.
We demonstrate that the proposed DynamoRep features capture enough information to identify the problem class on which the optimization algorithm is running (a minimal sketch of this trajectory-statistics idea appears after the list of related papers below).
arXiv Detail & Related papers (2023-06-08T06:57:07Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem in machine learning.
In this work, we present the first trial on the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - Per-run Algorithm Selection with Warm-starting using Trajectory-based
Features [5.073358743426584]
Per-instance algorithm selection seeks to recommend, for a given problem instance, one or several suitable algorithms.
We propose an online algorithm selection scheme which we coin per-run algorithm selection.
We show that our approach outperforms static per-instance algorithm selection.
arXiv Detail & Related papers (2022-04-20T14:30:42Z) - Trajectory-based Algorithm Selection with Warm-starting [2.3823600586675724]
We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances.
We show promising performance of the trajectory-based per-run algorithm selection with warm-starting.
arXiv Detail & Related papers (2022-04-13T14:00:55Z) - Efficient and Differentiable Conformal Prediction with General Function
Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximate valid population coverage and near-optimal efficiency within class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Benchmarking Feature-based Algorithm Selection Systems for Black-box
Numerical Optimization [0.0]
Feature-based algorithm selection aims to automatically find the best one from a portfolio of optimization algorithms on an unseen problem.
This paper analyzes algorithm selection systems on the 24 noiseless black-box optimization benchmarking functions.
We show that the performance of algorithm selection systems can be significantly improved by using sequential least squares programming as a pre-solver.
arXiv Detail & Related papers (2021-09-17T07:17:40Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Run2Survive: A Decision-theoretic Approach to Algorithm Selection based
on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z) - Landscape-Aware Fixed-Budget Performance Regression and Algorithm
Selection for Modular CMA-ES Variants [1.0965065178451106]
We show that it is possible to achieve high-quality performance predictions with off-the-shelf supervised learning approaches.
We test this approach on a portfolio of very similar algorithms, which we choose from the family of modular CMA-ES algorithms.
arXiv Detail & Related papers (2020-06-17T13:34:57Z)
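As a companion to the DynamoRep entry above, the following is a hypothetical, minimal sketch of the trajectory-statistics idea (not the authors' implementation): summarize each CMA-ES run by simple per-generation statistics of the population and its fitness values, then use those summaries to identify the problem class. The two toy problem classes and all parameter choices are assumptions made for illustration.

```python
# Hedged sketch of trajectory-statistics features in the spirit of DynamoRep:
# describe a run by min/max/mean/std of each population coordinate and of the
# fitness values in every generation, then classify the problem with a random
# forest. Assumes the third-party packages `cma`, `numpy`, and `scikit-learn`.
import cma
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

DIM, GENS = 3, 15

def run_stats(f, seed):
    """Concatenate per-generation summary statistics of a short CMA-ES run on f."""
    es = cma.CMAEvolutionStrategy(DIM * [0.0], 2.0, {'verbose': -9, 'seed': seed + 1})
    stats = []
    for _ in range(GENS):
        xs = es.ask()
        fs = [f(x) for x in xs]
        es.tell(xs, fs)
        pop = np.column_stack([np.array(xs), fs])   # coordinates plus fitness column
        for col in pop.T:
            stats += [col.min(), col.max(), col.mean(), col.std()]
    return stats

# Two toy problem classes standing in for benchmark function classes.
sphere = lambda x: float(np.sum((np.asarray(x) - 1.5) ** 2))
rastrigin = lambda x: float(10 * DIM + np.sum(np.asarray(x) ** 2
                                              - 10 * np.cos(2 * np.pi * np.asarray(x))))

X = [run_stats(f, s) for f in (sphere, rastrigin) for s in range(30)]
y = [0] * 30 + [1] * 30                             # problem-class labels
acc = cross_val_score(RandomForestClassifier(random_state=0), np.array(X), y, cv=5)
print("problem-class identification accuracy from trajectory statistics:", acc.mean())
```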