Personalizing Performance Regression Models to Black-Box Optimization Problems
- URL: http://arxiv.org/abs/2104.10999v1
- Date: Thu, 22 Apr 2021 11:47:47 GMT
- Title: Personalizing Performance Regression Models to Black-Box Optimization Problems
- Authors: Tome Eftimov, Anja Jankovic, Gorjan Popovski, Carola Doerr, Peter Korošec
- Abstract summary: In this work, we propose a personalized regression approach for numerical optimization problems.
We also investigate the impact of selecting not a single regression model per problem, but personalized ensembles.
We test our approach on predicting the performance of numerical optimization heuristics on the BBOB benchmark collection.
- Score: 0.755972004983746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately predicting the performance of different optimization algorithms
for previously unseen problem instances is crucial for high-performing
algorithm selection and configuration techniques. In the context of numerical
optimization, supervised regression approaches built on top of exploratory
landscape analysis are becoming very popular. From the point of view of Machine
Learning (ML), however, the approaches are often rather naive, using default
regression or classification techniques without proper investigation of the
suitability of the ML tools. With this work, we bring to the attention of our
community the possibility to personalize regression models to specific types of
optimization problems. Instead of aiming for a single model that works well
across a whole set of possibly diverse problems, our personalized regression
approach acknowledges that different models may suit different types of
problems. Going one step further, we also investigate the impact of selecting
not a single regression model per problem, but personalized ensembles. We test
our approach on predicting the performance of numerical optimization heuristics
on the BBOB benchmark collection.
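To make the personalization idea concrete, here is a minimal sketch (assuming scikit-learn and synthetic stand-in data; the candidate models and ELA features are illustrative, not the authors' exact setup) of selecting a personalized regressor, or a small personalized ensemble, per problem:

```python
# Minimal sketch (not the authors' exact pipeline): for one problem, rank a
# pool of candidate regressors on ELA-style features by cross-validated MAE,
# then keep the single best model and a small personalized ensemble.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

CANDIDATES = {
    "ridge": Ridge(alpha=1.0),
    "tree": DecisionTreeRegressor(max_depth=5),
    "forest": RandomForestRegressor(n_estimators=100),
}

def personalize(X, y, top_k=2):
    """X: ELA features per instance, y: observed algorithm performance."""
    mae = {
        name: -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_absolute_error").mean()
        for name, model in CANDIDATES.items()
    }
    ranked = sorted(mae, key=mae.get)                 # lowest error first
    best = CANDIDATES[ranked[0]].fit(X, y)
    ensemble = VotingRegressor(
        [(name, CANDIDATES[name]) for name in ranked[:top_k]]).fit(X, y)
    return best, ensemble

# Synthetic stand-in for one problem's data (8 hypothetical ELA features).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)
best_model, personalized_ensemble = personalize(X, y)
```

In the paper's setting, this selection would be repeated per BBOB problem (or problem group), with the performance measure and validation protocol taken from the experimental setup there.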
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
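For context, an OWA objective aggregates a cost vector by sorting it and weighting the components by rank; the sketch below is a generic illustration of that aggregation, not code from the cited paper:

```python
# Illustrative only: Ordered Weighted Averaging (OWA) of a cost vector.
# With decreasing weights, the largest costs dominate, which is what makes
# OWA a fairness-oriented (and nondifferentiable) objective.
import numpy as np

def owa(costs, weights):
    """OWA(c) = sum_i w_i * c_(i), with c sorted in decreasing order."""
    c = np.sort(np.asarray(costs, dtype=float))[::-1]   # largest cost first
    return float(np.dot(weights, c))

weights = np.array([0.5, 0.3, 0.2])   # monotone non-increasing weights
print(owa([4.0, 1.0, 7.0], weights))  # 0.5*7 + 0.3*4 + 0.2*1 = 4.9
```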
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - The Importance of Landscape Features for Performance Prediction of
Modular CMA-ES Variants [2.3823600586675724]
Recent studies show that supervised machine learning methods can predict algorithm performance using landscape features extracted from the problem instances.
We consider the modular CMA-ES framework and estimate how much each landscape feature contributes to the best algorithm performance regression models.
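One generic way to estimate such per-feature contributions is permutation importance; the sketch below uses synthetic stand-ins for landscape features and is an illustrative assumption, possibly different from the explainability technique used in the cited paper:

```python
# Sketch under stated assumptions: permutation importance of landscape-style
# features in a performance regression model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))   # 6 hypothetical landscape features
y = 3.0 * X[:, 2] + 0.5 * X[:, 4] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=200).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for idx in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f}")
```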
arXiv Detail & Related papers (2022-04-15T11:55:28Z) - Explainable Landscape Analysis in Automated Algorithm Performance
Prediction [0.0]
We investigate the expressiveness of problem landscape features utilized by different supervised machine learning models in automated algorithm performance prediction.
The experimental results point out that the selection of the supervised ML method is crucial, since different supervised ML regression models utilize the problem landscape features differently.
arXiv Detail & Related papers (2022-03-22T15:54:17Z) - A Nested Weighted Tchebycheff Multi-Objective Bayesian Optimization
Approach for Flexibility of Unknown Utopia Estimation in Expensive Black-box
Design Problems [0.0]
In existing work, a weighted Tchebycheff MOBO approach has been demonstrated which attempts to estimate the unknown utopia point when formulating the acquisition function.
We propose a nested weighted Tchebycheff MOBO framework where we build a regression model selection procedure from an ensemble of models.
arXiv Detail & Related papers (2021-10-16T00:44:06Z) - Conservative Objective Models for Effective Offline Model-Based
Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Robust priors for regularized regression [12.945710636153537]
Penalized regression approaches like ridge regression shrink toward zero, but zero weights are usually not a sensible prior.
Inspired by simple and robust decisions humans use, we constructed non-zero priors for penalized regression models.
Models with robust priors had excellent worst-case performance.
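Shrinking toward a non-zero prior has a simple closed form: replacing the usual penalty lam * ||w||^2 with lam * ||w - w0||^2 is equivalent to ordinary ridge regression on the shifted target y - X @ w0. A minimal sketch with a hypothetical equal-weights prior:

```python
# Minimal sketch: ridge regression that shrinks toward a non-zero prior w0.
# Solving min_w ||y - X w||^2 + lam * ||w - w0||^2 reduces to ridge on the
# residual target y - X w0, so the estimate is w0 plus a ridge correction.
import numpy as np

def ridge_with_prior(X, y, w0, lam=1.0):
    d = X.shape[1]
    delta = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ (y - X @ w0))
    return w0 + delta

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, 1.2, 0.9, 1.1]) + rng.normal(scale=0.1, size=100)
w0 = np.ones(4)                        # hypothetical equal-weights prior
print(ridge_with_prior(X, y, w0, lam=10.0))
```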
arXiv Detail & Related papers (2020-10-06T10:43:14Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Landscape-Aware Fixed-Budget Performance Regression and Algorithm
Selection for Modular CMA-ES Variants [1.0965065178451106]
We show that it is possible to achieve high-quality performance predictions with off-the-shelf supervised learning approaches.
We test this approach on a portfolio of very similar algorithms, which we choose from the family of modular CMA-ES algorithms.
arXiv Detail & Related papers (2020-06-17T13:34:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.