Explainable Landscape-Aware Optimization Performance Prediction
- URL: http://arxiv.org/abs/2110.11633v1
- Date: Fri, 22 Oct 2021 07:46:33 GMT
- Title: Explainable Landscape-Aware Optimization Performance Prediction
- Authors: Risto Trajanov and Stefan Dimeski and Martin Popovski and Peter Korošec and Tome Eftimov
- Abstract summary: We are investigating explainable landscape-aware regression models.
The contribution of each landscape feature to the prediction of the optimization algorithm performance is estimated on a global and local level.
The results show a proof of concept that different sets of features are important for different problem instances.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Efficient solving of an unseen optimization problem is closely tied to the appropriate selection of an optimization algorithm and its hyper-parameters. For this purpose, automated algorithm performance prediction is typically performed by training a supervised ML model on a set of problem landscape features. However, the main issue with such models is their limited explainability: they only provide information about the joint impact of the set of landscape features on the end prediction result. In this study, we investigate explainable landscape-aware regression models in which the contribution of each landscape feature to the predicted optimization algorithm performance is estimated on a global and a local level. The global level quantifies the impact of a feature across all benchmark problem instances, while the local level quantifies its impact on a specific problem instance. The experimental results are obtained using the COCO benchmark problems and three differently configured modular CMA-ES variants. The results provide a proof of concept that different sets of features are important for different problem instances, which indicates that further personalization of the landscape space is required when training an automated algorithm performance prediction model.
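The global/local contribution estimates described in the abstract are the kind of output produced by Shapley-value explanations on top of a performance regressor. A minimal sketch of that workflow, assuming the SHAP library and synthetic stand-ins for the landscape-feature matrix and the performance targets (the data and model choice are illustrative, not the paper's exact setup):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Illustrative stand-ins: rows are problem instances, columns are landscape
# (ELA) features; y is the performance of one algorithm configuration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 200 instances x 10 landscape features
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)    # one contribution per feature per instance

# Local level: feature contributions for a single problem instance.
print("instance 0 contributions:", shap_values[0])

# Global level: mean absolute contribution of each feature across all instances.
print("global importance:", np.abs(shap_values).mean(axis=0))
```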
Related papers
- Learning Loss Landscapes in Preference Optimization [39.15940594751445]
We present an empirical study investigating how specific properties of preference datasets, such as mixed-quality or noisy data, affect the performance of Preference Optimization (PO) algorithms.
Our experiments, conducted in MuJoCo environments, reveal several scenarios where state-of-the-art PO methods experience significant drops in performance.
Within this framework, we employ evolutionary strategies to discover new loss functions capable of handling the identified problematic scenarios.
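The loss-discovery step can be pictured as a black-box search over the parameters of a loss template. A hypothetical sketch using a simple (1+lambda) evolution strategy (the search space and the fitness stub are assumptions, not the authors' setup):

```python
import numpy as np

def es_search(fitness, dim=4, pop=16, sigma=0.1, iters=100, seed=0):
    """(1+lambda) evolution strategy: mutate the current loss parameters,
    keep the best candidate. `fitness` scores a parameter vector by the
    downstream performance of the induced loss (stubbed below)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=dim)
    best = fitness(theta)
    for _ in range(iters):
        cands = theta + sigma * rng.normal(size=(pop, dim))
        scores = np.array([fitness(c) for c in cands])
        if scores.max() > best:
            best, theta = scores.max(), cands[scores.argmax()]
    return theta

# Stub fitness: in the paper's setting this would train a PO method with the
# candidate loss and return policy performance; here just a toy quadratic.
theta = es_search(lambda th: -np.sum(th ** 2))
```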
arXiv Detail & Related papers (2024-11-10T19:11:48Z)
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
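One plausible reading of a normalized gradient-difference update, sketched below as an assumption rather than the authors' exact algorithm: normalize the retain and forget gradients so neither objective dominates the step, then descend on the retain term while ascending on the forget term.

```python
import torch

def ngdiff_step(params, retain_loss, forget_loss, lr=1e-3, eps=1e-12):
    """Hypothetical normalized-gradient-difference update for unlearning:
    descend on the retain objective, ascend on the forget objective, with
    both gradients rescaled to unit norm to control the trade-off."""
    g_retain = torch.autograd.grad(retain_loss, params, retain_graph=True)
    g_forget = torch.autograd.grad(forget_loss, params)
    norm_r = torch.sqrt(sum((g ** 2).sum() for g in g_retain)) + eps
    norm_f = torch.sqrt(sum((g ** 2).sum() for g in g_forget)) + eps
    with torch.no_grad():
        for p, gr, gf in zip(params, g_retain, g_forget):
            p -= lr * (gr / norm_r - gf / norm_f)
```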
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
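The OWA operator itself is compact: sort the objective values and take a weighted sum of the sorted vector, OWA_w(y) = sum_i w_i * y_(i); the sort is what makes the objective nondifferentiable. A small illustrative sketch (the fairness-oriented decreasing weights are an example choice, not the paper's):

```python
import numpy as np

def owa(y, w):
    """Ordered Weighted Averaging: weights apply to the sorted values of y
    rather than to fixed coordinates, so the aggregation is symmetric in
    the objectives."""
    return np.dot(np.sort(y), w)

# Decreasing weights emphasize the worst (smallest) outcomes when larger
# y is better, which is the fairness-oriented use of OWA.
y = np.array([0.9, 0.4, 0.7])
w = np.array([0.5, 0.3, 0.2])   # w_1 >= w_2 >= ..., sums to 1
print(owa(y, w))                # 0.5*0.4 + 0.3*0.7 + 0.2*0.9 = 0.59
```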
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
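Algorithm unrolling, the strategy this paper analyzes, means keeping every solver iteration on the autograd tape so the solution can be differentiated with respect to upstream predictions. A minimal PyTorch sketch on an illustrative quadratic (not the paper's folded-optimization system):

```python
import torch

def unrolled_solver(c, steps=50, lr=0.1):
    """Unrolled gradient descent on f(x) = (x - c)^2: each iterate is an
    autograd operation, so d(solution)/d(c) is available by backprop."""
    x = torch.zeros_like(c)
    for _ in range(steps):
        x = x - lr * 2.0 * (x - c)   # gradient step; stays on the tape
    return x

c = torch.tensor([3.0], requires_grad=True)  # e.g. a predicted parameter
x_star = unrolled_solver(c)
x_star.backward()
print(x_star.item(), c.grad.item())  # x* -> c, and dx*/dc -> 1 as steps grow
```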
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- The Importance of Landscape Features for Performance Prediction of Modular CMA-ES Variants [2.3823600586675724]
Recent studies show that supervised machine learning methods can predict algorithm performance using landscape features extracted from the problem instances.
We consider the modular CMA-ES framework and estimate how much each landscape feature contributes to the best algorithm performance regression models.
arXiv Detail & Related papers (2022-04-15T11:55:28Z)
- Explainable Landscape Analysis in Automated Algorithm Performance Prediction [0.0]
We investigate the expressiveness of problem landscape features utilized by different supervised machine learning models in automated algorithm performance prediction.
The experimental results point out that the selection of the supervised ML method is crucial, since different supervised ML regression models utilize the problem landscape features differently.
arXiv Detail & Related papers (2022-03-22T15:54:17Z)
- Personalizing Performance Regression Models to Black-Box Optimization Problems [0.755972004983746]
In this work, we propose a personalized regression approach for numerical optimization problems.
We also investigate the impact of selecting not a single regression model per problem, but personalized ensembles.
We test our approach by predicting the performance of optimization algorithms on the BBOB benchmark collection.
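A hypothetical sketch of the per-problem ensemble idea: train several regressors, then for each problem keep the ones with the lowest validation error and average their predictions (the selection rule and the model zoo here are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

def personalized_ensemble(models, X_val, y_val, X_test, k=2):
    """Keep the k models with the lowest validation error on this particular
    problem and average their predictions (illustrative selection rule)."""
    errors = [np.mean((m.predict(X_val) - y_val) ** 2) for m in models]
    best = np.argsort(errors)[:k]
    return np.mean([models[i].predict(X_test) for i in best], axis=0)

# Illustrative data: landscape features -> algorithm performance.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
models = [m.fit(X[:80], y[:80]) for m in
          (RandomForestRegressor(random_state=0), Ridge(), KNeighborsRegressor())]
pred = personalized_ensemble(models, X[80:], y[80:], X[80:])
```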
arXiv Detail & Related papers (2021-04-22T11:47:47Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Landscape-Aware Fixed-Budget Performance Regression and Algorithm Selection for Modular CMA-ES Variants [1.0965065178451106]
We show that it is possible to achieve high-quality performance predictions with off-the-shelf supervised learning approaches.
We test this approach on a portfolio of very similar algorithms, which we choose from the family of modular CMA-ES algorithms.
arXiv Detail & Related papers (2020-06-17T13:34:57Z)
- Stochastic batch size for adaptive regularization in deep network optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems in a deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
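The iteration structure described above can be sketched in a few lines, with quadratic local objectives and a Bernoulli participation model as illustrative assumptions: a random subset of agents takes a local step from the current global model, and the server averages the participants.

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents, dim, lr, p = 10, 3, 0.1, 0.5
targets = rng.normal(size=(num_agents, dim))   # each agent's local minimizer
w = np.zeros(dim)                              # global model

for t in range(200):
    active = rng.random(num_agents) < p        # random subset of available agents
    if not active.any():
        continue
    locals_ = []
    for i in np.where(active)[0]:
        grad = w - targets[i]                  # gradient of 0.5*||w - target_i||^2
        locals_.append(w - lr * grad)          # local update from the global model
    w = np.mean(locals_, axis=0)               # server aggregates participants
```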
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.