Explainable Landscape Analysis in Automated Algorithm Performance
Prediction
- URL: http://arxiv.org/abs/2203.11828v1
- Date: Tue, 22 Mar 2022 15:54:17 GMT
- Title: Explainable Landscape Analysis in Automated Algorithm Performance
Prediction
- Authors: Risto Trajanov and Stefan Dimeski and Martin Popovski and Peter
Korošec and Tome Eftimov
- Abstract summary: We investigate the expressiveness of problem landscape features utilized by different supervised machine learning models in automated algorithm performance prediction.
The experimental results point out that the selection of the supervised ML method is crucial, since different supervised ML regression models utilize the problem landscape features differently.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Predicting the performance of an optimization algorithm on a new problem
instance is crucial in order to select the most appropriate algorithm for
solving that problem instance. For this purpose, recent studies learn a
supervised machine learning (ML) model using a set of problem landscape
features linked to the performance achieved by the optimization algorithm.
However, these models are black boxes whose only goal is good predictive
performance; they do not explain which landscape features contribute most
to the predicted performance of the optimization algorithm. In this
study, we investigate the expressiveness of
problem landscape features utilized by different supervised ML models in
automated algorithm performance prediction. The experimental results point out
that the selection of the supervised ML method is crucial, since different
supervised ML regression models utilize the problem landscape features
differently and there is no common pattern with regard to which landscape
features are the most informative.
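The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual code: the landscape feature names are placeholders loosely modeled on common ELA-style features, the data is synthetic, and permutation importance stands in for whatever explainability method the authors use.

```python
# Hedged sketch: train a supervised regression model on problem landscape
# features to predict algorithm performance, then inspect which features
# the model actually relies on. Feature names and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["ela_meta_lin_r2", "disp_ratio", "ic_eps_max", "nbc_dist_ratio"]

# Synthetic "landscape features" for 200 problem instances; the target
# performance depends mostly on the first two features, plus noise.
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: the drop in test score when one feature column
# is shuffled. Informative features cause a large drop.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, mean in sorted(zip(feature_names, imp.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name}: {mean:.3f}")
```

Swapping `RandomForestRegressor` for another regressor and comparing the resulting importance rankings mirrors the paper's core observation: different supervised ML models can rank the same landscape features very differently.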
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- A Survey of Meta-features Used for Automated Selection of Algorithms for Black-box Single-objective Continuous Optimization [4.173197621837912]
We conduct an overview of the key contributions to algorithm selection in the field of single-objective continuous black-box optimization.
We study machine learning models for automated algorithm selection, configuration, and performance prediction.
arXiv Detail & Related papers (2024-06-08T11:11:14Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
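The nondifferentiability of OWA objectives comes from the sorting step in their definition, which can be made concrete with a few lines of code. This is a standard textbook definition, not code from the paper; the "fair" convention of assigning larger weights to worse outcomes is an assumption here.

```python
# Hedged sketch of an Ordered Weighted Averaging (OWA) objective: weights
# are applied to the *sorted* components of the outcome vector, which is
# what makes the objective piecewise-linear and nondifferentiable.
import numpy as np

def owa(y, w):
    # Fair OWA convention: largest weight pairs with the worst (smallest)
    # outcome, so improving the worst-off component helps the most.
    y_sorted = np.sort(y)               # ascending: worst outcome first
    w_sorted = np.sort(w)[::-1]         # descending: largest weight first
    return float(np.dot(w_sorted, y_sorted))

print(owa(np.array([3.0, 1.0, 2.0]), np.array([0.2, 0.3, 0.5])))
```

Because which component is "worst" changes as the outcomes change, the gradient is undefined wherever two components tie, which is why integrating OWA objectives into gradient-based prediction pipelines requires the kind of specialized treatment the paper develops.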
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
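Algorithm unrolling, as summarized above, can be illustrated in a few lines: an iterative solver is written as ordinary code with a fixed number of steps, and automatic differentiation backpropagates through every iteration. This toy example uses JAX and a trivial quadratic inner problem; it is an illustration of the general technique, not the paper's folded-optimization method.

```python
# Hedged illustration of algorithm unrolling: differentiate through a
# fixed number of gradient-descent steps of an inner solver using JAX.
import jax
import jax.numpy as jnp

def inner_solver(theta, x0, steps=20, lr=0.1):
    # Minimize f(x) = 0.5 * (x - theta)**2 by unrolled gradient descent;
    # the loop body is plain code, so autodiff can trace through it.
    x = x0
    for _ in range(steps):
        x = x - lr * (x - theta)    # gradient of f with respect to x
    return x

def outer_loss(theta):
    x_star = inner_solver(theta, x0=0.0)
    return (x_star - 3.0) ** 2      # want the solver's output to equal 3

# Backpropagation unrolls through all 20 solver iterations.
grad = jax.grad(outer_loss)(1.0)
print(grad)
```

The cost of this strategy is that the backward pass stores and traverses every solver iteration, which is exactly the inefficiency that analytical treatments of the backward pass, like the one in this paper, aim to remove.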
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- The Importance of Landscape Features for Performance Prediction of Modular CMA-ES Variants [2.3823600586675724]
Recent studies show that supervised machine learning methods can predict algorithm performance using landscape features extracted from the problem instances.
We consider the modular CMA-ES framework and estimate how much each landscape feature contributes to the best algorithm performance regression models.
arXiv Detail & Related papers (2022-04-15T11:55:28Z)
- Explainable Landscape-Aware Optimization Performance Prediction [0.0]
We are investigating explainable landscape-aware regression models.
The contribution of each landscape feature to the prediction of the optimization algorithm performance is estimated on a global and local level.
The results show a proof of concept that different sets of features are important for different problem instances.
arXiv Detail & Related papers (2021-10-22T07:46:33Z)
- Personalizing Performance Regression Models to Black-Box Optimization Problems [0.755972004983746]
In this work, we propose a personalized regression approach for numerical optimization problems.
We also investigate the impact of selecting not a single regression model per problem, but personalized ensembles.
We test our approach on predicting the performance of numerical optimizations on the BBOB benchmark collection.
arXiv Detail & Related papers (2021-04-22T11:47:47Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions that account for the downstream decision-making task.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Landscape-Aware Fixed-Budget Performance Regression and Algorithm Selection for Modular CMA-ES Variants [1.0965065178451106]
We show that it is possible to achieve high-quality performance predictions with off-the-shelf supervised learning approaches.
We test this approach on a portfolio of very similar algorithms, which we choose from the family of modular CMA-ES algorithms.
arXiv Detail & Related papers (2020-06-17T13:34:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.