Landscape Features in Single-Objective Continuous Optimization: Have We Hit a Wall in Algorithm Selection Generalization?
- URL: http://arxiv.org/abs/2501.17663v1
- Date: Wed, 29 Jan 2025 14:03:27 GMT
- Title: Landscape Features in Single-Objective Continuous Optimization: Have We Hit a Wall in Algorithm Selection Generalization?
- Authors: Gjorgjina Cenikj, Gašper Petelin, Moritz Seiler, Nikola Cenikj, Tome Eftimov
- Abstract summary: This study evaluates the generalizability of AS models based on different problem representations.
It considers the most widely used Exploratory Landscape Analysis features, as well as recently proposed Topological Landscape Analysis features.
- Score: 4.510532471907222
- Abstract: The process of identifying the most suitable optimization algorithm for a specific problem, referred to as algorithm selection (AS), entails training models that leverage problem landscape features to forecast algorithm performance. A significant challenge in this domain is ensuring that AS models can generalize effectively to novel, unseen problems. This study evaluates the generalizability of AS models based on different problem representations in the context of single-objective continuous optimization. In particular, it considers the most widely used Exploratory Landscape Analysis features, as well as recently proposed Topological Landscape Analysis features, and features based on deep learning, such as DeepELA, TransOptAS and Doe2Vec. Our results indicate that when presented with out-of-distribution evaluation data, none of the feature-based AS models outperform a simple baseline model, i.e., a Single Best Solver.
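The abstract's central comparison, a feature-based AS model against the Single Best Solver (SBS) baseline, can be illustrated with a minimal sketch. Everything here is synthetic and hypothetical: the features stand in for landscape features such as ELA, the performance matrix is random, and the 1-nearest-neighbour selector is a placeholder for the paper's actual regression-based models.

```python
# Hypothetical algorithm-selection setup: rows are problem instances,
# `X_*` are landscape features, and perf[i, j] is the performance
# (lower is better) of algorithm j on instance i.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_algos, n_feats = 200, 50, 5, 10
X_train = rng.normal(size=(n_train, n_feats))
X_test = rng.normal(size=(n_test, n_feats))
perf_train = rng.random(size=(n_train, n_algos))
perf_test = rng.random(size=(n_test, n_algos))

# Single Best Solver (SBS): the single algorithm with the best average
# training performance, applied unchanged to every test instance.
sbs = perf_train.mean(axis=0).argmin()
sbs_score = perf_test[:, sbs].mean()

# Feature-based selector (toy 1-NN rule): for each test instance, pick
# the algorithm that was best on the closest training instance.
best_train_algo = perf_train.argmin(axis=1)
nearest = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1).argmin(1)
as_score = perf_test[np.arange(n_test), best_train_algo[nearest]].mean()
```

The paper's finding corresponds to the case where, on out-of-distribution test instances, `as_score` fails to improve on `sbs_score`, which is why the SBS is the baseline any AS model must beat.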
Related papers
- Embedded feature selection in LSTM networks with multi-objective
evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - The Importance of Landscape Features for Performance Prediction of
Modular CMA-ES Variants [2.3823600586675724]
Recent studies show that supervised machine learning methods can predict algorithm performance using landscape features extracted from the problem instances.
We consider the modular CMA-ES framework and estimate how much each landscape feature contributes to the best regression models of algorithm performance.
arXiv Detail & Related papers (2022-04-15T11:55:28Z) - Explainable Landscape Analysis in Automated Algorithm Performance
Prediction [0.0]
We investigate the expressiveness of problem landscape features utilized by different supervised machine learning models in automated algorithm performance prediction.
The experimental results point out that the selection of the supervised ML method is crucial, since different supervised ML regression models utilize the problem landscape features differently.
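The claim that different regression models utilize the same landscape features differently can be sketched with a small, pure-NumPy experiment on synthetic data. Both the data and the permutation-importance measure here are illustrative assumptions, not the paper's actual protocol: importance is taken as the increase in mean squared error when one feature column is shuffled.

```python
# Two different regressors can attribute importance to the same
# (synthetic) landscape features very differently.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))        # stand-in landscape features
y = X[:, 0] + 0.8 * X[:, 1] ** 2     # target with a nonlinear term

def mse(model, X, y):
    return float(((model(X) - y) ** 2).mean())

def perm_importance(model, X, y):
    # Importance of feature j = MSE increase after shuffling column j.
    base = mse(model, X, y)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(mse(model, Xp, y) - base)
    return np.array(scores)

# Model 1: ordinary least squares (linear), fit via lstsq.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
linear = lambda Z: Z @ w

# Model 2: 1-nearest-neighbour regression (nonlinear).
def knn(Z):
    d = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return y[d.argmin(1)]

print("linear:", perm_importance(linear, X, y).round(2))
print("1-NN:  ", perm_importance(knn, X, y).round(2))
```

The linear model assigns almost no importance to the quadratic feature (its fitted coefficient is near zero, since the feature and its square are uncorrelated for standard-normal inputs), while the nearest-neighbour model relies on it heavily, mirroring the point that the choice of supervised ML method shapes which features matter.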
arXiv Detail & Related papers (2022-03-22T15:54:17Z) - RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching for an input that maximizes a black-box objective function, given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z) - Explainable Landscape-Aware Optimization Performance Prediction [0.0]
We investigate explainable landscape-aware regression models.
The contribution of each landscape feature to the prediction of the optimization algorithm performance is estimated on a global and local level.
The results show a proof of concept that different sets of features are important for different problem instances.
arXiv Detail & Related papers (2021-10-22T07:46:33Z) - Personalizing Performance Regression Models to Black-Box Optimization
Problems [0.755972004983746]
In this work, we propose a personalized regression approach for numerical optimization problems.
We also investigate the impact of selecting not a single regression model per problem, but personalized ensembles.
We test our approach on predicting the performance of numerical optimizations on the BBOB benchmark collection.
arXiv Detail & Related papers (2021-04-22T11:47:47Z) - Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions that lead to higher-quality decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Landscape-Aware Fixed-Budget Performance Regression and Algorithm
Selection for Modular CMA-ES Variants [1.0965065178451106]
We show that it is possible to achieve high-quality performance predictions with off-the-shelf supervised learning approaches.
We test this approach on a portfolio of very similar algorithms, which we choose from the family of modular CMA-ES algorithms.
arXiv Detail & Related papers (2020-06-17T13:34:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.