Linear Matrix Factorization Embeddings for Single-objective Optimization Landscapes
- URL: http://arxiv.org/abs/2009.14506v1
- Date: Wed, 30 Sep 2020 08:46:54 GMT
- Title: Linear Matrix Factorization Embeddings for Single-objective Optimization Landscapes
- Authors: Tome Eftimov, Gorjan Popovski, Quentin Renau, Peter Korosec, Carola Doerr
- Abstract summary: In black-box optimization, features have to be derived from a set of $(x,f(x))$ samples.
We show that a linear dimensionality reduction via matrix factorization significantly contributes towards a better detection of correlation between different problem instances.
- Score: 0.755972004983746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated per-instance algorithm selection and configuration have shown promising performance for a number of classic optimization problems, including
satisfiability, AI planning, and TSP. The techniques often rely on a set of
features that measure some characteristics of the problem instance at hand. In
the context of black-box optimization, these features have to be derived from a
set of $(x,f(x))$ samples. A number of different features have been proposed in
the literature, measuring, for example, the modality, the separability, or the
ruggedness of the instance at hand. Several of the commonly used features,
however, are highly correlated. While state-of-the-art machine learning
techniques can routinely filter such correlations, they hinder explainability
of the derived algorithm design techniques.
We therefore propose in this work to pre-process the measured (raw) landscape
features through representation learning. More precisely, we show that a linear
dimensionality reduction via matrix factorization significantly contributes
towards a better detection of correlation between different problem instances
-- a key prerequisite for successful automated algorithm design.
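As a concrete illustration of the proposed pre-processing, the sketch below embeds raw landscape features of several instances into a low-dimensional space via a linear matrix factorization and measures similarity between instances there. It is a minimal sketch, assuming synthetic placeholder features and TruncatedSVD as the factorization; the paper's exact pipeline may differ.
```python
# Minimal sketch: linear matrix factorization of (raw) landscape features.
# The random matrix stands in for real measured features; the factorization
# and similarity measure are illustrative choices, not the paper's exact ones.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_instances, n_features = 24, 50                 # e.g., instances x landscape features
X = rng.normal(size=(n_instances, n_features))   # placeholder for measured features

X_std = StandardScaler().fit_transform(X)        # features live on very different scales
svd = TruncatedSVD(n_components=5, random_state=0)
Z = svd.fit_transform(X_std)                     # low-dimensional instance embeddings

# Similarity between instances in embedding space: high values suggest
# instances that should behave alike under the same algorithm.
S = cosine_similarity(Z)
print(S.shape, S[0, 1])
```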
Related papers
- Interpretable Linear Dimensionality Reduction based on Bias-Variance Analysis [45.3190496371625]
We propose a principled dimensionality reduction approach that maintains the interpretability of the resulting features.
In this way, all features are taken into account, the dimensionality is reduced, and interpretability is preserved.
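The appeal of linear reductions for interpretability can be illustrated generically (this is not the paper's bias-variance-based method): every reduced feature is an explicit weighted sum of the original ones, so its loadings can be read off directly. Feature names below are hypothetical.
```python
# Generic illustration: with a linear reduction, each new feature is an
# explicit weighted sum of the originals, so the weights stay inspectable.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
names = [f"feat_{i}" for i in range(6)]   # hypothetical feature names

pca = PCA(n_components=2).fit(X)
for k, comp in enumerate(pca.components_):
    top = np.argsort(np.abs(comp))[::-1][:3]          # largest loadings first
    print(f"component {k}: " + ", ".join(f"{names[i]}*{comp[i]:+.2f}" for i in top))
```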
arXiv Detail & Related papers (2023-03-26T14:30:38Z)
- An evaluation framework for dimensionality reduction through sectional curvature [59.40521061783166]
In this work, we introduce the first fully unsupervised performance metric for dimensionality reduction.
To test its feasibility, the metric is used to evaluate the most commonly used dimensionality reduction algorithms.
A new parameterized problem-instance generator, in the form of a function generator, is also constructed.
arXiv Detail & Related papers (2023-03-17T11:59:33Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
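A minimal sketch of what unrolling and its backward pass look like, on a toy inner problem whose analytic gradient is known (the paper's folded-optimization system itself is not reproduced here):
```python
# Toy illustration of algorithm unrolling: the inner problem
# min_x 0.5*x^2 - theta*x (solution x* = theta) is solved by T unrolled
# gradient-descent steps, and dL/dtheta is obtained by reverse-mode
# differentiation through those steps by hand.
import numpy as np

alpha, T, theta, target = 0.3, 20, 2.0, 5.0

# Forward: unroll x_{t+1} = x_t - alpha * (x_t - theta).
xs = [0.0]
for _ in range(T):
    xs.append(xs[-1] - alpha * (xs[-1] - theta))
L = 0.5 * (xs[-1] - target) ** 2          # outer loss on the final iterate

# Backward: d x_{t+1} / d x_t = 1 - alpha, d x_{t+1} / d theta = alpha.
gx, gtheta = xs[-1] - target, 0.0
for _ in range(T):
    gtheta += gx * alpha
    gx *= (1.0 - alpha)

# Analytic check: x_T = theta + (1-alpha)^T * (x_0 - theta), hence
# dL/dtheta = (x_T - target) * (1 - (1-alpha)^T).
expected = (xs[-1] - target) * (1.0 - (1.0 - alpha) ** T)
print(gtheta, expected)
```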
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
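For reference, a schematic of the underlying protocol, plain Q-learning with linear function approximation on a hypothetical two-state MDP (the paper's exploration variant and its stability guarantees are not reproduced):
```python
# Schematic Q-learning with a linear architecture; the toy MDP, features,
# and hyperparameters are all hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions, d = 2, 2, 4
phi = rng.normal(size=(n_states, n_actions, d))   # fixed feature map phi(s, a)
w = np.zeros(d)                                   # Q(s, a) ~ phi(s, a) @ w
gamma, lr, eps = 0.9, 0.05, 0.1

def step(s, a):
    # Hypothetical dynamics: action 0 keeps the state, action 1 flips it;
    # reward 1 only for taking action 1 in state 0.
    return (s if a == 0 else 1 - s), float(s == 0 and a == 1)

s = 0
for _ in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(phi[s] @ w))
    s2, r = step(s, a)
    td = r + gamma * np.max(phi[s2] @ w) - phi[s, a] @ w   # TD error
    w += lr * td * phi[s, a]                               # semi-gradient update
    s = s2

print("learned Q-values:", phi.reshape(-1, d) @ w)
```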
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Learning for Spatial Branching: An Algorithm Selection Approach [0.0]
We develop a learning framework for branching and show its efficacy in the context of non-linear optimization problems.
The proposed learning is performed offline, based on instance-specific features and with no computational overhead when solving new instances.
Experiments on different benchmark instances from the literature show that the learning-based branching rule significantly outperforms the standard rules.
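A generic sketch of this pattern (not the paper's spatial-branching framework): a classifier is trained offline on instance features labeled with the branching rule that worked best, then queried once per new instance at solve time. The features and labels below are synthetic.
```python
# Offline learning of a branching rule: train once on labeled instance
# features, then apply with negligible overhead when solving new instances.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
F = rng.normal(size=(200, 8))                      # instance-specific features
best_rule = (F[:, 0] + F[:, 3] > 0).astype(int)    # synthetic "best rule" labels

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(F, best_rule)

new_instance = rng.normal(size=(1, 8))
print("branching rule to use:", clf.predict(new_instance)[0])  # O(depth) at solve time
```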
arXiv Detail & Related papers (2022-04-22T17:23:43Z)
- The Importance of Landscape Features for Performance Prediction of Modular CMA-ES Variants [2.3823600586675724]
Recent studies show that supervised machine learning methods can predict algorithm performance using landscape features extracted from the problem instances.
We consider the modular CMA-ES framework and estimate how much each landscape feature contributes to the best algorithm performance regression models.
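A minimal sketch of such a contribution estimate, using permutation importance on synthetic data (the paper may use a different attribution method):
```python
# Estimating how much each landscape feature contributes to a
# performance-regression model. Features and targets are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))                            # landscape features
y = 2 * X[:, 0] - X[:, 4] + 0.1 * rng.normal(size=300)    # performance proxy

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(f"feature {i}: importance {imp.importances_mean[i]:.3f}")
```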
arXiv Detail & Related papers (2022-04-15T11:55:28Z)
- Explainable Landscape Analysis in Automated Algorithm Performance Prediction [0.0]
We investigate the expressiveness of problem landscape features utilized by different supervised machine learning models in automated algorithm performance prediction.
The experimental results indicate that the choice of supervised ML method is crucial, since different regression models utilize the problem landscape features differently.
arXiv Detail & Related papers (2022-03-22T15:54:17Z)
- Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose Feature-weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns feature weights adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
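A minimal sketch of weighted NMF via multiplicative updates with *fixed* per-feature weights; FNMF's adaptive learning of the weights is omitted here:
```python
# Weighted NMF by multiplicative updates: minimize || sqrt(A) * (X - WH) ||_F,
# where A broadcasts a per-feature importance weight over the data matrix.
import numpy as np

rng = np.random.default_rng(5)
X = np.abs(rng.normal(size=(30, 12)))     # nonnegative data, 12 features
a = rng.uniform(0.1, 1.0, size=12)        # fixed per-feature importance weights
A = np.tile(a, (30, 1))                   # broadcast to the shape of X
k, eps = 4, 1e-9

W = np.abs(rng.normal(size=(30, k)))
H = np.abs(rng.normal(size=(k, 12)))
for _ in range(200):
    H *= (W.T @ (A * X)) / (W.T @ (A * (W @ H)) + eps)
    W *= ((A * X) @ H.T) / ((A * (W @ H)) @ H.T + eps)

err = np.linalg.norm(np.sqrt(A) * (X - W @ H))
print("weighted reconstruction error:", round(float(err), 4))
```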
arXiv Detail & Related papers (2021-03-24T21:17:17Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters often requires learning a predictive model for those parameters and then solving the problem using the predicted values.
Recent work has shown that including the optimization problem as a layer in the training pipeline yields predictions that lead to higher-quality decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
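A generic sketch of the surrogate idea under simplifying assumptions (random linear projection, polynomial-ridge surrogate, synthetic objective; not the paper's quality-aware training):
```python
# Sample an expensive high-dimensional objective, fit a surrogate on a
# low-dimensional linear projection of the inputs, then optimize the
# cheap surrogate instead of the original problem.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
d, k = 50, 3
P = rng.normal(size=(d, k)) / np.sqrt(d)           # random projection to k dims

def f(x):                                          # stand-in expensive objective
    return float(np.sum((x - 0.5) ** 2))

X = rng.uniform(-1, 1, size=(400, d))
y = np.array([f(x) for x in X])

surrogate = make_pipeline(PolynomialFeatures(2), Ridge(alpha=1.0))
surrogate.fit(X @ P, y)                            # fit in the reduced space

res = minimize(lambda z: float(surrogate.predict(z.reshape(1, -1))[0]),
               x0=np.zeros(k), method="Nelder-Mead")
print("surrogate optimum in the reduced space:", res.x)
```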
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Landscape-Aware Fixed-Budget Performance Regression and Algorithm Selection for Modular CMA-ES Variants [1.0965065178451106]
We show that it is possible to achieve high-quality performance predictions with off-the-shelf supervised learning approaches.
We test this approach on a portfolio of very similar algorithms, which we choose from the family of modular CMA-ES algorithms.
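A sketch of this selection pipeline with off-the-shelf regressors; the features, performance values, and three-member portfolio below are synthetic stand-ins for modular CMA-ES variants:
```python
# Landscape-aware algorithm selection: one regressor per portfolio member
# predicts its fixed-budget performance from landscape features, and the
# predicted-best variant is selected for a new instance.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 12))                     # landscape features per instance
perf = np.stack([X[:, j] + 0.1 * rng.normal(size=200) for j in range(3)], axis=1)

models = [GradientBoostingRegressor(random_state=0).fit(X, perf[:, j]) for j in range(3)]

x_new = rng.normal(size=(1, 12))
preds = [m.predict(x_new)[0] for m in models]
print("selected variant:", int(np.argmin(preds)))  # lower predicted value = better
```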
arXiv Detail & Related papers (2020-06-17T13:34:57Z)
- Discovering Representations for Black-box Optimization [73.59962178534361]
We show that black-box optimization encodings can be automatically learned, rather than hand designed.
We show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than the standard MAP-Elites.
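A compact MAP-Elites sketch in which candidates live in a low-dimensional latent space and a random linear decoder stands in for the learned representation (the paper learns this mapping; here it is a placeholder, as are the fitness and descriptor functions):
```python
# MAP-Elites over a learned-style encoding: mutate in latent space, decode
# to a genome, and keep the best solution per behavior-descriptor cell.
import numpy as np

rng = np.random.default_rng(8)
latent_dim, genome_dim, n_cells = 2, 20, 10
D = rng.normal(size=(latent_dim, genome_dim))       # placeholder decoder

def fitness(g):    return -float(np.sum(g ** 2))    # maximize
def descriptor(g): return float(np.tanh(g[0]))      # behavior in [-1, 1]

archive = {}  # cell index -> (fitness, latent)
for _ in range(2000):
    if archive and rng.random() < 0.9:
        # Mutate an existing elite in the low-dimensional latent space.
        z = archive[rng.choice(list(archive))][1] + 0.1 * rng.normal(size=latent_dim)
    else:
        z = rng.normal(size=latent_dim)
    g = z @ D                                       # decode latent to genome
    cell = min(int((descriptor(g) + 1) / 2 * n_cells), n_cells - 1)
    if cell not in archive or fitness(g) > archive[cell][0]:
        archive[cell] = (fitness(g), z)

print("filled cells:", len(archive), "best fitness:", max(f for f, _ in archive.values()))
```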
arXiv Detail & Related papers (2020-03-09T20:06:20Z)