Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised
Pretrained Transformers for Single- and Multi-Objective Continuous
Optimization Problems
- URL: http://arxiv.org/abs/2401.01192v1
- Date: Tue, 2 Jan 2024 12:41:17 GMT
- Title: Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised
Pretrained Transformers for Single- and Multi-Objective Continuous
Optimization Problems
- Authors: Moritz Vinzent Seiler and Pascal Kerschke and Heike Trautmann
- Abstract summary: We propose a hybrid approach, Deep-ELA, which combines (the benefits of) deep learning and ELA features.
Our proposed framework can either be used out-of-the-box for analyzing single- and multi-objective continuous optimization problems, or subsequently fine-tuned to various tasks.
- Score: 0.6138671548064356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many recent works, the potential of Exploratory Landscape Analysis (ELA)
features to numerically characterize, in particular, single-objective
continuous optimization problems has been demonstrated. These numerical
features provide the input for all kinds of machine learning tasks on
continuous optimization problems, ranging, among others, from High-level Property
Prediction to Automated Algorithm Selection and Automated Algorithm
Configuration. Without ELA features, analyzing and understanding the
characteristics of single-objective continuous optimization problems would
hardly be possible.
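
To make these numerical features concrete, here is a toy sketch, in plain NumPy rather than the reference flacco/pflacco implementations, of two classic ELA-style distribution features (skewness and kurtosis of sampled objective values) computed from a simple random design:

```python
import numpy as np

def toy_ela_distribution_features(f, dim=5, n_samples=250, seed=0):
    """Sample a continuous function and compute simple ELA-style
    distribution features: skewness and kurtosis of the y-values."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, size=(n_samples, dim))  # random design in [-5, 5]^dim
    y = np.apply_along_axis(f, 1, X)

    centered = y - y.mean()
    std = y.std()
    return {
        "ela_distr.skewness": float(np.mean(centered**3) / std**3),  # 3rd standardized moment
        "ela_distr.kurtosis": float(np.mean(centered**4) / std**4),  # 4th standardized moment
    }

# Example: the sphere function yields a clearly right-skewed y-distribution.
print(toy_ela_distribution_features(lambda x: np.sum(x**2)))
```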
Yet, despite their undisputed usefulness, ELA features suffer from several
drawbacks. These include, in particular, (1.) a strong correlation between
multiple features, as well as (2.) their very limited applicability to
multi-objective continuous optimization problems. As a remedy, recent works
proposed deep learning-based approaches as alternatives to ELA. In these works,
e.g., point-cloud transformers were used to characterize an optimization
problem's fitness landscape. However, these approaches require a large amount
of labeled training data.
Within this work, we propose a hybrid approach, Deep-ELA, which combines (the
benefits of) deep learning and ELA features. Specifically, we pre-trained four
transformers on millions of randomly generated optimization problems to learn
deep representations of the landscapes of continuous single- and
multi-objective optimization problems. Our proposed framework can either be
used out-of-the-box for analyzing single- and multi-objective continuous
optimization problems, or subsequently fine-tuned to various tasks focusing on
algorithm behavior and problem understanding.
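
In spirit, the framework is meant to be used like the following sketch: sample the landscape once, push the (X, y) point cloud through a frozen pretrained transformer, and use the resulting embedding wherever hand-crafted ELA features would go. All names here (the deep_ela module, load_pretrained, encode) are hypothetical placeholders for illustration, not the authors' published API:

```python
import numpy as np
# Hypothetical interface for illustration only; the actual Deep-ELA
# release may expose a different module name and API.
from deep_ela import load_pretrained  # assumed helper, not a confirmed import

def deep_features(f, dim, n_samples=512, seed=0):
    """Embed a sampled landscape with a frozen pretrained Deep-ELA encoder."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_samples, dim))  # point cloud in the search space
    y = np.apply_along_axis(f, 1, X)                  # objective value(s) per point

    encoder = load_pretrained("deep-ela/small")       # one of the four pretrained models (name assumed)
    return encoder.encode(X, y)                       # fixed-length landscape embedding

# The embedding can feed any downstream model, e.g. an automated
# algorithm selector: selector.predict(deep_features(problem, dim=10))
```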
Related papers
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
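A minimal sketch of that core idea, UCB-style selection over a finite pool of candidate weight vectors, where each vector's reward stands in for the scalarised return it achieved (the pool, reward model, and constants below are illustrative, not the paper's exact setup):

```python
import numpy as np

def ucb_pick(counts, means, t, c=2.0):
    """Pick the arm (weight vector) maximising mean reward + exploration bonus."""
    untried = np.flatnonzero(counts == 0)
    if untried.size:                                  # try every vector at least once
        return int(untried[0])
    return int(np.argmax(means + c * np.sqrt(np.log(t) / counts)))

# Toy loop: three candidate weight vectors over two objectives.
pool = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
counts, means = np.zeros(3), np.zeros(3)
rng = np.random.default_rng(0)
for t in range(1, 101):
    i = ucb_pick(counts, means, t)
    reward = pool[i] @ rng.uniform(size=2)            # stand-in for the scalarised return
    counts[i] += 1
    means[i] += (reward - means[i]) / counts[i]       # incremental mean update
```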
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
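For reference, an OWA objective applies a fixed weight vector to the sorted outcomes, which is exactly what makes it nondifferentiable; putting the largest weight on the worst outcome yields the fairness-oriented variant. A minimal sketch of the standard definition (not the paper's learning pipeline):

```python
import numpy as np

def owa(outcomes, weights):
    """Ordered Weighted Average: weights are applied to the *sorted* outcomes,
    so the function is piecewise linear and nondifferentiable at ties."""
    return float(np.dot(weights, np.sort(outcomes)))  # ascending: worst first when maximising

# Fair OWA over three outcomes: heaviest weight on the worst one.
print(owa(np.array([3.0, 1.0, 2.0]), np.array([0.6, 0.3, 0.1])))  # 0.6*1 + 0.3*2 + 0.1*3 = 1.5
```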
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- DiffVL: Scaling Up Soft Body Manipulation using Vision-Language Driven Differentiable Physics [69.6158232150048]
DiffVL is a method that enables non-expert users to communicate soft-body manipulation tasks.
We leverage large language models to translate task descriptions into machine-interpretable optimization objectives.
arXiv Detail & Related papers (2023-12-11T14:29:25Z)
- Multi-Objective Optimization for Sparse Deep Multi-Task Learning [0.0]
We present a Multi-Objective Optimization algorithm using a modified Weighted Chebyshev scalarization for training Deep Neural Networks (DNNs).
Our work aims to address the (economical and also ecological) sustainability issue of DNN models, with particular focus on Deep Multi-Task models.
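For context, the standard weighted Chebyshev scalarisation reduces a vector of task losses to the weighted maximum deviation from an ideal point; unlike a plain weighted sum, it can reach non-convex regions of the Pareto front. A sketch of the textbook form (the paper's modification is not reproduced here):

```python
import numpy as np

def weighted_chebyshev(losses, weights, ideal):
    """Standard weighted Chebyshev scalarisation:
    g(x) = max_i  w_i * |f_i(x) - z*_i|  for ideal point z*."""
    return float(np.max(weights * np.abs(losses - ideal)))

# Two task losses, equal weights, ideal point near the per-task minima.
print(weighted_chebyshev(np.array([0.8, 0.3]),
                         weights=np.array([0.5, 0.5]),
                         ideal=np.array([0.1, 0.1])))  # max(0.35, 0.10) = 0.35
```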
arXiv Detail & Related papers (2023-08-23T16:42:27Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
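A minimal illustration of plain algorithm unrolling, the strategy the paper analyses (its folded-optimization system itself is not reproduced here): run a fixed number of inner gradient steps inside the computation graph and differentiate through them with autodiff.

```python
import jax

def inner_solver(theta, steps=50, lr=0.1):
    """Unrolled gradient descent on f(x) = (x - theta)^2 + x^2,
    whose exact minimiser is x* = theta / 2."""
    grad_f = jax.grad(lambda x: (x - theta) ** 2 + x ** 2)
    x = 0.0
    for _ in range(steps):          # fixed unroll: every step stays in the graph
        x = x - lr * grad_f(x)
    return x

# Backward pass through the unrolled solver: d x*/d theta should be ~0.5.
print(jax.grad(inner_solver)(1.0))
```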
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- A Study of Scalarisation Techniques for Multi-Objective QUBO Solving [0.0]
Quantum and quantum-inspired optimisation algorithms have shown promising performance when applied to academic benchmarks as well as real-world problems.
However, QUBO solvers are single-objective solvers. To make them more efficient at solving problems with multiple objectives, a decision on how to convert such multi-objective problems to single-objective problems needs to be made.
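The simplest such conversion is a weighted sum of the per-objective QUBO matrices, which is again a QUBO and can be handed to any single-objective solver; a toy sketch with illustrative weights (the paper compares several scalarisation schemes, not only this one):

```python
import numpy as np

def scalarise_qubo(Q_list, weights):
    """Weighted-sum scalarisation: x^T (sum_i w_i Q_i) x is again a QUBO."""
    return sum(w * Q for w, Q in zip(weights, Q_list))

def qubo_energy(Q, x):
    return float(x @ Q @ x)

# Two random 3-variable objectives merged with weights (0.7, 0.3);
# brute-force the 2^3 binary assignments of the combined problem.
rng = np.random.default_rng(0)
Q1, Q2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
Q = scalarise_qubo([Q1, Q2], [0.7, 0.3])
best = min(((qubo_energy(Q, np.array(b)), b) for b in np.ndindex(2, 2, 2)),
           key=lambda t: t[0])
print(best)  # (lowest combined energy, minimising bit vector)
```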
arXiv Detail & Related papers (2022-10-20T14:54:37Z)
- Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that, for weakly convex objectives and under mild conditions, the method converges globally.
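For context, the proximal operator of a function f maps a point v to argmin_x f(x) + (1/2)||x - v||^2; for the L1 norm it has the classic closed-form soft-thresholding solution, shown below as a minimal example of the kind of object the paper learns across whole problem families:

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of f(x) = lam * ||x||_1, i.e.
    argmin_x  lam * ||x||_1 + 0.5 * ||x - v||^2  (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

print(prox_l1(np.array([1.5, -0.2, 0.7]), lam=0.5))  # -> [1.0, 0.0, 0.2] (up to signed zero)
```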
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is so general that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
Our proposed framework features faster convergence and stronger resilience to various Pareto-front (PF) shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- Empirical Study on the Benefits of Multiobjectivization for Solving Single-Objective Problems [0.0]
Local optima often prevent algorithms from making progress and thus pose a severe threat.
With the use of a sophisticated visualization technique based on the multi-objective gradients, the properties of the arising multi-objective landscapes are illustrated and examined.
We empirically show that the multi-objective gradient sliding algorithm MOGSA is able to exploit these properties to overcome local traps.
arXiv Detail & Related papers (2020-06-25T14:04:37Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)