Improving Automated Algorithm Selection by Advancing Fitness Landscape Analysis
- URL: http://arxiv.org/abs/2312.03105v1
- Date: Tue, 5 Dec 2023 19:53:25 GMT
- Title: Improving Automated Algorithm Selection by Advancing Fitness Landscape Analysis
- Authors: Raphael Patrick Prager
- Abstract summary: I identify and address current issues with the body of work to strengthen the foundation that future work builds upon.
The rise of deep learning offers ample opportunities for automated algorithm selection.
I propose a method to extend the generation of informative inputs to other problem types.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimization is ubiquitous in our daily lives. In the past, (sub-)optimal
solutions to any problem have been derived by trial and error, sheer luck, or
the expertise of knowledgeable individuals. In our contemporary age, there
thankfully exists a plethora of different algorithms that can find solutions
more reliably than ever before. Yet, choosing an appropriate algorithm for any
given problem is challenging in itself. The field of automated algorithm
selection provides various approaches to tackle this latter problem. This is
done by delegating the selection of a suitable algorithm for a given problem to
a complex computer model. This computer model is generated through the use of
Artificial Intelligence. Many of these computer models rely on some sort of
information about the problem to make a reasonable selection. Various methods
exist to provide this informative input to the computer model in the form of
numerical data.
In this cumulative dissertation, I propose several improvements to the
different variants of informative inputs. This in turn enhances and refines the
current state-of-the-art of automated algorithm selection. Specifically, I
identify and address current issues with the existing body of work to
strengthen the foundation that future work builds upon. Furthermore, the rise
of deep learning offers ample opportunities for automated algorithm selection.
In several joint works, my colleagues and I developed and evaluated several
methods that replace the existing techniques for extracting an informative
input. Lastly, automated algorithm selection approaches have been restricted to
certain types of problems. I propose a method to extend the generation of
informative inputs to other problem types and provide an outlook on further
promising research directions.
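The "informative inputs" mentioned above are typically numerical landscape features computed from samples of the objective function, which are then fed to a machine-learning model that predicts a suitable algorithm. The sketch below only illustrates that general pipeline; the three features, the benchmark functions, and the "best algorithm" labels are hypothetical placeholders, not the dissertation's actual feature set or selector.

```python
# Minimal sketch of feature-based automated algorithm selection.
# All features, functions, and labels below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def landscape_features(fun, dim, n_samples=200, seed=0):
    """Summarise a problem by a few cheap sample statistics
    (a stand-in for a full exploratory landscape analysis)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n_samples, dim))
    y = np.apply_along_axis(fun, 1, X)
    dist_to_best = np.linalg.norm(X - X[np.argmin(y)], axis=1)
    return np.array([
        y.std() / (abs(y.mean()) + 1e-12),                      # relative spread of fitness values
        np.corrcoef(dist_to_best, y)[0, 1],                     # fitness-distance correlation
        ((y - y.mean()) ** 3).mean() / (y.std() ** 3 + 1e-12),  # skewness of the fitness distribution
    ])

# Hypothetical training data: one feature vector per benchmark problem,
# labelled with the algorithm assumed to have performed best on it.
problems = [
    (lambda x: np.sum(x ** 2), "CMA-ES"),                                # smooth, unimodal
    (lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10), "DE"),  # highly multimodal
    (lambda x: np.sum(np.abs(x)), "Nelder-Mead"),                        # non-smooth at the optimum
]
features = np.array([landscape_features(f, dim=5, seed=i) for i, (f, _) in enumerate(problems)])
labels = [name for _, name in problems]

selector = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)

# Selecting an algorithm for an unseen problem:
new_problem = lambda x: np.sum((x - 1) ** 2)
print(selector.predict([landscape_features(new_problem, dim=5, seed=42)]))
```

In practice, these toy components would be replaced by a full exploratory landscape analysis feature set (e.g., as computed by the flacco or pflacco packages) and by performance data collected over many benchmark instances.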
Related papers
- A Survey of Meta-features Used for Automated Selection of Algorithms for Black-box Single-objective Continuous Optimization [4.173197621837912]
We provide an overview of the key contributions to algorithm selection in the field of single-objective continuous black-box optimization.
We study machine learning models for automated algorithm selection, configuration, and performance prediction.
arXiv Detail & Related papers (2024-06-08T11:11:14Z) - Frugal Algorithm Selection [1.079960007119637]
There is no single algorithm that works well for all instances of a problem.
In this work, we explore reducing the cost of gathering training performance data by choosing a subset of the training instances on which to train.
We approach this problem in three ways: using active learning to decide based on prediction uncertainty, augmenting the algorithm predictors with a timeout predictor, and collecting training data using a progressively increasing timeout.
arXiv Detail & Related papers (2024-05-17T19:23:30Z) - Problem-Solving Guide: Predicting the Algorithm Tags and Difficulty for Competitive Programming Problems [7.955313479061445]
Most tech companies, including Google, Meta, and Amazon, require the ability to solve algorithmic problems.
Our study addresses the task of predicting the algorithm tag as a useful tool for engineers and developers.
We also consider predicting the difficulty level of algorithm problems, which can serve as guidance for estimating the time required to solve a problem.
arXiv Detail & Related papers (2023-10-09T15:26:07Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Algorithm Selection on a Meta Level [58.720142291102135]
We introduce the problem of meta algorithm selection, which essentially asks for the best way to combine a given set of algorithm selectors.
We present a general methodological framework for meta algorithm selection as well as several concrete learning methods as instantiations of this framework.
arXiv Detail & Related papers (2021-07-20T11:23:21Z) - Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta-algorithm-selection can indeed prove beneficial in some cases.
arXiv Detail & Related papers (2020-11-17T17:27:33Z) - Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime (see the toy sketch after this list).
We leverage such models as the basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z) - Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z) - Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
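As a concrete illustration of the censored-runtime theme in the entries above (Machine Learning for Online Algorithm Selection under Censored Feedback and Run2Survive), the toy sketch below estimates expected runtimes from partially censored observations and picks the algorithm with the lower estimate. It assumes an exponential runtime model and a fixed cutoff; the solver names and timings are invented, and the actual papers use far richer models.

```python
# Toy example: using right-censored runtimes (runs cut off at a timeout)
# to estimate expected runtime per algorithm and select the faster one.
# Assumes an exponential runtime model; this is NOT Run2Survive's model.
import numpy as np

CUTOFF = 100.0  # timeout in seconds; runs hitting it are censored

def expected_runtime(times, finished):
    """MLE of the mean runtime under an exponential model with right censoring:
    lambda_hat = (#finished runs) / (total observed time), mean = 1 / lambda_hat."""
    times = np.asarray(times, dtype=float)
    finished = np.asarray(finished, dtype=bool)
    if finished.sum() == 0:
        return np.inf  # no run ever finished before the cutoff
    return times.sum() / finished.sum()

# Hypothetical observations: (observed time, did the run finish before CUTOFF?)
observations = {
    "solver_A": ([12.0, 30.0, CUTOFF, 8.0, CUTOFF], [True, True, False, True, False]),
    "solver_B": ([55.0, CUTOFF, 70.0, CUTOFF, CUTOFF], [True, False, True, False, False]),
}

estimates = {name: expected_runtime(t, f) for name, (t, f) in observations.items()}
print(estimates)                                # solver_A ~ 83.3 s, solver_B = 212.5 s
print("selected:", min(estimates, key=estimates.get))
```

Note that simply averaging the observed times (treating censored runs as if they had finished exactly at the cutoff) would underestimate both runtimes; the censoring-aware estimate divides the total observed time only by the number of runs that actually finished.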