Run2Survive: A Decision-theoretic Approach to Algorithm Selection based
on Survival Analysis
- URL: http://arxiv.org/abs/2007.02816v2
- Date: Fri, 10 Jul 2020 08:07:47 GMT
- Title: Run2Survive: A Decision-theoretic Approach to Algorithm Selection based
on Survival Analysis
- Authors: Alexander Tornede, Marcel Wever, Stefan Werner, Felix Mohr, Eyke Hüllermeier
- Abstract summary: Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
- Score: 75.64261155172856
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithm selection (AS) deals with the automatic selection, from a
fixed set of candidate algorithms, of the algorithm most suitable for a specific
instance of an algorithmic problem class, where "suitability" often refers to an
algorithm's runtime. Due to possibly extremely long runtimes of candidate
algorithms, training data for algorithm selection models is usually generated
under time constraints in the sense that not all algorithms are run to
completion on all instances. Thus, training data usually comprises censored
information, as the true runtime of algorithms that timed out remains unknown.
However, many standard AS approaches are not able to handle such information
properly. Survival analysis (SA), on the other hand, naturally supports
censored data and offers appropriate ways to use such data for learning
distributional models of algorithm runtime, as we demonstrate in this work. We
leverage such models as a basis of a sophisticated decision-theoretic approach
to algorithm selection, which we dub Run2Survive. Moreover, taking advantage of
a framework of this kind, we advocate a risk-averse approach to algorithm
selection, in which the avoidance of a timeout is given high priority. In an
extensive experimental study with the standard benchmark ASlib, our approach is
shown to be highly competitive and in many cases even superior to
state-of-the-art AS approaches.
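To make this concrete, the following minimal Python sketch (not the authors' implementation) shows one way censored runtime data can drive such a selector: one Weibull accelerated-failure-time model per algorithm is fitted with the lifelines library on runtimes right-censored at the cutoff, and a new instance is assigned the algorithm with the smallest expected PAR10-style cost computed from its predicted survival curve. The library choice, feature names, synthetic data, and cost function are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch: one censoring-aware survival model per algorithm,
    # selection by a risk-averse expected cost derived from the survival curve.
    import numpy as np
    import pandas as pd
    from lifelines import WeibullAFTFitter

    rng = np.random.default_rng(0)
    CUTOFF = 100.0                     # time limit used when generating training runs
    N_INSTANCES, N_FEATURES = 300, 3
    ALGORITHMS = ["algo_a", "algo_b"]
    FEATURES = [f"f{k}" for k in range(N_FEATURES)]

    # Synthetic training data: runtimes are right-censored at the cutoff.
    X = rng.uniform(size=(N_INSTANCES, N_FEATURES))
    models = {}
    for j, algo in enumerate(ALGORITHMS):
        true_rt = np.exp(2.0 + 2.5 * X[:, j] + rng.normal(scale=0.5, size=N_INSTANCES))
        df = pd.DataFrame(X, columns=FEATURES)
        df["runtime"] = np.minimum(true_rt, CUTOFF)        # observed (possibly censored) runtime
        df["finished"] = (true_rt <= CUTOFF).astype(int)   # event indicator, 0 = timeout
        aft = WeibullAFTFitter()
        aft.fit(df, duration_col="runtime", event_col="finished")  # censoring-aware MLE
        models[algo] = aft

    def expected_par10(surv, times, cutoff):
        # E[PAR10] = E[min(T, C)] + 9 * C * P(T > C) = int_0^C S(t) dt + 9 * C * S(C)
        return np.trapz(surv, times) + 9.0 * cutoff * surv[-1]

    # Risk-averse selection for a new instance.
    x_new = pd.DataFrame(rng.uniform(size=(1, N_FEATURES)), columns=FEATURES)
    grid = np.linspace(0.0, CUTOFF, 200)
    scores = {}
    for algo, aft in models.items():
        surv = aft.predict_survival_function(x_new, times=grid).values[:, 0]
        scores[algo] = expected_par10(surv, grid, CUTOFF)
    print(scores, "->", min(scores, key=scores.get))

Nothing above reflects the paper's actual model or risk measure; the sketch only illustrates how censored runs enter the fitting step and how a predicted survival curve is turned into a selection score.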
Related papers
- Frugal Algorithm Selection [1.079960007119637]
There is no single algorithm that works well for all instances of a problem.
In this work, we explore reducing the cost of generating training data for algorithm selection by choosing a subset of the training instances on which to train.
We approach this problem in three ways: using active learning to decide based on prediction uncertainty, augmenting the algorithm predictors with a timeout predictor, and collecting training data using a progressively increasing timeout.
arXiv Detail & Related papers (2024-05-17T19:23:30Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping space and time complexity independent of the time horizon. (A generic, hypothetical sketch of such a censored-runtime bandit appears after this list.)
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Algorithm Selection on a Meta Level [58.720142291102135]
We introduce the problem of meta algorithm selection, which essentially asks for the best way to combine a given set of algorithm selectors.
We present a general methodological framework for meta algorithm selection as well as several concrete learning methods as instantiations of this framework.
arXiv Detail & Related papers (2021-07-20T11:23:21Z)
- Leveraging Benchmarking Data for Informed One-Shot Dynamic Algorithm Selection [0.9281671380673306]
A key challenge in the application of evolutionary algorithms is the selection of an algorithm instance that best suits the problem at hand.
We analyze in this work how prior benchmarking data can be used to infer informed dynamic algorithm selection schemes for the solution of pseudo-Boolean optimization problems.
arXiv Detail & Related papers (2021-02-12T12:27:02Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta algorithm selection can indeed prove beneficial in some cases.
arXiv Detail & Related papers (2020-11-17T17:27:33Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
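As a companion to the censored-feedback OAS entry above, the following generic Python sketch (hypothetical, not the method of that paper) shows how a runtime-oriented bandit can cope with censored observations while keeping per-algorithm memory independent of the time horizon: observed runtimes are capped at the cutoff, and selection minimises a lower confidence bound on the capped runtime.

    # Hypothetical censored-runtime bandit: O(1) statistics per algorithm,
    # timeouts enter as runtimes capped at the cutoff.
    import math
    import random

    class CensoredRuntimeBandit:
        def __init__(self, n_algorithms, cutoff):
            self.cutoff = cutoff
            self.counts = [0] * n_algorithms   # pulls per algorithm
            self.sums = [0.0] * n_algorithms   # sums of capped runtimes

        def select(self, t):
            # Play each algorithm once, then minimise an LCB on E[min(runtime, cutoff)].
            for a, c in enumerate(self.counts):
                if c == 0:
                    return a
            def lcb(a):
                mean = self.sums[a] / self.counts[a]
                width = self.cutoff * math.sqrt(2.0 * math.log(t + 1) / self.counts[a])
                return mean - width
            return min(range(len(self.counts)), key=lcb)

        def update(self, a, runtime, timed_out):
            # A timeout only reveals runtime > cutoff; the capped value is a common
            # surrogate that keeps the running statistics well defined.
            self.counts[a] += 1
            self.sums[a] += self.cutoff if timed_out else min(runtime, self.cutoff)

    # Toy usage with two hypothetical solvers whose true runtimes are exponential.
    random.seed(0)
    bandit = CensoredRuntimeBandit(n_algorithms=2, cutoff=10.0)
    for t in range(2000):
        a = bandit.select(t)
        true_rt = random.expovariate(1.0 / (4.0 if a == 0 else 8.0))
        bandit.update(a, true_rt, timed_out=true_rt > 10.0)
    print(bandit.counts)   # the faster solver (index 0) accumulates more pulls

Capping timeouts at the cutoff is only one common surrogate for censored feedback; the cited paper treats this setting in far more depth.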
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.