HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection
- URL: http://arxiv.org/abs/2210.17341v1
- Date: Mon, 31 Oct 2022 14:06:11 GMT
- Title: HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection
- Authors: Lukas Fehring, Jonas Hanselle, Alexander Tornede
- Abstract summary: We propose a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses.
HARRIS' decisions are based on a forest model whose trees are built from splits optimized on a hybrid ranking and regression loss function.
- Score: 75.84584400866254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is well known that different algorithms perform differently well on an
instance of an algorithmic problem, motivating algorithm selection (AS): Given
an instance of an algorithmic problem, which is the most suitable algorithm to
solve it? As such, the AS problem has received considerable attention resulting
in various approaches - many of which either solve a regression or ranking
problem under the hood. Although both of these formulations yield very natural
ways to tackle AS, they have considerable weaknesses. On the one hand,
correctly predicting the performance of an algorithm on an instance is a
sufficient, but not a necessary condition to produce a correct ranking over
algorithms and in particular ranking the best algorithm first. On the other
hand, classical ranking approaches often do not account for concrete
performance values available in the training data, but only leverage rankings
composed from such data. We propose HARRIS - Hybrid rAnking and RegRessIon
foreSts - a new algorithm selector leveraging special forests, combining the
strengths of both approaches while alleviating their weaknesses. HARRIS'
decisions are based on a forest model, whose trees are created based on splits
optimized on a hybrid ranking and regression loss function. As our preliminary
experimental study on ASlib shows, HARRIS improves over standard algorithm
selection approaches on some scenarios, showing that combining ranking and
regression in trees is indeed promising for AS.
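The abstract describes trees whose splits are optimized on a hybrid ranking and regression loss, but does not spell the loss out here. The following is only a minimal sketch, assuming a simple convex combination of a squared-error regression term and a pairwise 0/1 ranking term over predicted per-algorithm performances; the function name `hybrid_loss` and the weight `lam` are illustrative and not taken from HARRIS.

```python
import numpy as np

def hybrid_loss(y_true, y_pred, lam=0.5):
    """Convex combination of a regression loss and a pairwise ranking loss.

    y_true, y_pred: arrays of shape (n_algorithms,) holding observed and
    predicted performance values for one problem instance (lower = better).
    lam: hypothetical weight between the two terms; HARRIS' actual loss and
    weighting are defined in the paper, not reproduced here.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    # Regression term: mean squared error on the raw performance values.
    regression = np.mean((y_true - y_pred) ** 2)

    # Ranking term: fraction of algorithm pairs whose predicted order
    # disagrees with the observed order (pairwise 0/1 ranking loss).
    n = len(y_true)
    pairs, discordant = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if y_true[i] == y_true[j]:
                continue
            pairs += 1
            if (y_true[i] - y_true[j]) * (y_pred[i] - y_pred[j]) < 0:
                discordant += 1
    ranking = discordant / pairs if pairs else 0.0

    return lam * regression + (1.0 - lam) * ranking


# Example: the predictions rank the best algorithm first but misjudge magnitudes,
# so only the regression term contributes to the loss.
print(hybrid_loss([10.0, 25.0, 40.0], [12.0, 20.0, 55.0], lam=0.5))
```

In a forest, a criterion of this kind would be evaluated on candidate splits in place of a pure variance or pure ranking criterion; the split criterion actually used by HARRIS is described in the paper.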
Related papers
- A Novel Ranking Scheme for the Performance Analysis of Stochastic Optimization Algorithms using the Principles of Severity [9.310464457958844]
We provide a novel ranking scheme to rank the algorithms over multiple single-objective optimization problems.
The results of the algorithms are compared using a robust bootstrapping-based hypothesis testing procedure.
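As a rough illustration of the bootstrapping-based comparison mentioned above, here is a minimal sketch of a two-sided bootstrap test for a difference in mean objective value between two stochastic optimizers; the severity-based ranking scheme of the cited paper is considerably more elaborate, and all names and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pvalue(results_a, results_b, n_boot=10_000):
    """Two-sided bootstrap test for a difference in mean performance.

    results_a, results_b: per-run objective values of two stochastic
    optimizers on the same problem (an illustrative stand-in for the
    severity-based procedure of the cited paper).
    """
    a, b = np.asarray(results_a, float), np.asarray(results_b, float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])           # resample under the null
    diffs = np.empty(n_boot)
    for k in range(n_boot):
        resampled = rng.choice(pooled, size=pooled.size, replace=True)
        diffs[k] = resampled[:a.size].mean() - resampled[a.size:].mean()
    return np.mean(np.abs(diffs) >= abs(observed))

# Toy example with synthetic results for two optimizers.
print(bootstrap_pvalue(rng.normal(1.0, 0.2, 30), rng.normal(1.2, 0.2, 30)))
```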
arXiv Detail & Related papers (2024-05-31T19:35:34Z) - Improved Learning-Augmented Algorithms for the Multi-Option Ski Rental
Problem via Best-Possible Competitive Analysis [0.1529342790344802]
We present improved learning-augmented algorithms for the multi-option ski rental problem.
Our algorithm is based on a new, provably best-possible randomized competitive algorithm for the problem.
arXiv Detail & Related papers (2023-02-14T05:05:03Z) - Heuristic Search for Rank Aggregation with Application to Label Ranking [16.275063634853584]
We propose an effective hybrid evolutionary ranking algorithm to solve the rank aggregation problem.
The algorithm features a semantic crossover based on concordant pairs and a late acceptance local search reinforced by an efficient incremental evaluation technique.
Experiments are conducted to assess the algorithm, indicating a highly competitive performance on benchmark instances.
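The crossover above is described as operating on concordant pairs. As an illustration of that building block only, the sketch below counts the pairs of items that two rankings order the same way; the semantic crossover and the late acceptance local search themselves are not reproduced.

```python
from itertools import combinations

def concordant_pairs(rank_a, rank_b):
    """Count item pairs ordered the same way in both rankings.

    rank_a, rank_b: dicts mapping item -> rank position (1 = best).
    """
    items = list(rank_a)
    count = 0
    for x, y in combinations(items, 2):
        if (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y]) > 0:
            count += 1
    return count

# Example with two label rankings over four items: 5 of the 6 pairs agree.
print(concordant_pairs({"a": 1, "b": 2, "c": 3, "d": 4},
                       {"a": 2, "b": 1, "c": 3, "d": 4}))
```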
arXiv Detail & Related papers (2022-01-11T11:43:17Z) - Regression with Missing Data, a Comparison Study of Techniques Based on
Random Forests [0.0]
In this paper we present the practical benefits of a new random forest algorithm to deal with missing values in the sample.
A variety of missing value mechanisms (such as MCAR, MAR, MNAR) are considered and simulated.
We study the quadratic errors and the bias of our algorithm and compare it to the most popular missing-value random forest algorithms in the literature.
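To make the missingness mechanisms concrete, here is a minimal sketch of simulating MCAR, where each entry is dropped independently of all values; MAR and MNAR make the missingness depend on observed or on the missing values themselves and are not shown. The helper name and rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_mcar(X, missing_rate=0.2):
    """Return a copy of X with entries set to NaN completely at random.

    Under MCAR the missingness mask is independent of both observed and
    unobserved values; MAR/MNAR would make the mask depend on X.
    """
    X = np.array(X, dtype=float)
    mask = rng.random(X.shape) < missing_rate
    X[mask] = np.nan
    return X

X = rng.normal(size=(100, 5))
X_mcar = make_mcar(X, missing_rate=0.2)
print(np.isnan(X_mcar).mean())  # roughly 0.2
```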
arXiv Detail & Related papers (2021-10-18T14:02:15Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
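As a hedged illustration of this setting, the sketch below runs a plain UCB1-style bandit over candidate algorithms with runtimes censored at a cutoff; the runtime-oriented losses and the space- and time-complexity guarantees of the cited paper are not reproduced, and the callback `run` is hypothetical.

```python
import math
import random

def ucb_algorithm_selection(run, n_arms, horizon, cutoff):
    """UCB1 over candidate algorithms with runtimes censored at `cutoff`.

    run(arm) -> true runtime of the chosen algorithm on the next instance
    (a hypothetical callback); the observed loss is min(runtime, cutoff),
    so a timeout only reveals a lower bound on the true runtime.
    """
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:                      # play every arm once first
            arm = t - 1
        else:                                # minimize loss = maximize -loss + bonus
            arm = max(range(n_arms),
                      key=lambda a: -(sums[a] / counts[a])
                      + cutoff * math.sqrt(2 * math.log(t) / counts[a]))
        loss = min(run(arm), cutoff)          # censored observation
        counts[arm] += 1
        sums[arm] += loss
    return min(range(n_arms), key=lambda a: sums[a] / counts[a])

# Toy example: three algorithms with exponentially distributed runtimes.
means = [3.0, 1.0, 5.0]
best = ucb_algorithm_selection(lambda a: random.expovariate(1 / means[a]),
                               n_arms=3, horizon=500, cutoff=10.0)
print(best)  # most likely 1, the arm with the smallest mean runtime
```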
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta-algorithm-selection can indeed prove beneficial in some cases.
arXiv Detail & Related papers (2020-11-17T17:27:33Z) - Run2Survive: A Decision-theoretic Approach to Algorithm Selection based
on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
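As a rough illustration of learning from censored runtimes, the sketch below estimates each algorithm's survival function with a hand-rolled Kaplan-Meier estimator and picks the algorithm with the lowest estimated timeout probability; Run2Survive's parametric runtime models and decision-theoretic selection criterion are more sophisticated, and the data below are made up.

```python
import numpy as np

def km_survival_at(times, finished, t):
    """Kaplan-Meier estimate of P(runtime > t) from censored runtimes.

    times: observed runtimes (capped at the cutoff for unfinished runs),
    finished: 1 if the run completed, 0 if it was censored (timed out).
    """
    times = np.asarray(times, float)
    finished = np.asarray(finished, int)
    surv = 1.0
    for u in np.unique(times[finished == 1]):   # event times, sorted
        if u > t:
            break
        at_risk = np.sum(times >= u)
        events = np.sum((times == u) & (finished == 1))
        surv *= 1.0 - events / at_risk
    return surv

# Pick the algorithm with the lowest estimated probability of hitting the cutoff.
cutoff = 100.0
observations = {              # hypothetical (runtimes, finished flags) per algorithm
    "algo_a": ([12, 35, 100, 100, 60], [1, 1, 0, 0, 1]),
    "algo_b": ([80, 95, 100, 40, 55], [1, 1, 0, 1, 1]),
}
timeout_prob = {name: km_survival_at(ts, fs, cutoff)
                for name, (ts, fs) in observations.items()}
print(min(timeout_prob, key=timeout_prob.get))  # algo_b in this toy example
```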
arXiv Detail & Related papers (2020-07-06T15:20:17Z) - Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
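The dyadic idea in the entry above can be illustrated with a small sketch: concatenate instance features and algorithm features into one input vector, train a single regressor on (instance, algorithm) pairs, and at selection time score every candidate algorithm for the new instance. The feature sets and learners of the cited paper are not reproduced, and all data below are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical toy data: 50 instances x 4 features, 20 algorithms x 3 features.
inst_feats = rng.normal(size=(50, 4))
algo_feats = rng.normal(size=(20, 3))
perf = rng.normal(size=(50, 20))              # observed performance matrix

# Dyadic training set: one row per (instance, algorithm) pair.
X = np.array([np.concatenate([inst_feats[i], algo_feats[a]])
              for i in range(50) for a in range(20)])
y = perf.reshape(-1)                          # row-major, matches X ordering

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Selection for a new instance: score every candidate algorithm, take the best.
new_inst = rng.normal(size=4)
candidates = np.array([np.concatenate([new_inst, algo_feats[a]])
                       for a in range(20)])
print(int(np.argmin(model.predict(candidates))))  # assuming lower = better
```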
This list is automatically generated from the titles and abstracts of the papers in this site.