A machine learning based algorithm selection method to solve the minimum
cost flow problem
- URL: http://arxiv.org/abs/2210.02195v1
- Date: Mon, 3 Oct 2022 16:06:24 GMT
- Title: A machine learning based algorithm selection method to solve the minimum
cost flow problem
- Authors: Philipp Herrmann, Anna Meyer, Stefan Ruzika, Luca E. Schäfer and
Fabian von der Warth
- Abstract summary: We train several machine learning classifiers to predict the fastest among a given set of solvers.
It is shown that tree-based models appear to adapt and exploit the relevant structures of the minimum-cost flow problem.
- Score: 0.8399688944263843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The minimum cost flow problem is one of the most studied network optimization
problems and appears in numerous applications. Some efficient algorithms exist
for this problem, which are freely available in the form of libraries or
software packages. Notably, no single solver outperforms all the other
solution methods on every instance. Thus, the question arises whether
the fastest algorithm can be selected for a given instance based on the
characteristics of the instance. To this end, we train several machine learning
classifiers to predict the fastest among a given set of solvers. We accomplish
this by creating a representative data set of 81,000 instances and
characterizing each of these instances by a vector of relevant features. To
achieve better performance, we conduct a grid search to optimize the
hyperparameters of the classifiers. Finally, we evaluate the different
classifiers by means of accuracy. It is shown that tree-based models appear to
adapt and exploit the relevant structures of the minimum-cost flow problem
particularly well on a large number of instances, predicting the fastest solver
with an accuracy of more than 90%.
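As a rough illustration of the pipeline the abstract describes, the sketch below trains a random forest (one kind of tree-based model) with a hyperparameter grid search and evaluates it by accuracy. The features, solver labels, and synthetic data are placeholders, not the paper's actual 81,000-instance data set or feature design.

```python
# Minimal sketch, assuming instance features and fastest-solver labels are
# already available; all data below is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)

# X: one feature vector per instance (e.g. node count, arc count, cost range).
# y: index of the measured fastest solver, i.e. the classification target.
X = rng.random((1000, 12))                 # stand-in for the 81,000 instances
y = rng.integers(0, 4, size=1000)          # stand-in for 4 candidate solvers

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grid search over hyperparameters, as described in the abstract.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=5,
)
grid.fit(X_train, y_train)

# Accuracy: fraction of test instances whose fastest solver is predicted.
print("accuracy:", grid.score(X_test, y_test))
```

Gradient-boosted trees would slot into the same pipeline; the more-than-90% accuracy cited above refers to the paper's own features and data, not to this toy setup.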
Related papers
- Identifying Easy Instances to Improve Efficiency of ML Pipelines for Algorithm-Selection [0.20718016474717196]
We propose a method for identifying easy instances which can be solved quickly using a generalist solver without any need for algorithm-selection.
This saves computational budget associated with feature-computation which can then be used elsewhere in an AS pipeline.
arXiv Detail & Related papers (2024-06-24T12:25:04Z)
- Frugal Algorithm Selection [1.079960007119637]
There is no single algorithm that works well for all instances of a problem.
In this work, we explore reducing the cost of collecting training data by choosing a subset of the training instances on which to train.
We approach this problem in three ways: using active learning to decide based on prediction uncertainty, augmenting the algorithm predictors with a timeout predictor, and collecting training data using a progressively increasing timeout.
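One of the three strategies above, uncertainty-driven selection, might look roughly like the following sketch; label_instance is a hypothetical oracle standing in for the expensive step of timing every candidate algorithm on an instance.

```python
# Hedged sketch of pool-based uncertainty sampling; not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_pool = rng.random((500, 8))               # unlabeled instance features

def label_instance(i):
    # Hypothetical oracle: in practice, run every candidate algorithm on
    # instance i and return the index of the fastest one.
    return int(X_pool[i, 0] > 0.5)

# Seed with the two feature extremes so both classes appear from the start.
labeled = [int(np.argmin(X_pool[:, 0])), int(np.argmax(X_pool[:, 0]))]
y = {i: label_instance(i) for i in labeled}

for _ in range(10):                         # a few active-learning rounds
    clf = RandomForestClassifier(random_state=0)
    clf.fit(X_pool[labeled], [y[i] for i in labeled])
    proba = np.sort(clf.predict_proba(X_pool), axis=1)
    margin = proba[:, -1] - proba[:, -2]    # small margin = uncertain
    margin[labeled] = np.inf                # never re-query labeled instances
    pick = int(np.argmin(margin))
    y[pick] = label_instance(pick)          # the costly labeling step
    labeled.append(pick)
```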
arXiv Detail & Related papers (2024-05-17T19:23:30Z)
- Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut [1.4624458429745086]
We build upon recent work in this line of research by considering the setup where, instead of selecting a single algorithm that has the best performance, we allow the possibility of selecting an algorithm based on the instance to be solved.
In particular, given a representative sample of instances, we learn a neural network that maps an instance of the problem to the most appropriate algorithm for that instance.
In other words, the neural network will take as input a mixed-integer optimization instance and output a decision that will result in a small branch-and-cut tree for that instance.
arXiv Detail & Related papers (2024-02-04T03:03:27Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
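To make the setting concrete, here is an illustrative UCB-style loop over capped (censored) runtimes; this sketches the problem setup only, not the adapted bandit algorithms from the paper, and run_algorithm is a made-up stand-in.

```python
# Illustrative bandit over algorithms with runtime censoring at a cutoff.
import math, random

random.seed(0)
K, CAP = 3, 10.0                      # candidate algorithms, censoring cutoff

def run_algorithm(k, cap):
    # Hypothetical runner: true runtime is exponential; observations above
    # the cutoff are censored at the cutoff.
    t = random.expovariate(1.0 / (2 + 3 * k))
    return min(t, cap), t >= cap      # (observed runtime, was it censored?)

counts, totals = [0] * K, [0.0] * K
for t in range(1, 501):               # instances arrive one after another
    if t <= K:
        k = t - 1                     # try every algorithm once
    else:
        # UCB on losses (capped runtimes): pick the smallest lower bound.
        k = min(range(K), key=lambda a: totals[a] / counts[a]
                - CAP * math.sqrt(2 * math.log(t) / counts[a]))
    runtime, _censored = run_algorithm(k, CAP)
    counts[k] += 1
    totals[k] += runtime
```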
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
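A much-simplified sketch of this decision-theoretic idea, assuming exponential runtime models (the paper uses richer survival models), a PAR10 penalty, and made-up timing data:

```python
# Fit a per-algorithm exponential model to censored runtimes, then select
# the algorithm minimizing the expected PAR10 score; purely illustrative.
import math

CUTOFF = 100.0                                    # censoring threshold

# (observed runtime, event observed?) per algorithm; True = finished,
# False = hit the cutoff (censored). Numbers are made up.
observations = {
    "network_simplex": [(12.0, True), (100.0, False), (30.0, True)],
    "cost_scaling":    [(55.0, True), (60.0, True), (40.0, True)],
}

def expected_par10(rate, cutoff):
    # E[PAR10] for an exponential runtime with the given rate: expected
    # runtime below the cutoff, plus a 10x-cutoff penalty for timeouts.
    tail = math.exp(-rate * cutoff)
    return (1.0 - tail) / rate + 9.0 * cutoff * tail

best = None
for algo, obs in observations.items():
    events = sum(1 for _, done in obs if done)
    total = sum(t for t, _ in obs)
    rate = max(events, 1) / total                 # censored-data MLE (guarded)
    score = expected_par10(rate, CUTOFF)
    if best is None or score < best[1]:
        best = (algo, score)
print("selected:", best[0])
```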
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Approximate learning of high dimensional Bayesian network structures via pruning of Candidate Parent Sets [6.85316573653194]
Approximate solutions exist because exact learning is not applicable to networks of moderate or higher complexity.
Some approximate algorithms are optimised to handle thousands of variables, but may still be unable to learn such high dimensional structures.
This paper explores a strategy towards pruning the size of candidate parent sets, aimed at high dimensionality problems.
arXiv Detail & Related papers (2020-06-08T17:09:18Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
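A minimal sketch of one plausible dyadic setup: concatenate instance and algorithm feature vectors and learn a single runtime regressor over the joint space, so that even rarely observed algorithms can be scored. All features and runtimes below are synthetic.

```python
# Dyadic (instance, algorithm) regression sketch; not the paper's exact model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
inst_feats = rng.random((200, 6))       # one feature row per problem instance
algo_feats = rng.random((50, 4))        # one feature row per candidate algorithm
w = rng.random(6)                       # hidden weights for synthetic runtimes

# Training dyads: concatenated (instance, algorithm) features -> runtime.
pairs, runtimes = [], []
for i in range(200):
    for a in rng.choice(50, size=5, replace=False):   # sparse observations
        pairs.append(np.concatenate([inst_feats[i], algo_feats[a]]))
        runtimes.append(float(inst_feats[i] @ w + algo_feats[a, 0]))

model = RandomForestRegressor(random_state=0).fit(np.array(pairs), runtimes)

# Selection: score every candidate algorithm for a new instance, take argmin.
x_new = rng.random(6)
dyads = np.array([np.concatenate([x_new, algo_feats[a]]) for a in range(50)])
chosen = int(np.argmin(model.predict(dyads)))
print("chosen algorithm index:", chosen)
```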
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
- Optimal Clustering from Noisy Binary Feedback [75.17453757892152]
We study the problem of clustering a set of items from binary user feedback.
We devise an algorithm with a minimal cluster recovery error rate.
For adaptive selection, we develop an algorithm inspired by the derivation of the information-theoretical error lower bounds.
arXiv Detail & Related papers (2019-10-14T09:18:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.