(Machine) Learning to Improve the Empirical Performance of Discrete
Algorithms
- URL: http://arxiv.org/abs/2109.14271v1
- Date: Wed, 29 Sep 2021 08:33:09 GMT
- Title: (Machine) Learning to Improve the Empirical Performance of Discrete
Algorithms
- Authors: Imran Adham, Jesus De Loera, Zhenyang Zhang
- Abstract summary: We train machine learning methods to select the optimal algorithm for given data without human expert opinion.
Our framework recommends various pivot rules that improve overall total performance over just using a fixed default pivot rule.
For the all-pairs shortest path problem, the trained models made a large improvement and our selection is on average 0.07 percent away from the optimal choice.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper discusses a data-driven, empirically-based framework to make
algorithmic decisions or recommendations without expert knowledge. We improve
the performance of two algorithmic case studies: the selection of a pivot rule
for the Simplex method and the selection of an all-pairs shortest paths
algorithm. We train machine learning methods to select the optimal algorithm
for given data without human expert opinion. We use two types of techniques,
neural networks and boosted decision trees. We concluded, based on our
experiments, that:
1) Our selection framework recommends various pivot rules that improve
overall total performance over just using a fixed default pivot rule.
Over many years, experts identified the steepest-edge pivot rule as a favorite.
Our data analysis corroborates this human expert knowledge: the number of
iterations taken by steepest-edge is no more than 4 percent more than the
optimal selection, but this time the conclusion was obtained using machine
learning. Here our recommendation system performs best when using gradient
boosted trees.
2) For the all-pairs shortest path problem, the trained models made a large
improvement and our selection is on average 0.07 percent away from the optimal
choice. The conclusions do not seem to be affected by the machine learning
method we used.
We tried to make a parallel analysis of both algorithmic problems, but it is
clear that there are intrinsic differences. For example, in the all-pairs
shortest path problem the graph density is a reasonable predictor, but there is
no analogous single parameter for decisions in the Simplex method.
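The graph-density observation can be made concrete with a rule-of-thumb selector: on sparse graphs, running Dijkstra from every source (O(V·(E + V log V))) tends to beat Floyd-Warshall (O(V³)), and density is the natural switch. A minimal sketch follows; the 0.3 threshold is an assumption chosen purely for illustration, not a value from the paper.

```python
# Illustrative rule-of-thumb: pick an all-pairs shortest path algorithm
# from graph density alone. The 0.3 cutoff is a hypothetical threshold.
def density(num_vertices: int, num_edges: int) -> float:
    """Edge density of a simple directed graph, in [0, 1]."""
    if num_vertices < 2:
        return 0.0
    return num_edges / (num_vertices * (num_vertices - 1))

def choose_apsp(num_vertices: int, num_edges: int) -> str:
    # Dense graphs: Floyd-Warshall's O(V^3) with small constants wins.
    # Sparse graphs: Dijkstra from every source, O(V * (E + V log V)).
    if density(num_vertices, num_edges) > 0.3:
        return "floyd_warshall"
    return "repeated_dijkstra"

print(choose_apsp(1000, 5000))  # sparse graph
print(choose_apsp(100, 8000))   # dense graph
```

A learned selector replaces the hand-picked threshold with a model trained on measured runtimes, which is what allows it to get within 0.07 percent of the optimal choice.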
Related papers
- Meta-Learning from Learning Curves for Budget-Limited Algorithm Selection [11.409496019407067]
In a budget-limited scenario, it is crucial to carefully select an algorithm candidate and allocate a budget for training it.
We propose a novel framework in which an agent must select, during the learning process, the most promising algorithm without waiting until it is fully trained.
arXiv Detail & Related papers (2024-10-10T08:09:58Z)
- Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
arXiv Detail & Related papers (2023-06-11T07:28:35Z)
- Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability [59.81339109121384]
We study the $K$-armed contextual dueling bandit problem, a sequential decision making setting in which the learner uses contextual information to make two decisions, but only observes preference-based feedback suggesting that one decision was better than the other.
We provide a new algorithm that achieves the optimal regret rate for a new notion of best response regret, which is a strictly stronger performance measure than those considered in prior works.
arXiv Detail & Related papers (2021-11-24T07:14:57Z)
- Leveraging Experience in Lazy Search [37.75223642505171]
Lazy graph search algorithms are efficient at solving motion planning problems where edge evaluation is the computational bottleneck.
We formulate this problem as a Markov Decision Process (MDP) on the state of the search problem.
We show that we can compute oracular selectors that can solve the MDP during training.
arXiv Detail & Related papers (2021-10-10T00:46:44Z)
- Learning Primal Heuristics for Mixed Integer Programs [5.766851255770718]
We investigate whether effective primal heuristics can be automatically learned via machine learning.
We propose a new method to represent an optimization problem as a graph, and train a Graph Convolutional Network on solved problem instances with known optimal solutions.
The prediction of variable solutions is then leveraged by a novel configuration of the B&B method, Probabilistic Branching with guided Depth-first Search.
arXiv Detail & Related papers (2021-07-02T06:46:23Z)
- Learning to Sparsify Travelling Salesman Problem Instances [0.5985204759362747]
We use machine-learning-based pruning as a pre-processing step, followed by an exact integer programming approach, to sparsify travelling salesman problem instances.
Our learning approach requires very little training data and is amenable to mathematical analysis.
arXiv Detail & Related papers (2021-04-19T14:35:14Z)
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any $k$ an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
- Optimal Sparse Decision Trees [25.043477914272046]
This work introduces the first practical algorithm for optimal decision trees for binary variables.
The algorithm is a co-design of analytical bounds that reduce the search space and modern systems techniques.
Our experiments highlight advantages in scalability, speed, and proof of optimality.
arXiv Detail & Related papers (2019-04-29T17:56:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.