Learnable and Instance-Robust Predictions for Online Matching, Flows and
Load Balancing
- URL: http://arxiv.org/abs/2011.11743v2
- Date: Fri, 2 Jul 2021 01:59:10 GMT
- Authors: Thomas Lavastida, Benjamin Moseley, R. Ravi and Chenyang Xu
- Abstract summary: We propose a new model for augmenting algorithms with predictions by requiring that they are formally learnable and instance robust.
We design online algorithms with predictions for a network flow allocation problem and restricted assignment makespan minimization.
- Score: 12.961453245099044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new model for augmenting algorithms with predictions by
requiring that they are formally learnable and instance robust. Learnability
ensures that predictions can be efficiently constructed from a reasonable
amount of past data. Instance robustness ensures that the prediction is robust
to modest changes in the problem input, where the measure of the change may be
problem specific. Instance robustness insists on a smooth degradation in
performance as a function of the change. Ideally, the performance is never
worse than worst-case bounds. This also allows predictions to be objectively
compared.
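Concretely, one hedged way to write down a guarantee of this shape (the notation below is our own illustration, not lifted from the paper) is:

```latex
% Illustrative notation only: I is the realized instance, \hat{I} the instance
% the prediction was built for, \eta a problem-specific distance between them,
% and \rho_{\mathrm{wc}} the worst-case competitive ratio.
\[
  \mathrm{ALG}\bigl(I, \mathrm{pred}(\hat{I})\bigr)
    \;\le\; \min\bigl\{\, f\bigl(\eta(I,\hat{I})\bigr),\; \rho_{\mathrm{wc}} \,\bigr\}
            \cdot \mathrm{OPT}(I),
  \qquad f \text{ increasing},\ f(0) \approx 1.
\]
% Exact predictions (\eta = 0) give near-optimal performance, performance
% degrades smoothly in \eta, and the bound never exceeds the worst case.
```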
We design online algorithms with predictions for a network flow allocation
problem and for restricted assignment makespan minimization. For both problems,
we establish two key properties: high-quality predictions can be learned from a
small sample of prior instances, and the predictions are instance robust, with
performance degrading smoothly as the underlying problem instance changes.
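As a toy illustration of the restricted-assignment side, the sketch below assigns each job to a feasible machine guided by predicted per-machine loads. The data layout, tie-breaking rule, and predicted_load values are hypothetical, not the paper's algorithm:

```python
# Minimal, illustrative sketch of prediction-guided restricted assignment:
# each job may only run on a subset of machines, and predicted per-machine
# loads (e.g., learned from past instances) guide placement.

def assign_jobs(jobs, num_machines, predicted_load):
    """jobs: list of (size, feasible_machines); returns makespan, assignment."""
    load = [0.0] * num_machines
    assignment = []
    for size, feasible in jobs:
        # Prefer machines furthest below their predicted load; ties broken
        # by current load (the classic greedy, worst-case rule).
        best = min(feasible, key=lambda m: (load[m] - predicted_load[m], load[m]))
        load[best] += size
        assignment.append(best)
    return max(load), assignment

# Hypothetical usage: 3 machines, predictions say machine 2 should stay light.
jobs = [(2.0, [0, 1]), (1.0, [1, 2]), (3.0, [0, 2]), (1.5, [0, 1, 2])]
makespan, plan = assign_jobs(jobs, 3, predicted_load=[3.0, 3.0, 1.0])
print(makespan, plan)
```

With accurate predictions the rule spreads load the way the prediction suggests; with useless predictions it degenerates to least-loaded greedy, which is the kind of worst-case fallback the robustness requirement asks for.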
Related papers
- Learning Sample Difficulty from Pre-trained Models for Reliable
Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
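A minimal sketch of what such a regularizer could look like, assuming per-sample difficulty scores from a pre-trained model; the loss form and the weight lam are our guesses, not the authors' exact objective:

```python
# Hedged sketch of difficulty-aware entropy regularization: harder samples,
# as judged by a large pre-trained model, get a stronger pull toward higher
# predictive entropy (i.e., less overconfidence).
import torch
import torch.nn.functional as F

def difficulty_aware_loss(logits, targets, difficulty, lam=0.1):
    """difficulty: per-sample scores in [0, 1] from a pre-trained model."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    # Subtracting entropy rewards higher entropy, weighted by difficulty.
    return (ce - lam * difficulty * entropy).mean()
```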
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- A Universal Error Measure for Input Predictions Applied to Online Graph
Problems [57.58926849872494]
We introduce a novel measure for quantifying the error in input predictions.
The measure captures errors due to absent predicted requests as well as unpredicted actual requests.
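In the same spirit (but not the paper's exact definition), a toy error measure over request multisets might charge for both kinds of mistakes separately:

```python
# Illustrative sketch only: compare predicted vs. actual request multisets
# and charge for predictions that never arrived plus arrivals never predicted.
from collections import Counter

def prediction_error(predicted, actual, cost=lambda r: 1.0):
    pred, act = Counter(predicted), Counter(actual)
    absent = pred - act      # predicted requests that did not materialize
    surplus = act - pred     # actual requests that were not predicted
    return (sum(cost(r) * k for r, k in absent.items())
            + sum(cost(r) * k for r, k in surplus.items()))

print(prediction_error(["a", "a", "b"], ["a", "b", "c"]))  # 1 absent + 1 new -> 2.0
```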
arXiv Detail & Related papers (2022-05-25T15:24:03Z)
- Efficient and Differentiable Conformal Prediction with General Function
Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the given function class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
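For context, here is the standard split-conformal baseline that such methods generalize; this is textbook conformal prediction, not the paper's learnable variant:

```python
# Split conformal prediction: calibrate a threshold on held-out
# nonconformity scores, then include every label that scores below it.
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """cal_scores: nonconformity scores on a held-out calibration set."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample corrected level
    return np.quantile(cal_scores, min(q, 1.0))

def prediction_set(probs, threshold):
    """Include each label whose nonconformity (1 - prob) is within threshold."""
    return [k for k, p in enumerate(probs) if 1.0 - p <= threshold]

scores = np.random.rand(500)                 # toy calibration scores
t = conformal_threshold(scores, alpha=0.1)
print(prediction_set([0.7, 0.2, 0.1], t))
```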
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
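A hedged sketch of one way predictions could be folded into NN-style exploration; the switching rule and the beta parameter are our own illustration, treating exploration as repeatedly moving to an unvisited vertex in a metric:

```python
# Illustrative only: follow the predicted next vertex when it is nearly as
# cheap to reach as the true nearest unvisited neighbor; otherwise fall back
# to plain Nearest Neighbor.
def explore(dist, start, predicted_order, beta=0.5):
    """dist[u][v]: metric distances; predicted_order: advised visit sequence."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, cur = [start], start
    advice = [v for v in predicted_order if v != start]
    while unvisited:
        nn = min(unvisited, key=lambda v: dist[cur][v])
        advice = [v for v in advice if v in unvisited]
        nxt = nn
        if advice and dist[cur][advice[0]] <= (1 + beta) * dist[cur][nn]:
            nxt = advice[0]          # trust the prediction when it is cheap
        unvisited.remove(nxt)
        tour.append(nxt)
        cur = nxt
    return tour
```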
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that trustworthiness predictors trained with prior-art loss functions are prone to view both correct and incorrect predictions as trustworthy.
We propose a novel steep slope loss that separates the features of correct predictions from those of incorrect predictions via two slide-like curves that oppose each other.
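Schematically (our simplification, not the authors' exact formula), two steep, opposing sigmoid-shaped penalties could look like this:

```python
# Hedged sketch: push the trust score up for correctly classified samples and
# down for incorrect ones, with steepness controlled by alpha.
import torch

def steep_slope_like_loss(trust_scores, is_correct, alpha=4.0):
    """trust_scores: raw predictor outputs; is_correct: bool tensor."""
    x = trust_scores
    pos = torch.sigmoid(-alpha * x)   # high penalty when correct but x is low
    neg = torch.sigmoid(alpha * x)    # high penalty when incorrect but x is high
    return torch.where(is_correct, pos, neg).mean()
```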
arXiv Detail & Related papers (2021-09-30T19:19:09Z)
- Backward-Compatible Prediction Updates: A Probabilistic Approach [12.049279991559091]
We formalize the Prediction Update Problem and present an efficient probabilistic approach to solving it.
In extensive experiments on standard classification benchmark data sets, we show that our method outperforms alternative strategies for backward-compatible prediction updates.
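For intuition, a simple confidence-gated baseline (not the paper's probabilistic method) together with the usual "negative flip" metric for backward compatibility might look like:

```python
# Only switch to the updated model's prediction when it is more confident
# than the old one, and track samples the old model got right but the
# combined predictor now gets wrong ("negative flips").
import numpy as np

def gated_update(old_probs, new_probs):
    old_conf, new_conf = old_probs.max(1), new_probs.max(1)
    use_new = new_conf > old_conf
    return np.where(use_new, new_probs.argmax(1), old_probs.argmax(1))

def negative_flip_rate(old_pred, new_pred, labels):
    flips = (old_pred == labels) & (new_pred != labels)
    return flips.mean()
```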
arXiv Detail & Related papers (2021-07-02T13:05:31Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
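A minimal sketch of that tempering step, with the detection of overconfident regions abstracted into a per-sample weight w (our simplification):

```python
# Blend predicted class distributions toward the label prior, raising their
# entropy most where confidence is judged unjustified.
import numpy as np

def temper_with_prior(probs, prior, w):
    """
    probs: (n, k) predicted class probabilities
    prior: (k,)   marginal label distribution
    w:     (n,)   in [0, 1], larger where confidence is unjustified
    """
    return (1.0 - w)[:, None] * probs + w[:, None] * prior[None, :]
```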
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Bayes DistNet -- A Robust Neural Network for Algorithm Runtime
Distribution Predictions [1.8275108630751844]
Randomized algorithms are used in many state-of-the-art solvers for constraint satisfaction problems (CSP) and Boolean satisfiability (SAT) problems.
Previous state-of-the-art methods directly try to predict a fixed parametric distribution that the input instance follows.
In contrast, the proposed Bayesian model achieves robust predictive performance in the low-observation setting, while also handling censored observations.
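To see what handling censored observations means concretely, here is a toy censored negative log-likelihood with a deliberately simple exponential stand-in (the paper's Bayesian network replaces this fixed parametric choice):

```python
# Completed runs contribute the density f(t) = rate * exp(-rate * t);
# censored runs only tell us the runtime exceeded the cutoff, so they
# contribute the survival term S(t) = exp(-rate * t).
import numpy as np

def censored_nll(rate, times, censored):
    """times: runtimes; censored[i]=True means the run was cut off at times[i]."""
    times, censored = np.asarray(times), np.asarray(censored)
    obs = ~censored
    ll = np.sum(np.log(rate) - rate * times[obs])
    ll += np.sum(-rate * times[censored])
    return -ll

print(censored_nll(0.5, [1.2, 3.0, 5.0], [False, False, True]))
```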
arXiv Detail & Related papers (2020-12-14T01:15:39Z)
- Predictive Business Process Monitoring via Generative Adversarial Nets:
The Case of Next Event Prediction [0.026249027950824504]
This paper proposes a novel adversarial training framework to address the problem of next event prediction.
It works by pitting one neural network against another in a two-player game, which leads to predictions that are indistinguishable from the ground truth.
It systematically outperforms all baselines in terms of both accuracy and earliness of prediction, despite using a simple network architecture and a naive feature encoding.
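A minimal schematic of the two-player setup (layer sizes and names are placeholders, not the paper's architecture):

```python
# A predictor proposes the next event from a sequence prefix; a discriminator
# tries to tell predicted continuations from real ones.
import torch
import torch.nn as nn

pred = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 8)                         # emits a next-event encoding
disc = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(list(pred.parameters()) + list(head.parameters()))
opt_d = torch.optim.Adam(disc.parameters())
bce = nn.BCEWithLogitsLoss()

prefix = torch.randn(32, 5, 8)                  # toy event-sequence prefixes
real_next = torch.randn(32, 8)                  # toy ground-truth next events

out, _ = pred(prefix)
fake_next = head(out[:, -1])
# Discriminator step: real -> 1, predicted -> 0.
d_loss = bce(disc(real_next), torch.ones(32, 1)) + \
         bce(disc(fake_next.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
# Generator step: make predictions indistinguishable from real events.
g_loss = bce(disc(fake_next), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```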
arXiv Detail & Related papers (2020-03-25T08:31:28Z)