Sorting and Hypergraph Orientation under Uncertainty with Predictions
- URL: http://arxiv.org/abs/2305.09245v1
- Date: Tue, 16 May 2023 07:52:08 GMT
- Title: Sorting and Hypergraph Orientation under Uncertainty with Predictions
- Authors: Thomas Erlebach, Murilo Santos de Lima, Nicole Megow, Jens Schlöter
- Abstract summary: We study learning-augmented algorithms for sorting and hypergraph orientation under uncertainty.
Our algorithms provide improved performance guarantees for accurate predictions while maintaining worst-case guarantees that are best possible without predictions.
- Score: 0.45880283710344055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning-augmented algorithms have been attracting increasing interest, but
have only recently been considered in the setting of explorable uncertainty
where precise values of uncertain input elements can be obtained by a query and
the goal is to minimize the number of queries needed to solve a problem. We
study learning-augmented algorithms for sorting and hypergraph orientation
under uncertainty, assuming access to untrusted predictions for the uncertain
values. Our algorithms provide improved performance guarantees for accurate
predictions while maintaining worst-case guarantees that are best possible
without predictions. For hypergraph orientation, for any $\gamma \geq 2$, we
give an algorithm that achieves a competitive ratio of $1+1/\gamma$ for correct
predictions and $\gamma$ for arbitrarily wrong predictions. For sorting, we
achieve an optimal solution for accurate predictions while still being
$2$-competitive for arbitrarily wrong predictions. These tradeoffs are the best
possible. We also consider different error metrics and show that the
performance of our algorithms degrades smoothly with the prediction error in
all the cases where this is possible.
Related papers
- Fair Secretaries with Unfair Predictions [12.756552522270198]
We show that an algorithm can have zero probability of accepting the best candidate, which we deem unfair, despite promising to accept a candidate whose expected value is at least $\max\{\Omega(1),\, 1 - O(\epsilon)\}$ times the optimal value, where $\epsilon$ is the prediction error.
Our algorithm and analysis are based on a new "pegging" idea that diverges from existing works and simplifies/unifies some of their results.
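For background, the classical $1/e$ secretary rule that fairness is measured against here can be sketched as follows. This is the textbook baseline, not the paper's fair or prediction-augmented algorithm, and the function name is illustrative:

```python
import math

def secretary(values):
    """Classical 1/e secretary rule: observe the first ~n/e candidates,
    then accept the first candidate beating everything seen so far.
    Illustrative baseline sketch only."""
    n = len(values)
    k = max(1, int(n / math.e))      # length of the observation phase
    best_seen = max(values[:k])      # benchmark from the observed prefix
    for v in values[k:]:
        if v > best_seen:
            return v                 # first candidate beating the benchmark
    return values[-1]                # forced to take the last candidate
```

Note the unfairness the paper highlights: if the best candidate appears during the observation phase, this rule rejects it with certainty.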
arXiv Detail & Related papers (2024-11-15T00:23:59Z)
- Competitive Algorithms for Online Knapsack with Succinct Predictions [16.793099279933163]
In the online knapsack problem, the goal is to pack items arriving online with different values and weights into a capacity-limited knapsack to maximize the total value of the accepted items.
We study learning-augmented algorithms for this problem, which aim to use machine-learned predictions to move beyond pessimistic worst-case guarantees.
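As context for the worst-case baseline, the classical density-threshold rule for online knapsack can be sketched in Python. This assumes value densities lie in a known range $[L, U]$ and item weights are small relative to capacity; the threshold function below is the standard exponential one from the online-knapsack literature, and the paper's predictions-augmented algorithm is different:

```python
import math

def make_threshold(L, U):
    """Exponential acceptance threshold for online knapsack, assuming
    all value densities lie in [L, U]. phi(0) = L/e and phi(1) = U."""
    def phi(z):
        return (U * math.e / L) ** z * (L / math.e)
    return phi

def online_knapsack(items, capacity, L, U):
    """Greedy threshold rule: accept an item iff it fits and its value
    density exceeds phi(z) at the current utilization z. Illustrative
    baseline sketch, not the paper's learning-augmented algorithm."""
    phi = make_threshold(L, U)
    used = 0.0
    total = 0.0
    for value, weight in items:
        z = used / capacity
        if weight <= capacity - used and value / weight >= phi(z):
            used += weight
            total += value
    return total
```

The threshold rises as the knapsack fills, so the rule becomes pickier exactly when remaining capacity is scarce.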
arXiv Detail & Related papers (2024-06-26T20:38:00Z)
- Non-clairvoyant Scheduling with Partial Predictions [17.387787159892287]
We present a learning-augmented algorithm satisfying the robustness, consistency, and smoothness criteria.
We also present a novel tradeoff between consistency and smoothness inherent in the scenario with a restricted number of predictions.
arXiv Detail & Related papers (2024-05-02T05:29:22Z)
- Mixing predictions for online metric algorithms [34.849039387367455]
We design algorithms that combine predictions and are competitive against such dynamic combinations.
Our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time.
An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
arXiv Detail & Related papers (2023-04-04T13:18:00Z)
- Streaming Algorithms for Learning with Experts: Deterministic Versus Robust [62.98860182111096]
In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or time steps).
The goal is to incur minimum total cost relative to the best expert in the set.
We show a space lower bound of $\widetilde{\Omega}\left(\frac{nM}{RT}\right)$ for any deterministic algorithm that achieves regret $R$ when the best expert makes $M$ mistakes.
arXiv Detail & Related papers (2023-03-03T04:39:53Z)
- Algorithms with Prediction Portfolios [23.703372221079306]
We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling.
For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
arXiv Detail & Related papers (2022-10-22T12:58:07Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximate valid population coverage and near-optimal efficiency within class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
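For context, the standard split-conformal recipe that this line of work generalizes fits in a few lines of Python. The function names are illustrative, and the paper's contribution (learnable, differentiable parameters) is not shown here:

```python
import math

def split_conformal_quantile(residuals, alpha):
    """Split conformal prediction: from held-out calibration residuals
    |y_i - f(x_i)|, compute the interval radius qhat as the
    ceil((n+1)(1-alpha))-th smallest residual. Clamping to n is a
    pragmatic choice when (n+1)(1-alpha) > n (coverage may then fall
    short of the nominal level). Illustrative sketch only."""
    n = len(residuals)
    k = min(math.ceil((n + 1) * (1 - alpha)), n)
    return sorted(residuals)[k - 1]

def prediction_interval(point_pred, qhat):
    """Interval [f(x) - qhat, f(x) + qhat]; under exchangeability it has
    marginal coverage of roughly 1 - alpha."""
    return (point_pred - qhat, point_pred + qhat)
```

A usage pattern: fit any point predictor on a training split, compute residuals on a disjoint calibration split, then wrap test predictions with `prediction_interval`.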
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- Malicious Experts versus the multiplicative weights algorithm in online prediction [85.62472761361107]
We consider a prediction problem with two experts and a forecaster.
We assume that one of the experts is honest and makes a correct prediction with probability $\mu$ at each round.
The other is malicious: it knows the true outcome at each round and makes predictions so as to maximize the forecaster's loss.
arXiv Detail & Related papers (2020-03-18T20:12:08Z)
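The multiplicative weights forecaster referenced here can be sketched as follows. This is a minimal deterministic weighted-majority variant with a hypothetical penalty rate `eta`, for binary predictions; the paper analyzes a randomized version against the adversarial expert:

```python
def multiplicative_weights(expert_preds, outcomes, eta=0.5):
    """Deterministic weighted-majority forecaster.

    expert_preds[t][i]: expert i's prediction in {0, 1} at round t
    outcomes[t]: the true outcome at round t
    Returns the forecaster's total number of mistakes.
    Illustrative sketch, not the paper's randomized algorithm."""
    n = len(expert_preds[0])
    w = [1.0] * n                    # one weight per expert
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        # Predict by weighted-majority vote (ties go to 1).
        vote_for_one = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = 1 if vote_for_one >= sum(w) / 2 else 0
        if guess != y:
            mistakes += 1
        # Multiplicatively penalize every expert that erred.
        w = [wi * (1 - eta) if p != y else wi for wi, p in zip(w, preds)]
    return mistakes
```

In the two-expert setting of this paper, the honest expert's weight stays high while the malicious expert's weight decays geometrically, so its influence on the vote vanishes quickly.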
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.