Pareto-Optimal Learning-Augmented Algorithms for Online Conversion Problems
- URL: http://arxiv.org/abs/2109.01556v1
- Date: Fri, 3 Sep 2021 14:27:33 GMT
- Title: Pareto-Optimal Learning-Augmented Algorithms for Online Conversion Problems
- Authors: Bo Sun, Russell Lee, Mohammad Hajiesmaili, Adam Wierman, Danny H.K. Tsang
- Abstract summary: We design competitive algorithms for online conversion problems with the goal of improving the competitive ratio when predictions are accurate.
We also guarantee a worst-case competitive ratio regardless of the prediction quality.
We demonstrate the performance of OTA using numerical experiments on Bitcoin conversion.
- Score: 13.593621603504847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper leverages machine-learned predictions to design competitive
algorithms for online conversion problems with the goal of improving the
competitive ratio when predictions are accurate (i.e., consistency), while also
guaranteeing a worst-case competitive ratio regardless of the prediction
quality (i.e., robustness). We unify the algorithmic design of both integral
and fractional conversion problems, which are also known as the 1-max-search
and one-way trading problems, into a class of online threshold-based algorithms
(OTA). By incorporating predictions into the design of OTA, we achieve the
Pareto-optimal trade-off of consistency and robustness, i.e., no online
algorithm can achieve a better consistency guarantee for a given robustness
guarantee. We demonstrate the performance of OTA using numerical experiments on
Bitcoin conversion.
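To make the threshold-based design concrete, here is a minimal sketch of a prediction-augmented rule for the integral case (1-max-search). The price bounds `L` and `U`, the prediction `pred` of the maximum price, the robustness target `gamma`, and the clamped-threshold form are all illustrative assumptions; the paper derives the exact Pareto-optimal thresholds, which this sketch only mirrors in spirit.

```python
import math

# Minimal sketch of a prediction-augmented threshold rule for 1-max-search.
# Assumption (not the paper's exact construction): prices lie in [L, U],
# `pred` predicts the maximum price, and `gamma` >= sqrt(U/L) is the
# desired worst-case (robustness) target.
def threshold(L, U, pred, gamma):
    assert gamma >= math.sqrt(U / L), "gamma is below the best worst-case ratio"
    # Any threshold in [U/gamma, gamma*L] is gamma-robust:
    #  - accepting at >= U/gamma caps the ratio even if the maximum later hits U;
    #  - rejecting everything below gamma*L caps the ratio even if we are
    #    forced to liquidate at the floor price L.
    # Clamping the prediction into that interval lets an accurate prediction
    # steer the threshold without ever breaking the robustness guarantee.
    return min(max(pred, U / gamma), gamma * L)

def one_max_search(prices, L, U, pred, gamma):
    """Sell the single unit at the first price that meets the threshold;
    if no price does, sell at the final price (compulsory trade)."""
    phi = threshold(L, U, pred, gamma)
    for p in prices:
        if p >= phi:
            return p
    return prices[-1]
```

For example, with L=1, U=16, and gamma=4 (the smallest feasible robustness, $\sqrt{U/L}$), the interval collapses to the single threshold 4 and the prediction has no influence; any larger gamma widens the interval and lets an accurate prediction improve the realized ratio, which is exactly the consistency-robustness trade-off the paper optimizes.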
Related papers
- Overcoming Brittleness in Pareto-Optimal Learning-Augmented Algorithms [6.131022957085439]
We propose a new framework in which the smoothness in the performance of the algorithm is enforced by means of a user-specified profile.
We apply this new approach to a well-studied online problem, namely the one-way trading problem.
arXiv Detail & Related papers (2024-08-07T23:15:21Z)
- Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm; a sketch of the plain NN rule follows this entry.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
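For reference, a minimal sketch of the plain NN exploration rule is below. The adjacency-dict graph representation and the hypothetical `hint` argument (a predicted visit order used only as a tie-breaker) are illustrative assumptions; the paper's integration of predictions is more careful than this.

```python
# Minimal sketch of Nearest Neighbor (NN) online graph exploration on a
# weighted graph given as an adjacency dict {u: {v: weight, ...}}. For
# simplicity, distances to unexplored vertices are taken along direct
# edges; true online exploration measures them along already-traversed
# paths. `hint` is a hypothetical predicted visit order, used here only
# to break ties -- the paper's prediction integration is more involved.
def nn_explore(graph, start, hint=()):
    rank = {v: i for i, v in enumerate(hint)}
    order, visited, current = [start], {start}, start
    while len(visited) < len(graph):
        frontier = [v for v in graph[current] if v not in visited]
        if not frontier:
            break  # not fully reachable via direct edges in this sketch
        # Move to the cheapest unexplored neighbor; the hint breaks ties.
        current = min(frontier,
                      key=lambda v: (graph[current][v], rank.get(v, len(rank))))
        visited.add(current)
        order.append(current)
    return order
```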
- Adversarial Deep Learning for Online Resource Allocation [12.118811903399951]
We use deep neural networks to learn an online algorithm for a resource allocation and pricing problem from scratch.
Our work is the first to use deep neural networks to design an online algorithm from the perspective of worst-case performance guarantees.
arXiv Detail & Related papers (2021-11-19T15:48:43Z)
- Instantaneous Grammatical Error Correction with Shallow Aggressive Decoding [57.08875260900373]
We propose Shallow Aggressive Decoding (SAD) to improve the online inference efficiency of the Transformer for instantaneous Grammatical Error Correction (GEC).
SAD aggressively decodes as many tokens as possible in parallel, instead of decoding only one token per step, to improve computational parallelism.
Experiments on both English and Chinese GEC benchmarks show that aggressive decoding can yield the same predictions but with a significant speedup for online inference; a toy sketch of the decoding loop follows this entry.
arXiv Detail & Related papers (2021-06-09T10:30:59Z)
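A toy version of the aggressive decoding loop might look like the following. The `model.predict_next(prefix, k)` interface and the use of the (possibly erroneous) input itself as the draft are illustrative assumptions, not the paper's exact procedure or code.

```python
# Toy sketch of aggressive decoding for GEC: since the corrected output
# mostly copies the input, use the input tokens as a draft, score all
# positions in one parallel pass, keep the longest agreeing prefix, and
# re-decode from the first mismatch. `model.predict_next(prefix, k)` is a
# hypothetical interface returning the model's greedy next tokens for k
# positions in parallel; it is not from the paper's code.
def aggressive_decode(model, src_tokens, max_len=256):
    out = []
    draft = list(src_tokens)  # assume output ~= input
    while len(out) < max_len and draft:
        # One parallel pass scores every remaining draft position at once.
        preds = model.predict_next(out, len(draft))
        agree = 0
        while agree < len(draft) and preds[agree] == draft[agree]:
            agree += 1
        out.extend(draft[:agree])
        if agree == len(draft):
            break  # whole draft verified; decoding is done
        # First mismatch: accept the model's token, then continue
        # aggressively with the rest of the draft.
        out.append(preds[agree])
        draft = draft[agree + 1:]
        if out[-1] == "<eos>":
            break
    return out
```

The speedup comes from the verification structure: when the model agrees with long stretches of the draft, many tokens are committed per forward pass, yet the accepted output is token-for-token what greedy one-at-a-time decoding would have produced.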
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves, for any $k$, an almost optimal consistency-robustness trade-off.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms [85.97516436641533]
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions.
The goal is to design algorithms that are both consistent and robust.
We provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions; a classical example of such a trade-off is recalled after this entry.
arXiv Detail & Related papers (2020-10-22T04:51:01Z)
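For a concrete sense of what such a trade-off looks like, recall the classical deterministic ski-rental result from the predictions literature (an illustration of the genre, not a restatement of this paper's new bounds): with purchase cost $b$ and a trust parameter $\lambda \in (0, 1]$, a strategy that buys on day $\lceil \lambda b \rceil$ when the prediction says skiing will outlast $b$, and on day $\lceil b/\lambda \rceil$ otherwise, achieves

$$\text{consistency} \le 1 + \lambda, \qquad \text{robustness} \le 1 + \tfrac{1}{\lambda},$$

and lower bounds of the kind this paper proves show that no deterministic algorithm can improve on both axes simultaneously.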
- Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference [119.19779637025444]
Deep networks have recently been suggested to face a trade-off between accuracy (on clean natural images) and robustness (on adversarially perturbed images).
This paper studies multi-exit networks with input-adaptive inference, showing their strong promise for reaching a "sweet point" in co-optimizing model accuracy, robustness, and efficiency; a toy early-exit loop is sketched after this entry.
arXiv Detail & Related papers (2020-02-24T00:40:22Z)
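To illustrate the input-adaptive mechanism, here is a toy early-exit inference loop. The `backbone_stages`/`exit_heads` decomposition, the confidence threshold, and the single-example batching are illustrative assumptions rather than the paper's architecture.

```python
import torch

def adaptive_inference(x, backbone_stages, exit_heads, threshold=0.9):
    """Toy multi-exit inference (batch size 1): run the backbone stage by
    stage and return at the first intermediate classifier whose softmax
    confidence clears `threshold`. `backbone_stages` and `exit_heads` are
    matched lists of torch modules -- hypothetical names, not the paper's."""
    h = x
    for stage, head in zip(backbone_stages, exit_heads):
        h = stage(h)
        probs = torch.softmax(head(h), dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= threshold:  # easy input: exit early, save compute
            return pred.item(), conf.item()
    return pred.item(), conf.item()  # hard input: fall through to final exit
```

The design intuition is that easy inputs exit at shallow classifiers, spending less compute, while harder (including adversarially perturbed) inputs propagate deeper, which is how accuracy, robustness, and efficiency can be traded off per input.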
This list is automatically generated from the titles and abstracts of the papers on this site.