Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds
- URL: http://arxiv.org/abs/2110.13116v1
- Date: Mon, 25 Oct 2021 17:14:20 GMT
- Title: Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds
- Authors: Antonios Antoniadis, Christian Coester, Marek Eliáš, Adam Polak, Bertrand Simon
- Abstract summary: We study the online problem of minimizing power consumption in systems with multiple power-saving states.
An algorithm has to choose between power-saving states of different energy consumption and wake-up costs.
We develop a learning-augmented online algorithm that makes decisions based on (potentially inaccurate) predicted lengths of the idle periods.
- Score: 38.80523710193321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the online problem of minimizing power consumption in systems with
multiple power-saving states. During idle periods of unknown lengths, an
algorithm has to choose between power-saving states of different energy
consumption and wake-up costs. We develop a learning-augmented online algorithm
that makes decisions based on (potentially inaccurate) predicted lengths of the
idle periods. The algorithm's performance is near-optimal when predictions are
accurate and degrades gracefully with increasing prediction error, with a
worst-case guarantee almost identical to the optimal classical online algorithm
for the problem. A key ingredient in our approach is a new algorithm for the
online ski rental problem in the learning-augmented setting with tight
dependence on the prediction error. We support our theoretical findings with
experiments.
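To make the "key ingredient" concrete, here is a minimal sketch of a deterministic learning-augmented ski rental strategy in the style of Purohit et al. (the classical scheme whose error dependence this paper tightens) — it is not the paper's own algorithm, and the function names and the trade-off parameter `lam` are illustrative choices:

```python
import math

def buy_day(buy_cost: int, predicted_days: int, lam: float) -> int:
    """Day on which to stop renting and buy, given a predicted period length.

    lam in (0, 1] trades consistency (trust an accurate prediction)
    against robustness (worst-case competitive guarantee).
    """
    if predicted_days >= buy_cost:
        # Prediction says the period is long: buy early, after ~lam*b rent days.
        return math.ceil(lam * buy_cost)
    # Prediction says the period is short: delay buying to hedge against error.
    return math.ceil(buy_cost / lam)

def algo_cost(actual_days: int, buy_cost: int, predicted_days: int, lam: float) -> int:
    """Total cost paid online: rent until buy_day, then buy (if still skiing)."""
    d = buy_day(buy_cost, predicted_days, lam)
    if actual_days < d:
        return actual_days            # rented every day, never bought
    return (d - 1) + buy_cost         # rented d-1 days, then bought

def opt_cost(actual_days: int, buy_cost: int) -> int:
    """Offline optimum: rent throughout, or buy on day one."""
    return min(actual_days, buy_cost)
```

With an accurate prediction this strategy pays at most (1 + lam) times the optimum; with an arbitrarily wrong prediction it still pays at most (1 + 1/lam) times the optimum, mirroring the graceful degradation described in the abstract.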
Related papers
- Improving Online Algorithms via ML Predictions [19.03466073202238]
We consider two classical problems, ski rental and non-clairvoyant job scheduling, and obtain new online algorithms that use predictions to make their decisions.
These algorithms are oblivious to the performance of the predictor, improve with better predictions, but do not degrade much if the predictions are poor.
arXiv Detail & Related papers (2024-07-25T02:17:53Z)
- Improved Learning-Augmented Algorithms for the Multi-Option Ski Rental Problem via Best-Possible Competitive Analysis [0.1529342790344802]
We present improved learning-augmented algorithms for the multi-option ski rental problem.
Our algorithm is based on a new, provably best-possible randomized competitive algorithm for the problem.
arXiv Detail & Related papers (2023-02-14T05:05:03Z)
- Learning to Control under Time-Varying Environment [18.48729114775298]
This paper investigates the problem of regret in linear time-varying (LTV) dynamical systems.
We propose the first computationally tractable online algorithm with regret guarantees.
arXiv Detail & Related papers (2022-06-06T11:40:46Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Online estimation and control with optimal pathlength regret [52.28457815067461]
A natural goal when designing online learning algorithms is to bound the regret of the algorithm in terms of the temporal variation of the input sequence.
Data-dependent "pathlength" regret bounds have recently been obtained for a wide variety of online learning problems, including OCO and bandits.
arXiv Detail & Related papers (2021-10-24T22:43:15Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- The Primal-Dual method for Learning Augmented Algorithms [10.2730668356857]
We extend the primal-dual method for online algorithms to incorporate predictions that advise the online algorithm about the next action to take.
We show that our algorithms outperform any online algorithm when the prediction is accurate while maintaining good guarantees when the prediction is misleading.
arXiv Detail & Related papers (2020-10-22T11:58:47Z)
- Learning Augmented Energy Minimization via Speed Scaling [11.47280189685449]
We study a variant of the classic online speed scaling problem, in which machine learning predictions about the future can be integrated naturally.
Inspired by recent work on learning-augmented online algorithms, we propose an algorithm which incorporates predictions in a black-box manner and outperforms any online algorithm if the accuracy is high.
arXiv Detail & Related papers (2020-10-22T11:58:01Z)
- Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms [85.97516436641533]
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions.
The goal is to design algorithms that are both consistent and robust.
We provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions.
arXiv Detail & Related papers (2020-10-22T04:51:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.