Learning Augmented Energy Minimization via Speed Scaling
- URL: http://arxiv.org/abs/2010.11629v1
- Date: Thu, 22 Oct 2020 11:58:01 GMT
- Title: Learning Augmented Energy Minimization via Speed Scaling
- Authors: Étienne Bamas, Andreas Maggiori, Lars Rohwedder, Ola Svensson
- Abstract summary: We study a variant of the classic online speed scaling problem, in which machine learning predictions about the future can be integrated naturally.
Inspired by recent work on learning-augmented online algorithms, we propose an algorithm which incorporates predictions in a black-box manner and outperforms any online algorithm if the accuracy is high.
- Score: 11.47280189685449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As power management has become a primary concern in modern data centers,
computing resources are being scaled dynamically to minimize energy
consumption. We initiate the study of a variant of the classic online speed
scaling problem, in which machine learning predictions about the future can be
integrated naturally. Inspired by recent work on learning-augmented online
algorithms, we propose an algorithm which incorporates predictions in a
black-box manner and outperforms any online algorithm if the accuracy is high,
yet maintains provable guarantees if the prediction is very inaccurate. We
provide both theoretical and experimental evidence to support our claims.
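To make the setting concrete (the listing contains no code; the following is only a rough sketch under standard assumptions, not the authors' algorithm): in deadline-based speed scaling, each job arrives at a release time with a deadline and a workload, and running the machine at speed s(t) consumes energy proportional to s(t)^alpha integrated over time, for some constant alpha > 1 (typically 2 or 3). The classic online Average Rate (AVR) heuristic runs at the summed density workload / (deadline - release) of the currently active jobs. One naive way to fold in a predicted job set, shown below, is to compute the same profile for the prediction and take the pointwise maximum with the online profile; the constant ALPHA and all function names are illustrative.
```python
# Illustrative sketch only (not the algorithm from the paper): deadline-based
# speed scaling with a predicted job set folded in by taking the pointwise
# maximum of two Average Rate (AVR) speed profiles.

ALPHA = 3.0  # energy per time unit is speed**ALPHA; alpha in [2, 3] is typical


def avr_speed(jobs, t):
    """Average Rate heuristic: at time t, run at the summed density
    workload / (deadline - release) of all jobs active at t.
    Each job is a (release, deadline, workload) tuple."""
    return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)


def combined_speed(online_jobs, predicted_jobs, t):
    """Naive robustification: never run slower than the online AVR profile
    (so, with earliest-deadline-first, every released job still meets its
    deadline), but pre-allocate speed for the predicted jobs as well."""
    return max(avr_speed(online_jobs, t), avr_speed(predicted_jobs, t))


def energy(speed_fn, horizon, dt=0.01):
    """Riemann-sum approximation of the energy integral of speed(t)**ALPHA."""
    steps = int(horizon / dt)
    return sum(speed_fn(i * dt) ** ALPHA * dt for i in range(steps))


if __name__ == "__main__":
    predicted = [(0.0, 4.0, 2.0), (1.0, 3.0, 1.0)]  # (release, deadline, work)
    actual = [(0.0, 4.0, 2.0), (1.5, 3.0, 1.0)]     # prediction slightly off
    e = energy(lambda t: combined_speed(actual, predicted, t), horizon=4.0)
    print(f"energy of the combined schedule: {e:.3f}")
```
Running at least as fast as the online AVR profile keeps every released job feasible under earliest-deadline-first scheduling, and an accurate prediction lets the schedule spread work out in advance; the paper's algorithm instead incorporates the prediction in a black-box manner, with provable guarantees both when the prediction is accurate and when it is not.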
Related papers
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose QuEE, a more general dynamic network that combines both quantization and early exiting.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
arXiv Detail & Related papers (2024-06-20T15:25:13Z) - Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z) - EANet: Expert Attention Network for Online Trajectory Prediction [5.600280639034753]
Expert Attention Network is a complete online learning framework for trajectory prediction.
We introduce expert attention, which adjusts the weights of network layers at different depths, preventing slow model updates caused by the gradient problem.
Furthermore, we propose a short-term motion trend kernel function which is sensitive to scenario change, allowing the model to respond quickly.
arXiv Detail & Related papers (2023-09-11T07:09:40Z) - Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z) - Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have the desired properties, admit a natural error measure, and yield algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z) - Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
arXiv Detail & Related papers (2022-02-18T17:25:43Z) - Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z) - A Novel Prediction Setup for Online Speed-Scaling [3.3440413258080577]
It is fundamental to incorporate energy considerations when designing (scheduling) algorithms.
This paper attempts to obtain the best of both worlds for the classical deadline-based online speed-scaling problem.
arXiv Detail & Related papers (2021-12-06T14:46:20Z) - Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds [38.80523710193321]
We study the online problem of minimizing power consumption in systems with multiple power-saving states.
An algorithm has to choose between power-saving states with different energy consumption and wake-up costs.
We develop a learning-augmented online algorithm that makes decisions based on (potentially inaccurate) predicted lengths of the idle periods (a single sleep-state illustration is sketched after this list).
arXiv Detail & Related papers (2021-10-25T17:14:20Z) - Online Capacity Scaling Augmented With Unreliable Machine Learning Predictions [0.0]
We propose a novel algorithm, called Adaptive Balanced Capacity Scaling (ABCS), that has access to black-box machine learning predictions.
ABCS aims to adapt to the predictions and is also robust against unpredictable surges in the workload.
In particular, we prove that ABCS is $(1+\varepsilon)$-competitive if the predictions are accurate, and yet, it has a uniformly bounded competitive ratio even if the predictions are completely inaccurate.
arXiv Detail & Related papers (2021-01-28T18:14:18Z)
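Following up on the dynamic power management entry above: its single sleep-state special case reduces to ski rental, and the sketch below picks a power-down threshold from a possibly wrong predicted idle length, in the spirit of prediction-augmented ski rental. This is not the multi-state algorithm of that paper; the wake-up cost BETA, the trust parameter lam, and all function names are illustrative.
```python
# Illustrative sketch only: the single sleep-state special case of dynamic
# power management, which reduces to ski rental. Staying awake costs 1 per
# time unit of idleness; waking from sleep costs BETA. A predicted idle
# length shifts the power-down threshold (this is not the multi-state
# algorithm of the paper; BETA and lam are illustrative).


def sleep_threshold(predicted_idle, beta, lam=0.5):
    """Pick the time at which to power down during an idle period.

    lam in (0, 1]: smaller lam trusts the prediction more. If the prediction
    says the idle period is long, sleep early (lam * beta); otherwise sleep
    late (beta / lam), hedging against a wrong prediction."""
    return lam * beta if predicted_idle >= beta else beta / lam


def cost(actual_idle, threshold, beta):
    """Energy cost of an idle period when we power down at `threshold`:
    the awake time, plus the wake-up cost if we actually went to sleep."""
    if actual_idle <= threshold:
        return actual_idle          # period ended before we slept
    return threshold + beta         # slept at `threshold`, then had to wake up


if __name__ == "__main__":
    beta = 10.0
    for predicted, actual in [(30.0, 28.0), (30.0, 2.0), (3.0, 40.0)]:
        thr = sleep_threshold(predicted, beta, lam=0.5)
        opt = min(actual, beta)     # offline optimum for this idle period
        print(f"pred={predicted:5.1f} actual={actual:5.1f} "
              f"alg={cost(actual, thr, beta):6.1f} opt={opt:5.1f}")
```
Under this threshold rule, a correct prediction costs at most (1 + lam) times the offline optimum for the period, while even an arbitrarily wrong prediction costs at most (1 + 1/lam) times it, the usual consistency-robustness trade-off that such learning-augmented algorithms refine.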
This list is automatically generated from the titles and abstracts of the papers in this site.