CarbonClipper: Optimal Algorithms for Carbon-Aware Spatiotemporal Workload Management
- URL: http://arxiv.org/abs/2408.07831v1
- Date: Wed, 14 Aug 2024 22:08:06 GMT
- Title: CarbonClipper: Optimal Algorithms for Carbon-Aware Spatiotemporal Workload Management
- Authors: Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, Prashant Shenoy
- Abstract summary: Carbon-aware workload management seeks to address the growing environmental impact of data centers.
$\mathsf{SOAD}$ formalizes the open problem of combining general metrics and deadline constraints in the online algorithms literature.
${\rm C{\scriptsize ARBON}C{\scriptsize LIPPER}}$ is a learning-augmented algorithm that takes advantage of predictions.
- Score: 11.029788598491077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study carbon-aware spatiotemporal workload management, which seeks to address the growing environmental impact of data centers. We formalize this as an online problem called spatiotemporal online allocation with deadline constraints ($\mathsf{SOAD}$), in which an online player completes a workload (e.g., a batch compute job) by moving and scheduling the workload across a network subject to a deadline $T$. At each time step, a service cost function is revealed, representing, e.g., the carbon intensity of servicing a workload at each location, and the player must irrevocably decide the current allocation. Furthermore, whenever the player moves the allocation, it incurs a movement cost defined by a metric space $(X,d)$ that captures, e.g., the overhead of migrating a compute job. $\mathsf{SOAD}$ formalizes the open problem of combining general metrics and deadline constraints in the online algorithms literature, unifying problems such as metrical task systems and online search. We propose a competitive algorithm for $\mathsf{SOAD}$ along with a matching lower bound that proves it is optimal. Our main algorithm, ${\rm C{\scriptsize ARBON}C{\scriptsize LIPPER}}$, is a learning-augmented algorithm that takes advantage of predictions (e.g., carbon intensity forecasts) and achieves an optimal consistency-robustness trade-off. We evaluate our proposed algorithms for carbon-aware spatiotemporal workload management on a simulated global data center network, showing that ${\rm C{\scriptsize ARBON}C{\scriptsize LIPPER}}$ significantly improves performance compared to baseline methods and delivers meaningful carbon reductions.
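To make the online protocol concrete, here is a minimal sketch of the SOAD cost accounting: at each step a service cost vector is revealed, the player commits to an allocation over locations, and changing the allocation is charged through a metric movement cost. The greedy policy and the scaled L1 movement cost below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soad_total_cost(service_costs, allocations, dist):
    """Total cost of an allocation sequence under the SOAD protocol:
    service cost at each step plus metric movement cost between steps.

    service_costs: list of length-T arrays, c_t[i] = cost of serving at location i
    allocations:   list of length-T allocation vectors x_t over locations
    dist:          callable d(x, y) giving the movement cost between allocations
    """
    total = 0.0
    for t, (c_t, x_t) in enumerate(zip(service_costs, allocations)):
        total += float(c_t @ x_t)                    # service (e.g., carbon) cost
        if t > 0:
            total += dist(allocations[t - 1], x_t)   # migration overhead
    return total

# Illustrative example: 3 data centers, T = 4 steps, unit workload placed
# somewhere at every step.
rng = np.random.default_rng(0)
T, n = 4, 3
service_costs = [rng.uniform(0.1, 1.0, size=n) for _ in range(T)]

# A naive "follow the cheapest location" policy (NOT CarbonClipper).
allocations = []
for c_t in service_costs:
    x_t = np.zeros(n)
    x_t[np.argmin(c_t)] = 1.0
    allocations.append(x_t)

# Movement cost: a scaled L1 distance as a stand-in for the metric d.
l1 = lambda x, y: 0.5 * np.abs(x - y).sum()
print("total cost:", soad_total_cost(service_costs, allocations, l1))
```

Under the deadline constraint of $\mathsf{SOAD}$, a real algorithm must additionally guarantee that the whole workload is completed by time $T$, which the greedy policy above ignores.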
Related papers
- LACS: Learning-Augmented Algorithms for Carbon-Aware Resource Scaling with Uncertain Demand [1.423958951481749]
This paper studies the online carbon-aware resource scaling problem with unknown job lengths (OCSU).
We propose LACS, a theoretically robust learning-augmented algorithm that solves OCSU.
LACS achieves a 32% reduction in carbon footprint compared to the deadline-aware carbon-agnostic execution of the job.
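As a rough illustration of the OCSU setting (not the LACS algorithm itself), the sketch below compares carbon-agnostic execution, which runs the job at full speed immediately, against a simple threshold policy that scales allocated cores with carbon intensity while respecting a deadline. The intensity trace, threshold rule, and scaling model are assumptions for illustration.

```python
import numpy as np

def carbon_cost(cores_per_slot, intensity):
    # Emissions proportional to cores used times carbon intensity in each slot.
    return float(np.dot(cores_per_slot, intensity))

rng = np.random.default_rng(1)
T, work = 24, 40                      # 24 hourly slots, 40 core-hours of work
intensity = rng.uniform(100, 500, T)  # gCO2/kWh, synthetic trace
max_cores, deadline = 8, T

# Carbon-agnostic: run at full speed from t=0 until the job finishes.
agnostic = np.zeros(T)
remaining = work
for t in range(T):
    agnostic[t] = min(max_cores, remaining)
    remaining -= agnostic[t]
    if remaining <= 0:
        break

# Simple carbon-aware threshold policy: use more cores in low-carbon slots,
# but always run fast enough to still meet the deadline.
aware = np.zeros(T)
remaining = work
for t in range(T):
    slots_left = deadline - t
    must_run = max(0.0, remaining - max_cores * (slots_left - 1))
    want = max_cores if intensity[t] < np.percentile(intensity, 40) else 1
    aware[t] = min(max_cores, remaining, max(must_run, want))
    remaining -= aware[t]

print("agnostic:", carbon_cost(agnostic, intensity))
print("aware:   ", carbon_cost(aware, intensity))
```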
arXiv Detail & Related papers (2024-03-29T04:54:22Z) - A Schedule of Duties in the Cloud Space Using a Modified Salp Swarm Algorithm [0.0]
One of the most important NP-hard problems in the cloud domain is scheduling.
A collective intelligence algorithm, the Salp Swarm Algorithm (SSA), is expanded, improved, and applied to this problem.
Results show that our algorithm generally outperforms the compared algorithms.
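For reference, here is a minimal implementation of the standard Salp Swarm Algorithm update, in which the leader moves around the best-known food source and the followers chain behind it; the paper's modified SSA and its task-scheduling encoding are not reproduced here, and the test objective is an illustrative assumption.

```python
import numpy as np

def ssa_minimize(f, lb, ub, n_salps=30, n_iter=200, seed=0):
    """Standard Salp Swarm Algorithm for box-constrained minimization;
    a baseline sketch, not the paper's modified scheduling variant."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(n_salps, dim))   # salp positions
    food, food_val = X[0].copy(), np.inf           # best solution found so far

    for it in range(n_iter):
        vals = np.array([f(x) for x in X])
        if vals.min() < food_val:
            food_val = float(vals.min())
            food = X[vals.argmin()].copy()

        c1 = 2 * np.exp(-((4 * (it + 1) / n_iter) ** 2))   # exploration/exploitation balance
        for i in range(n_salps):
            if i == 0:                                      # leader follows the food source
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 >= 0.5, food + step, food - step)
            else:                                           # followers chain behind
                X[i] = (X[i] + X[i - 1]) / 2
        X = np.clip(X, lb, ub)
    return food, food_val

# Example: minimize the sphere function in 5 dimensions.
lb, ub = np.full(5, -10.0), np.full(5, 10.0)
best, best_val = ssa_minimize(lambda x: float(np.sum(x ** 2)), lb, ub)
print(best_val)
```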
arXiv Detail & Related papers (2023-09-18T02:48:41Z) - Efficient Methods for Non-stationary Online Learning [67.3300478545554]
We present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.
Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods.
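The sketch below illustrates the one-projection idea in a generic way: several base online-gradient learners (in the non-stationary setting there would be $\mathcal{O}(\log T)$ of them with geometrically spaced step sizes) run without per-learner projections, a multiplicative-weights meta-learner combines them, and only the combined decision is projected onto the feasible set once per round. The box feasible set, surrogate updates, and meta-update here are illustrative assumptions, not the paper's exact reduction.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """The single projection per round onto the feasible set (a box here)."""
    return np.clip(x, lo, hi)

def one_projection_learner(grads, d, etas, eps=0.1):
    """Generic sketch: base online-gradient learners with different step sizes
    run without per-learner projections; a multiplicative-weights meta-learner
    combines them; only the combined decision is projected, once per round."""
    N = len(etas)
    base = np.full((N, d), 0.5)          # base learners' (surrogate) iterates
    logw = np.zeros(N)                   # meta weights over base learners
    decisions = []
    for g in grads:                      # g = gradient of the round's loss
        w = np.exp(logw - logw.max()); w /= w.sum()
        x = project_box(w @ base)        # <-- the only projection this round
        decisions.append(x)
        logw -= eps * (base @ g)         # meta update via linearized losses
        base -= etas[:, None] * g        # base updates: plain gradient steps
    return decisions

rng = np.random.default_rng(2)
d, T = 4, 50
grads = [rng.normal(size=d) for _ in range(T)]
etas = np.array([0.01, 0.05, 0.2])       # geometric grid of step sizes
xs = one_projection_learner(grads, d, etas)
print(xs[-1])
```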
arXiv Detail & Related papers (2023-09-16T07:30:12Z) - Movement Penalized Bayesian Optimization with Application to Wind Energy Systems [84.7485307269572]
Contextual Bayesian optimization (CBO) is a powerful framework for sequential decision-making given side information.
In this setting, the learner receives a context (e.g., weather conditions) at each round and has to choose an action (e.g., turbine parameters).
Standard algorithms assume no cost for switching their decisions at every round, but in many practical applications, there is a cost associated with such changes, which should be minimized.
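A minimal sketch of the switching-cost idea in a contextual Bayesian-optimization loop: the acquisition value is traded off against the cost of moving away from the previous action. The GP model, UCB acquisition, true reward function, and penalty weight below are generic choices for illustration, not the paper's specific algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def movement_penalized_choice(gp, context, candidates, prev_action,
                              beta=2.0, lam=1.0):
    """Pick the action maximizing UCB minus a movement penalty
    lam * |a - a_prev| (a stand-in for the switching cost)."""
    X = np.column_stack([np.full(len(candidates), context), candidates])
    mu, sigma = gp.predict(X, return_std=True)
    score = mu + beta * sigma - lam * np.abs(candidates - prev_action)
    return candidates[int(np.argmax(score))]

# Toy run: reward depends on a context (e.g., wind speed) and an action
# (e.g., a turbine parameter); the true function is an illustrative assumption.
rng = np.random.default_rng(3)
f = lambda c, a: -((a - 0.6 * c) ** 2)
candidates = np.linspace(0.0, 1.0, 101)
X_hist, y_hist, prev = [], [], 0.5
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)

for t in range(20):
    c = rng.uniform(0.0, 1.0)                      # observed context
    if X_hist:
        gp.fit(np.array(X_hist), np.array(y_hist))
        a = movement_penalized_choice(gp, c, candidates, prev)
    else:
        a = prev                                   # no data yet
    y = f(c, a) + 0.01 * rng.normal()
    X_hist.append([c, a]); y_hist.append(y); prev = a
print("last action:", prev)
```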
arXiv Detail & Related papers (2022-10-14T20:19:32Z) - Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration.
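To illustrate the sketching idea, the snippet below compresses each worker's gradient with a (single-row) count sketch of size k before "communication", unsketches it on the server side, and applies an AMSGrad update. The hashing scheme, sizes, and update details are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def make_hashes(d, k, seed=0):
    rng = np.random.default_rng(seed)
    return rng.integers(0, k, size=d), rng.choice([-1.0, 1.0], size=d)

def sketch(g, bucket, sign, k):
    """Compress a length-d gradient into k values (single-row count sketch)."""
    S = np.zeros(k)
    np.add.at(S, bucket, sign * g)
    return S                      # only these k numbers are communicated

def unsketch(S, bucket, sign):
    return sign * S[bucket]       # unbiased coordinate-wise estimate

def amsgrad_step(x, g, m, v, vhat, lr=0.01, b1=0.9, b2=0.99, eps=1e-8):
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    vhat = np.maximum(vhat, v)    # AMSGrad: monotone second-moment estimate
    return x - lr * m / (np.sqrt(vhat) + eps), m, v, vhat

# Toy distributed loop: n workers hold quadratic losses; the server averages
# the unsketched gradients and applies AMSGrad.
rng = np.random.default_rng(4)
d, k, n = 1000, 100, 4
bucket, sign = make_hashes(d, k)
targets = [rng.normal(size=d) for _ in range(n)]
x = np.zeros(d)
m = v = vhat = np.zeros(d)
for step in range(200):
    grads = [x - t for t in targets]                        # worker gradients
    sketches = [sketch(g, bucket, sign, k) for g in grads]  # k values per worker
    g_hat = np.mean([unsketch(S, bucket, sign) for S in sketches], axis=0)
    x, m, v, vhat = amsgrad_step(x, g_hat, m, v, vhat)
print("distance to mean target:", np.linalg.norm(x - np.mean(targets, axis=0)))
```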
arXiv Detail & Related papers (2022-10-14T01:42:05Z) - Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures [49.73889315176884]
Conditional gradient methods (CGM) are widely used in modern machine learning.
Most efforts focus on reducing the number of iterations as a means to reduce the overall running time.
We give the first algorithms for many fundamental optimization problems whose cost per iteration is sublinear in the number of parameters.
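In conditional gradient (Frank-Wolfe) methods over the convex hull of a set of atoms, the per-iteration bottleneck is the linear minimization oracle, which is an inner-product argmax over the atom set; this is exactly where a MaxIP data structure can replace a brute-force scan. The sketch below uses a brute-force oracle and marks that plug-in point; the problem and atom set are illustrative assumptions.

```python
import numpy as np

def lmo(neg_grad, atoms):
    """Linear minimization oracle: the atom maximizing <neg_grad, atom>.
    This exact argmax costs O(#atoms * d); an approximate MaxIP data
    structure is what would make this step sublinear."""
    return atoms[int(np.argmax(atoms @ neg_grad))]

def frank_wolfe(grad_f, atoms, n_iter=200):
    x = atoms[0].copy()                       # start at a vertex
    for t in range(n_iter):
        s = lmo(-grad_f(x), atoms)            # direction-finding step
        gamma = 2.0 / (t + 2.0)               # standard FW step size
        x = (1 - gamma) * x + gamma * s       # stay in the convex hull
    return x

# Toy problem: minimize ||x - b||^2 over the convex hull of random atoms.
rng = np.random.default_rng(5)
d, n_atoms = 20, 500
atoms = rng.normal(size=(n_atoms, d))
b = atoms[:50].mean(axis=0)                   # a point inside the hull
x = frank_wolfe(lambda x: 2 * (x - b), atoms)
print("objective:", float(np.sum((x - b) ** 2)))
```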
arXiv Detail & Related papers (2021-11-30T05:40:14Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress, having no limit cycles, and to find an appropriate stationary point within a polynomial number of gradient ascent steps.
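As a generic illustration of playing against an algorithmic adversary (not the paper's specific method), the min-player below takes gradient steps on the loss evaluated at the adversary's response, where that response is itself a fixed smooth procedure: K gradient-ascent steps warm-started from the previous y. The toy objective is an illustrative assumption.

```python
import numpy as np

def adversary_response(x, y0, grad_y, K=10, eta=0.1):
    """The adversary's deployed algorithm: K smooth gradient-ascent steps."""
    y = y0.copy()
    for _ in range(K):
        y = y + eta * grad_y(x, y)
    return y

# Toy smooth minimax objective f(x, y) = <x, y> - 0.5 ||y||^2.
grad_x = lambda x, y: y
grad_y = lambda x, y: x - y

rng = np.random.default_rng(6)
x, y = rng.normal(size=3), np.zeros(3)
for t in range(300):
    y = adversary_response(x, y, grad_y)      # adversary runs its algorithm
    x = x - 0.05 * grad_x(x, y)               # min-player descends against it
print("x:", np.round(x, 3))                   # drives x (and hence y) toward 0
```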
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Online Apprenticeship Learning [58.45089581278177]
In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function.
The goal is to find a policy that matches the expert's performance on some predefined set of cost functions.
We show that the OAL problem can be effectively solved by combining two mirror descent based no-regret algorithms.
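The summary mentions combining two mirror-descent-based no-regret algorithms; the sketch below shows that pattern on a simplified stand-in, a zero-sum matrix game, where both the "policy" player and the "cost" player run multiplicative weights (entropic mirror descent) and their averaged strategies approach an equilibrium. The game itself is an illustrative assumption, not the apprenticeship-learning saddle point.

```python
import numpy as np

def mwu_update(logw, loss, eta):
    """Multiplicative weights = online mirror descent with entropy regularizer."""
    logw = logw - eta * loss
    w = np.exp(logw - logw.max())
    return logw, w / w.sum()

rng = np.random.default_rng(7)
A = rng.uniform(-1, 1, size=(5, 6))           # payoff: policy rows, cost columns
log_p, log_q = np.zeros(5), np.zeros(6)
p_avg, q_avg = np.zeros(5), np.zeros(6)
eta, T = 0.3, 2000

for t in range(T):
    p = np.exp(log_p - log_p.max()); p /= p.sum()
    q = np.exp(log_q - log_q.max()); q /= q.sum()
    p_avg += p / T; q_avg += q / T
    # Policy player minimizes p^T A q; cost player maximizes it.
    log_p, _ = mwu_update(log_p, A @ q, eta)          # loss for the min player
    log_q, _ = mwu_update(log_q, -(A.T @ p), eta)     # loss for the max player

gap = float(np.max(A.T @ p_avg) - np.min(A @ q_avg))  # duality gap of averages
print("approximate equilibrium gap:", gap)
```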
arXiv Detail & Related papers (2021-02-13T12:57:51Z) - Metrical Task Systems with Online Machine Learned Advice [0.0]
We show that augmenting online algorithms with a machine learned predictor can provably decrease the competitive ratio as long as the predictor is suitably accurate.
We focus on the specific class of uniform task systems on $n$ tasks, for which the best deterministic algorithm is $O(n)$-competitive and the best randomized algorithm is $O(\log n)$-competitive.
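One standard way to use ML advice while retaining a worst-case guarantee (a generic robustification pattern, not necessarily the paper's algorithm) is to simulate both the advice and a robust baseline and switch to the baseline once the advice's accumulated cost exceeds a $(1+\epsilon)$ multiple of the baseline's; $\epsilon$ trades consistency against robustness. A minimal sketch:

```python
def combine_with_advice(advice_costs, robust_costs, eps=0.5):
    """Follow the predictor until its cumulative cost exceeds (1 + eps) times the
    robust algorithm's cumulative cost, then follow the robust algorithm.
    Returns which source was followed each round and the total cost incurred.
    (Switching costs are ignored in this simplified sketch.)"""
    total, adv_sum, rob_sum, choices = 0.0, 0.0, 0.0, []
    use_advice = True
    for a, r in zip(advice_costs, robust_costs):
        adv_sum += a
        rob_sum += r
        if use_advice and adv_sum > (1 + eps) * rob_sum:
            use_advice = False                 # predictor looks bad: fall back
        cost = a if use_advice else r
        total += cost
        choices.append("advice" if use_advice else "robust")
    return choices, total

# Good predictions -> near-advice cost; bad predictions -> bounded by robust.
print(combine_with_advice([1, 1, 1, 1], [3, 3, 3, 3])[1])
print(combine_with_advice([10, 10, 10, 10], [3, 3, 3, 3])[1])
```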
arXiv Detail & Related papers (2020-12-17T04:56:51Z) - APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm [39.110478306078974]
Adam is an important optimization algorithm for efficient and accurate training on large tasks such as BERT and ImageNet.
We propose a communication-efficient Adam-preconditioned momentum SGD algorithm, named APMSqueeze, which compresses gradients through an error-compensated method.
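A minimal sketch of error-compensated gradient compression around an Adam-style preconditioned momentum SGD step: the residual left behind by top-k compression is carried into the next round so nothing is silently dropped. The hyperparameters, top-k compressor, and single-worker setting are illustrative assumptions, not APMSqueeze's exact recipe.

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude coordinates; return (compressed, residual)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out, v - out

def train(grad_fn, x0, steps=300, k=10, lr=0.05, beta=0.9, eps=1e-8):
    x = x0.copy()
    m = np.zeros_like(x)            # momentum buffer
    v = np.zeros_like(x)            # Adam-style second-moment preconditioner
    err = np.zeros_like(x)          # error-compensation residual
    for t in range(steps):
        g = grad_fn(x)
        v = 0.99 * v + 0.01 * g * g                  # diagonal preconditioner
        m = beta * m + g / (np.sqrt(v) + eps)        # preconditioned momentum
        c, err = topk_compress(m + err, k)           # compress, keep the residual
        x = x - lr * c                               # apply the compressed update
    return x

# Toy quadratic: minimize ||x - b||^2 in 100 dimensions.
rng = np.random.default_rng(8)
b = rng.normal(size=100)
x = train(lambda x: 2 * (x - b), np.zeros(100))
print("distance to optimum:", float(np.linalg.norm(x - b)))
```

The residual `err` is what makes the compression error-compensated: coordinates dropped in one round are added back before compression in the next.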
arXiv Detail & Related papers (2020-08-26T02:20:23Z)