OptScaler: A Hybrid Proactive-Reactive Framework for Robust Autoscaling in the Cloud
- URL: http://arxiv.org/abs/2311.12864v1
- Date: Thu, 26 Oct 2023 04:38:48 GMT
- Title: OptScaler: A Hybrid Proactive-Reactive Framework for Robust Autoscaling in the Cloud
- Authors: Ding Zou, Wei Lu, Zhibo Zhu, Xingyu Lu, Jun Zhou, Xiaojin Wang, Kangyu Liu, Haiqing Wang, Kefan Wang, Renen Sun
- Abstract summary: Autoscaling is a vital mechanism in cloud computing that supports the autonomous adjustment of computing resources under dynamic workloads.
Existing proactive autoscaling methods anticipate the future workload and scale the resources in advance, whereas reactive methods rely on real-time system feedback.
This paper presents OptScaler, a hybrid autoscaling framework that integrates the power of both proactive and reactive methods for regulating CPU utilization.
- Score: 11.340252931723063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autoscaling is a vital mechanism in cloud computing that supports the autonomous adjustment of computing resources under dynamic workloads. A primary goal of autoscaling is to stabilize resource utilization at a desirable level, thus reconciling the need for resource saving with the satisfaction of Service Level Objectives (SLOs). Existing proactive autoscaling methods anticipate the future workload and scale resources in advance, but their reliability may suffer from prediction deviations caused by the frequent fluctuations and noise of cloud workloads; reactive methods rely on real-time system feedback, but their hysteretic nature can cause violations of rigorous SLOs. To this end, this paper presents OptScaler, a hybrid autoscaling framework that integrates the power of both proactive and reactive methods for regulating CPU utilization. Specifically, the proactive module of OptScaler consists of a sophisticated workload prediction model and an optimization model, where the former provides reliable inputs to the latter for making optimal scaling decisions. The reactive module provides a self-tuning estimator of CPU utilization to the optimization model. We embed a Model Predictive Control (MPC) mechanism and robust optimization techniques into the optimization model to further enhance its reliability. Numerical results demonstrate the superiority of both the workload prediction model and the hybrid framework of OptScaler over prevalent reactive, proactive, or hybrid autoscalers in online service scenarios. OptScaler has been successfully deployed at Alipay, supporting the autoscaling of applets on the world-leading payment platform.
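The paper itself includes no code, but the interplay it describes (a workload forecast feeding an MPC-style optimizer, with a reactive, self-tuning utilization estimator closing the loop) can be pictured concretely. The following is a minimal Python sketch under assumptions of my own: the linear CPU-per-request model `alpha`, the robustness `margin`, and all function names are hypothetical, not taken from OptScaler.

```python
import math

# Hypothetical sketch of one hybrid proactive-reactive autoscaling step.
# Names, the linear utilization model, and the robustness margin are
# illustrative assumptions, not taken from the OptScaler paper.

def scale_decision(predicted_workload, alpha, target_util=0.5,
                   margin=0.1, capacity_per_pod=1.0):
    """Plan pod counts over an MPC-style horizon of workload forecasts."""
    plan = []
    for qps in predicted_workload:          # proactive: forecast per interval
        # Estimated CPU demand, inflated by a robust buffer so a forecast
        # undershoot does not immediately violate the SLO.
        cpu_demand = alpha * qps * (1.0 + margin)
        # Pods needed to hold utilization at the target level.
        pods = math.ceil(cpu_demand / (target_util * capacity_per_pod))
        plan.append(max(1, pods))
    return plan[0], plan                    # receding horizon: apply plan[0]

def update_alpha(alpha, observed_util, pods, qps,
                 capacity_per_pod=1.0, lr=0.2):
    """Reactive module: self-tune the CPU-per-request estimate from feedback."""
    if qps > 0:
        observed = observed_util * pods * capacity_per_pod / qps
        alpha += lr * (observed - alpha)    # exponential smoothing
    return alpha
```

In a receding-horizon (MPC) loop, only the first decision of `plan` is applied each interval before re-forecasting and re-solving; the `update_alpha` feedback step is what keeps the optimizer calibrated when the workload's resource profile drifts.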
Related papers
- Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights and enabling enhanced robustness and generalization.
A self-regularization strategy is further exploited to maintain the stability of the VLMs' zero-shot generalization; the method is dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in the few-shot image classification scenario.
arXiv Detail & Related papers (2024-07-11T10:35:53Z)
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
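As a concrete picture of the mechanism summarized above, here is a minimal NumPy sketch, not the authors' implementation: one shared, frozen projection weight plus member-specific low-rank factors, with member disagreement serving as the uncertainty signal. Dimensions and initialization are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the LoRA-Ensemble idea: one frozen projection
# weight shared by all members, plus member-specific low-rank factors.
d, r, n_members = 64, 4, 8                        # illustrative sizes
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))                       # shared pre-trained projection
A = [rng.normal(scale=0.01, size=(r, d)) for _ in range(n_members)]
B = [np.zeros((d, r)) for _ in range(n_members)]  # zero-init: members start at W

def member_projection(x, i):
    """Attention projection for ensemble member i: x @ (W + B_i A_i)^T."""
    return x @ (W + B[i] @ A[i]).T

# Training the per-member factors is what diversifies the ensemble; after
# that, member disagreement provides the uncertainty estimate.
x = rng.normal(size=(1, d))
outs = np.stack([member_projection(x, i) for i in range(n_members)])
mean, spread = outs.mean(axis=0), outs.std(axis=0)
```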
arXiv Detail & Related papers (2024-05-23T11:10:32Z)
- Deep autoregressive density nets vs neural ensembles for model-based offline reinforcement learning [2.9158689853305693]
We consider a model-based reinforcement learning algorithm that infers the system dynamics from the available data and performs policy optimization on imaginary model rollouts.
This approach is vulnerable to the exploitation of model errors, which can lead to catastrophic failures on the real system.
We show that better performance can be obtained with a single well-calibrated autoregressive model on the D4RL benchmark.
arXiv Detail & Related papers (2024-02-05T10:18:15Z)
- MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning [52.101643259906915]
We study the problem of offline pre-training and online fine-tuning for reinforcement learning from high-dimensional observations.
Existing model-based offline RL methods are not suitable for offline-to-online fine-tuning in high-dimensional domains.
We propose an on-policy model-based method that can efficiently reuse prior data through model-based value expansion and policy regularization.
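Model-based value expansion, which the summary above leans on, is compact enough to sketch generically; the snippet below is a textbook h-step expansion, not MOTO's actual implementation, and every callable in it (`model`, `reward_fn`, `q_fn`, `policy`) is a hypothetical stand-in.

```python
# Generic h-step model-based value expansion (MVE) target; every callable
# here is a hypothetical stand-in, not MOTO's implementation.
def mve_target(model, reward_fn, q_fn, policy, state, h=3, gamma=0.99):
    """Roll the learned model forward h steps, then bootstrap with Q."""
    total, discount = 0.0, 1.0
    for _ in range(h):
        action = policy(state)
        total += discount * reward_fn(state, action)  # imagined reward
        state = model(state, action)                  # imagined transition
        discount *= gamma
    return total + discount * q_fn(state, policy(state))
```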
arXiv Detail & Related papers (2024-01-06T21:04:31Z)
- A Deep Recurrent-Reinforcement Learning Method for Intelligent AutoScaling of Serverless Functions [18.36339203254509]
Function-as-a-Service (FaaS) introduces a lightweight, function-based cloud execution model that finds its relevance in a range of applications like IoT-edge data processing and anomaly detection.
arXiv Detail & Related papers (2023-08-11T04:41:19Z)
- When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
The derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- A Meta Reinforcement Learning Approach for Predictive Autoscaling in the Cloud [10.970391043991363]
We propose an end-to-end predictive meta model-based RL algorithm, aiming to optimally allocate resources to maintain a stable CPU utilization level.
Our algorithm not only ensures the predictability and accuracy of the scaling strategy, but also enables the scaling decisions to adapt to the changing workloads with high sample efficiency.
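The objective described above (allocate resources so CPU utilization stays near a stable level) can be made concrete with a toy reward such a policy might be trained against. This is a hedged illustration only; the linear utilization model and the constants `alpha`, `target`, and `cost_weight` are assumptions, not the paper's formulation.

```python
# Toy reward an RL autoscaling policy might be trained against; the linear
# utilization model and all constants are assumptions, not the paper's.
def utilization_reward(workload, pods, alpha=0.002, target=0.5,
                       cost_weight=0.1):
    util = min(1.0, alpha * workload / pods)  # crude utilization estimate
    tracking = -abs(util - target)            # stay near the target level
    cost = -cost_weight * pods                # discourage overprovisioning
    return tracking + cost
```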
arXiv Detail & Related papers (2022-05-31T13:54:04Z)
- A Reinforcement Learning-based Economic Model Predictive Control Framework for Autonomous Operation of Chemical Reactors [0.5735035463793008]
This work presents a novel framework for integrating EMPC and RL for online model parameter estimation of a class of nonlinear systems.
The major advantage of this framework is its simplicity; state-of-the-art RL algorithms and EMPC schemes can be employed with minimal modifications.
arXiv Detail & Related papers (2021-05-06T13:34:30Z)
- A Predictive Autoscaler for Elastic Batch Jobs [8.354712625979776]
Large batch jobs such as Deep Learning, HPC, and Spark require far more computational resources and higher cost than conventional online services.
We propose a predictive autoscaler to provide an elastic interface for customers and to overprovision instances.
arXiv Detail & Related papers (2020-10-10T17:35:55Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)