A Predictive Autoscaler for Elastic Batch Jobs
- URL: http://arxiv.org/abs/2010.05049v1
- Date: Sat, 10 Oct 2020 17:35:55 GMT
- Title: A Predictive Autoscaler for Elastic Batch Jobs
- Authors: Peng Gao
- Abstract summary: Large batch jobs such as Deep Learning, HPC and Spark require far more computational resources and incur higher cost than conventional online services.
We propose a predictive autoscaler that provides an elastic interface for customers and overprovisions instances.
- Score: 8.354712625979776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large batch jobs such as Deep Learning, HPC and Spark require far more
computational resources and incur higher cost than conventional online services. Like
other time series data, these workloads exhibit characteristics such as trend, burst,
and seasonality. Cloud providers offer short-term instances to achieve scalability,
stability, and cost-efficiency. Because of the time lag caused by joining the cluster
and initialization, crowded workloads may lead to violations in the scheduling system.
Under the assumption that the cloud environment offers effectively unlimited resources
and ideal placements for users to request, we propose a predictive autoscaler that
provides an elastic interface for customers and overprovisions instances based on a
trained regression model. We contribute a method to embed heterogeneous resource
requirements from a continuous space into discrete resource buckets, and an autoscaler
that makes predictive expansion plans on the time series of resource bucket counts. Our
experimental evaluation on production resource usage data validates the solution; the
results show that the predictive autoscaler relieves the burden of making scaling
plans, avoids long launch times at lower cost, and outperforms other prediction methods
with fine-tuned settings.
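The two contributions outlined in the abstract, mapping heterogeneous resource requests onto discrete resource buckets and forecasting the per-bucket instance counts, can be illustrated with a short sketch. The code below is only a minimal illustration under assumptions of my own (the bucket sizes, the lag-window features, the GradientBoostingRegressor choice, and the 20% headroom are not taken from the paper); it is not the authors' implementation.

```python
# Minimal sketch of bucketing + per-bucket forecasting (illustrative assumptions only).
from collections import Counter
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical discrete resource buckets (vCPU, GiB memory), smallest first.
BUCKETS = [(2, 8), (4, 16), (8, 32), (16, 64)]

def to_bucket(cpu, mem_gib):
    """Map a continuous resource request onto the smallest bucket that fits it."""
    for b_cpu, b_mem in BUCKETS:
        if cpu <= b_cpu and mem_gib <= b_mem:
            return (b_cpu, b_mem)
    return BUCKETS[-1]  # clamp oversized requests to the largest bucket

def bucket_counts(requests):
    """requests: list of (cpu, mem_gib) pairs observed in one time window."""
    counts = Counter(to_bucket(c, m) for c, m in requests)
    return {b: counts.get(b, 0) for b in BUCKETS}

def fit_forecaster(series, window=12):
    """Fit a regressor that predicts the next count from the last `window` counts."""
    X = [series[i:i + window] for i in range(len(series) - window)]
    y = series[window:]
    return GradientBoostingRegressor().fit(np.array(X), np.array(y))

def scale_plan(model, series, window=12, headroom=1.2):
    """Predict the next-window demand and overprovision by a safety margin."""
    pred = model.predict(np.array([series[-window:]]))[0]
    return int(np.ceil(max(pred, 0.0) * headroom))

# Usage: embed a few requests into buckets, then forecast one bucket's demand.
print(bucket_counts([(3, 10), (6, 30), (12, 50)]))
history = list(np.random.poisson(20, size=200))  # stand-in for production count data
model = fit_forecaster(history)
print("instances to provision for this bucket:", scale_plan(model, history))
```

In this framing, one forecaster per bucket predicts the demand of the next window, and the autoscaler provisions slightly more than the prediction to absorb the instance launch lag described in the abstract.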
Related papers
- Advancing Enterprise Spatio-Temporal Forecasting Applications: Data Mining Meets Instruction Tuning of Language Models For Multi-modal Time Series Analysis in Low-Resource Settings [0.0]
Spatio-temporal forecasting is crucial in transportation, logistics, and supply chain management.
We propose a dynamic, multi-modal approach that integrates the strengths of traditional forecasting methods and instruction tuning of small language models.
Our framework enables on-premises customization with reduced computational and memory demands, while maintaining inference speed and data privacy/security.
arXiv Detail & Related papers (2024-08-24T16:32:58Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - OptScaler: A Hybrid Proactive-Reactive Framework for Robust Autoscaling
in the Cloud [11.340252931723063]
Autoscaling is a vital mechanism in cloud computing that supports the autonomous adjustment of computing resources under dynamic workloads.
Existing proactive autoscaling methods anticipate the future workload and scale the resources in advance, whereas reactive methods rely on real-time system feedback.
This paper presents OptScaler, a hybrid autoscaling framework that combines proactive and reactive methods to regulate CPU utilization (a generic sketch of this hybrid pattern follows the list below).
arXiv Detail & Related papers (2023-10-26T04:38:48Z) - Pushing the Limits of Pre-training for Time Series Forecasting in the
CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show it is a strong zero-shot baseline and benefits from further scaling, both in model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z) - Dynamic Scheduling for Federated Edge Learning with Streaming Data [56.91063444859008]
We consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints.
Due to limited communication resources and latency requirements, only a subset of devices is scheduled for participating in the local training process in every iteration.
arXiv Detail & Related papers (2023-05-02T07:41:16Z) - Augmented Bilinear Network for Incremental Multi-Stock Time-Series
Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
arXiv Detail & Related papers (2022-07-23T18:54:10Z) - A Meta Reinforcement Learning Approach for Predictive Autoscaling in the
Cloud [10.970391043991363]
We propose an end-to-end predictive meta model-based RL algorithm, aiming to optimally allocate resource to maintain a stable CPU utilization level.
Our algorithm not only ensures the predictability and accuracy of the scaling strategy, but also enables the scaling decisions to adapt to the changing workloads with high sample efficiency.
arXiv Detail & Related papers (2022-05-31T13:54:04Z) - Approaching sales forecasting using recurrent neural networks and
transformers [57.43518732385863]
We develop three alternatives to tackle the problem of forecasting the customer sales at day/store/item level using deep learning techniques.
Our empirical results show how good performance can be achieved by using a simple sequence to sequence architecture with minimal data preprocessing effort.
The proposed solution achieves an RMSLE of around 0.54, which is competitive with other, more problem-specific solutions proposed in the Kaggle competition.
arXiv Detail & Related papers (2022-04-16T12:03:52Z) - Predictive Auto-scaling with OpenStack Monasca [8.631793985356286]
We propose an architecture for auto-scaling cloud services based on the status in which the system is expected to evolve in the near future.
We prototyped our approach as an open-source component, which relies on, and extends, the monitoring capabilities offered by Monasca.
arXiv Detail & Related papers (2021-11-03T11:02:08Z) - Optimal Resource Allocation for Serverless Queries [8.59568779761598]
Prior work focused on predicting peak allocation while ignoring aggressive trade-offs between resource allocation and run-time.
We introduce a system for optimal resource allocation that can predict performance with aggressive trade-offs, for both new and past observed queries.
arXiv Detail & Related papers (2021-07-19T02:55:48Z) - Pollux: Co-adaptive Cluster Scheduling for Goodput-Optimized Deep
Learning [61.29990368322931]
Pollux improves scheduling performance in deep learning (DL) clusters by adaptively co-optimizing inter-dependent factors.
Pollux reduces average job completion times by 37-50% relative to state-of-the-art DL schedulers.
arXiv Detail & Related papers (2020-08-27T16:56:48Z)
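Several related entries above (OptScaler, the meta-RL predictive autoscaler, Predictive Auto-scaling with OpenStack Monasca) revolve around combining a workload forecast with feedback on observed utilization. As a generic illustration of that hybrid proactive-reactive pattern, referenced from the OptScaler entry, the sketch below combines a forecast-driven term with a target-tracking correction; the function name, target utilization, and capacity figures are assumptions for the example, not the OptScaler algorithm.

```python
# Generic hybrid proactive-reactive scaling step (illustrative only; the
# forecast source, target utilization, and bounds are assumptions).
import math

def hybrid_scale(current_replicas, forecast_rps, capacity_rps_per_replica,
                 observed_cpu_util, target_cpu_util=0.6,
                 min_replicas=1, max_replicas=100):
    """Combine a proactive forecast with a reactive utilization correction."""
    # Proactive term: provision for the predicted request rate.
    proactive = math.ceil(forecast_rps / capacity_rps_per_replica)
    # Reactive term: correct for currently observed CPU utilization,
    # in the spirit of a target-tracking controller.
    reactive = math.ceil(current_replicas * observed_cpu_util / target_cpu_util)
    # Take the more conservative (larger) of the two, then clamp to bounds.
    desired = max(proactive, reactive)
    return max(min_replicas, min(max_replicas, desired))

# Example: the forecast says 900 req/s and each replica handles 100 req/s,
# but the 8 current replicas are already at 90% CPU.
print(hybrid_scale(8, 900, 100, 0.9))  # -> 12
```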
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.