Performance and Energy-Aware Bi-objective Tasks Scheduling for Cloud
Data Centers
- URL: http://arxiv.org/abs/2105.00843v1
- Date: Sun, 25 Apr 2021 08:55:57 GMT
- Title: Performance and Energy-Aware Bi-objective Tasks Scheduling for Cloud
Data Centers
- Authors: Huned Materwala and Leila Ismail
- Abstract summary: Cloud computing enables remote execution of users' tasks.
The pervasive adoption of cloud computing in smart city services and applications requires timely execution of tasks that adheres to Quality of Service (QoS) requirements.
The increasing use of computing servers exacerbates the issues of high energy consumption, operating costs, and environmental pollution.
We propose a bi-objective performance and energy optimization algorithm to trade off the conflicting performance and energy objectives.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cloud computing enables remote execution of users' tasks. The pervasive
adoption of cloud computing in smart city services and applications requires
timely execution of tasks that adheres to Quality of Service (QoS) requirements.
However, the increasing use of computing servers exacerbates the issues of
high energy consumption, operating costs, and environmental pollution.
Maximizing the performance and minimizing the energy in a cloud data center is
challenging. In this paper, we propose a bi-objective performance and energy
optimization algorithm to trade off the conflicting performance and energy
objectives. An evolutionary algorithm-based multi-objective optimization using
system performance counters is proposed for the first time. The performance of
the proposed model is evaluated using a realistic cloud dataset in a cloud
computing environment. Our experimental results show higher performance and
lower energy consumption compared to a state-of-the-art algorithm.
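The paper's algorithm is not reproduced in this listing; purely as a rough illustration, the sketch below shows how a bi-objective evolutionary search over task-to-server assignments could trade makespan against energy using Pareto-dominance selection. The workload, speed, and power numbers, and the simple linear energy model, are invented placeholders rather than the authors' system-performance-counter model.

```python
import random

TASK_LOAD = [4, 7, 2, 9, 5, 3]       # hypothetical task sizes
SERVER_SPEED = [2.0, 3.0, 1.5]       # hypothetical server processing speeds
SERVER_POWER = [80.0, 150.0, 60.0]   # hypothetical active power draw per server (W)

def objectives(assign):
    """Return (makespan, energy) for a task-to-server assignment."""
    busy = [0.0] * len(SERVER_SPEED)
    for task, srv in enumerate(assign):
        busy[srv] += TASK_LOAD[task] / SERVER_SPEED[srv]
    makespan = max(busy)                                      # performance objective
    energy = sum(p * t for p, t in zip(SERVER_POWER, busy))   # energy objective
    return makespan, energy

def dominates(a, b):
    """Pareto dominance: a is no worse in both objectives and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(pop_size=30, generations=50, mutation_rate=0.2):
    pop = [[random.randrange(len(SERVER_SPEED)) for _ in TASK_LOAD]
           for _ in range(pop_size)]
    for _ in range(generations):
        children = [[srv if random.random() > mutation_rate
                     else random.randrange(len(SERVER_SPEED)) for srv in parent]
                    for parent in pop]
        scored = [(objectives(a), a) for a in pop + children]
        # keep the non-dominated schedules (Pareto front) first, then fill by makespan
        front = [s for s in scored
                 if not any(dominates(o[0], s[0]) for o in scored if o is not s)]
        rest = sorted([s for s in scored if s not in front], key=lambda s: s[0])
        pop = [a for _, a in (front + rest)[:pop_size]]
    return sorted((objectives(a), a) for a in pop)

if __name__ == "__main__":
    for (makespan, energy), assign in evolve()[:5]:
        print(f"makespan={makespan:.2f}  energy={energy:.1f}  assignment={assign}")
```

A real implementation would derive the energy objective from measured performance counters rather than a fixed per-server power constant.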
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the computational demands of IoVT systems by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Heuristics and Metaheuristics for Dynamic Management of Computing and
Cooling Energy in Cloud Data Centers [0.0]
We propose novel power and thermal-aware strategies and models to provide joint cooling and computing optimizations.
Our results show that combining awareness from both metaheuristic and best-fit-decreasing algorithms allows the global energy model to be incorporated into faster and lighter optimization strategies.
This approach improves the energy efficiency of the data center, considering both computing and cooling infrastructures, by up to 21.74% while maintaining quality of service.
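For context only (these are not the paper's actual models), a common way to couple computing and cooling energy is to charge each watt of IT power an extra 1/COP watts of cooling and to consolidate load with a best-fit-decreasing packing; the COP curve, capacities, and power numbers below are illustrative assumptions.

```python
def cop(supply_temp_c):
    """Assumed CRAC coefficient-of-performance curve: warmer supply air, cheaper cooling."""
    return 0.0068 * supply_temp_c ** 2 + 0.0008 * supply_temp_c + 0.458

def total_power(it_power_w, supply_temp_c):
    """Computing power plus the cooling power required to remove that heat."""
    return it_power_w * (1.0 + 1.0 / cop(supply_temp_c))

def best_fit_decreasing(vm_cpu, host_capacity):
    """Place VMs on as few hosts as possible: sort by demand (desc), pick the tightest fit."""
    free = []          # remaining CPU capacity of each open host
    placement = {}
    for vm, demand in sorted(vm_cpu.items(), key=lambda kv: -kv[1]):
        fits = [(cap - demand, i) for i, cap in enumerate(free) if cap >= demand]
        if fits:
            _, host = min(fits)              # tightest remaining capacity
        else:
            free.append(host_capacity)       # open a new host
            host = len(free) - 1
        free[host] -= demand
        placement[vm] = host
    return placement, len(free)

placement, active_hosts = best_fit_decreasing(
    {"vm1": 40, "vm2": 30, "vm3": 55, "vm4": 20}, host_capacity=100)
it_power = 100.0 * active_hosts              # toy assumption: 100 W per active host
print(placement, f"~{total_power(it_power, supply_temp_c=22):.0f} W incl. cooling")
```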
arXiv Detail & Related papers (2023-12-17T09:40:36Z) - Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus, computation-efficient deep learning, strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z) - A Reinforcement Learning Approach for Performance-aware Reduction in
Power Consumption of Data Center Compute Nodes [0.46040036610482665]
We use Reinforcement Learning to design a power-capping policy for cloud compute nodes.
We show how a trained agent running on actual hardware can take actions that balance power consumption and application performance.
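The paper's actual state features, reward, and hardware interface are not detailed in this summary; purely as an illustration, a single-state (bandit-style) Q-learning loop that picks a power-cap level and is rewarded for saving power while limiting slowdown might look like this, with a made-up simulator standing in for RAPL-style capping.

```python
import random

CAPS_W = [80, 100, 120, 140]                        # hypothetical power-cap choices (watts)

def simulate(cap_w):
    """Toy stand-in for real hardware: lower caps draw less power but slow the app down."""
    power = 0.9 * cap_w
    slowdown = 0.5 * max(0.0, (140 - cap_w) / 140)  # fraction of lost performance
    return power, slowdown

def reward(power, slowdown, perf_weight=200.0):
    """Penalize power draw and (more heavily) performance loss."""
    return -(power + perf_weight * slowdown)

q = {cap: 0.0 for cap in CAPS_W}                    # value estimate per cap level
alpha, epsilon = 0.1, 0.2
for step in range(2000):
    cap = random.choice(CAPS_W) if random.random() < epsilon else max(q, key=q.get)
    p, s = simulate(cap)
    q[cap] += alpha * (reward(p, s) - q[cap])       # incremental value update

print("preferred cap:", max(q, key=q.get), "W")
```

Changing the assumed `perf_weight` shifts the learned cap toward either energy savings or application performance.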
arXiv Detail & Related papers (2023-08-15T23:25:52Z) - Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A
Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
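A full multi-agent actor-critic system is beyond a short sketch; as a simplified single-agent stand-in, the snippet below uses plain REINFORCE to learn where to place jobs given per-data-center energy prices and carbon intensities, all of which are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
PRICE = np.array([0.12, 0.08, 0.15])      # hypothetical $/kWh per data center
CARBON = np.array([420.0, 250.0, 300.0])  # hypothetical gCO2/kWh per data center

def job_cost(dc, energy_kwh=50.0, carbon_price=0.0005):
    """Monetary cost plus a carbon penalty for running one job at data center dc."""
    return energy_kwh * (PRICE[dc] + carbon_price * CARBON[dc])

theta = np.zeros(len(PRICE))              # one logit per data center
lr, baseline = 0.01, 0.0
for episode in range(5000):
    probs = np.exp(theta - theta.max()); probs /= probs.sum()
    dc = rng.choice(len(PRICE), p=probs)
    r = -job_cost(dc)                     # reward = negative cost
    baseline += 0.05 * (r - baseline)     # running baseline for variance reduction
    grad = -probs.copy()
    grad[dc] += 1.0                       # gradient of log pi(dc) w.r.t. the logits
    theta += lr * (r - baseline) * grad   # REINFORCE update

probs = np.exp(theta - theta.max()); probs /= probs.sum()
print("placement probabilities:", np.round(probs, 3))
```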
arXiv Detail & Related papers (2023-04-17T02:12:30Z) - PECCO: A Profit and Cost-oriented Computation Offloading Scheme in
Edge-Cloud Environment with Improved Moth-flame Optimisation [22.673319784715172]
Edge-cloud computation offloading is a promising solution to relieve the burden on cloud centres.
We propose an improved Moth-flame optimiser PECCO-MFI which addresses some deficiencies of the original Moth-flame Optimiser.
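PECCO-MFI's modifications are not reproduced here; for reference only, the core logarithmic-spiral update of the original Moth-flame optimiser that the paper builds on can be sketched for a toy continuous objective as follows.

```python
import math
import random

def mfo(objective, dim=2, n_moths=20, iters=100, lb=-10.0, ub=10.0):
    """Minimal original Moth-flame optimisation: moths spiral around sorted flames."""
    moths = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_moths)]
    best = list(min(moths, key=objective))
    for it in range(iters):
        flames = sorted((list(m) for m in moths), key=objective)   # snapshot of best positions
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / iters))  # shrinking flame count
        if objective(flames[0]) < objective(best):
            best = list(flames[0])
        a = -1.0 - it / iters                          # decreases linearly from -1 to -2
        for i, moth in enumerate(moths):
            flame = flames[min(i, n_flames - 1)]       # surplus moths follow the last flame
            for d in range(dim):
                dist = abs(flame[d] - moth[d])
                t = (a - 1.0) * random.random() + 1.0  # random t in [a, 1]
                moth[d] = dist * math.exp(t) * math.cos(2 * math.pi * t) + flame[d]
                moth[d] = min(max(moth[d], lb), ub)    # keep within bounds
    return best

print(mfo(lambda x: sum(v * v for v in x)))            # sphere-function demo
```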
arXiv Detail & Related papers (2022-08-09T23:26:42Z) - Measuring the Carbon Intensity of AI in Cloud Instances [91.28501520271972]
We provide a framework for measuring software carbon intensity, and propose to measure operational carbon emissions.
We evaluate a suite of approaches for reducing emissions on the Microsoft Azure cloud compute platform.
arXiv Detail & Related papers (2022-06-10T17:04:04Z) - Coverage and Capacity Optimization in STAR-RISs Assisted Networks: A
Machine Learning Approach [102.00221938474344]
A novel model is proposed for the coverage and capacity optimization of simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) assisted networks.
The core of the approach is a loss function-based update strategy, which calculates weights for the coverage and capacity loss functions using a min-norm solver at each update.
The numerical results demonstrate that the investigated update strategy outperforms fixed weight-based multi-objective (MO) algorithms.
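The STAR-RIS model itself is not shown in this listing; for the two-objective case, the min-norm weighting referred to above has a simple closed form (MGDA-style), sketched here with made-up gradients.

```python
import numpy as np

def min_norm_weights(g_cov, g_cap):
    """Closed-form min-norm convex combination of two gradients (two-objective MGDA)."""
    diff = g_cov - g_cap
    denom = float(diff @ diff)
    if denom == 0.0:
        return 0.5, 0.5                      # identical gradients: any split works
    alpha = float((g_cap - g_cov) @ g_cap) / denom
    alpha = min(max(alpha, 0.0), 1.0)        # clip onto the simplex
    return alpha, 1.0 - alpha

# toy gradients of the coverage and capacity losses w.r.t. shared parameters
g_coverage = np.array([0.8, -0.2, 0.5])
g_capacity = np.array([-0.3, 0.6, 0.1])
w_cov, w_cap = min_norm_weights(g_coverage, g_capacity)
update_direction = w_cov * g_coverage + w_cap * g_capacity
print(w_cov, w_cap, update_direction)
```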
arXiv Detail & Related papers (2022-04-13T13:52:22Z) - HUNTER: AI based Holistic Resource Management for Sustainable Cloud
Computing [26.48962351761643]
We propose an artificial intelligence (AI) based holistic resource management technique for sustainable cloud computing called HUNTER.
The proposed model formulates the goal of optimizing energy efficiency in data centers as a multi-objective scheduling problem.
Experiments on simulated and physical cloud environments show that HUNTER outperforms state-of-the-art baselines in terms of energy consumption, SLA violation, scheduling time, cost and temperature by up to 12, 35, 43, 54 and 3 percent respectively.
arXiv Detail & Related papers (2021-10-11T18:11:26Z) - Federated Learning for Task and Resource Allocation in Wireless High
Altitude Balloon Networks [160.96150373385768]
The problem of minimizing energy and time consumption for task computation and transmission is studied in a mobile edge computing (MEC)-enabled balloon network.
A support vector machine (SVM)-based federated learning (FL) algorithm is proposed to determine the user association proactively.
The proposed SVM-based FL method enables each high-altitude balloon (HAB) to cooperatively build an SVM model that can determine all user associations.
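The exact FL protocol is not detailed in this summary; as a hedged sketch, the snippet below trains a linear SVM with hinge-loss SGD on each balloon's local (synthetic) data and averages the weight vectors each round, which is the federated-averaging idea the summary alludes to.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_svm_sgd(X, y, w, epochs=20, lr=0.01, reg=0.01):
    """One HAB's local update: linear SVM trained with hinge-loss SGD (labels in {-1, +1})."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (xi @ w)
            grad = reg * w - (yi * xi if margin < 1 else 0.0)
            w -= lr * grad
    return w

def make_data(shift):
    """Toy 'associate user or not' data; each HAB sees a slightly shifted distribution."""
    X = rng.normal(size=(100, 3)) + shift
    y = np.where(X @ np.array([1.0, -0.5, 0.3]) > 0, 1, -1)
    return X, y

hab_data = [make_data(0.0), make_data(0.5)]
w_global = np.zeros(3)
for rnd in range(5):                                    # federated rounds
    local_ws = [local_svm_sgd(X, y, w_global) for X, y in hab_data]
    w_global = np.mean(local_ws, axis=0)                # average the SVM weights
print("shared SVM weights:", np.round(w_global, 3))
```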
arXiv Detail & Related papers (2020-03-19T14:18:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.