Online Task Scheduling for Fog Computing with Multi-Resource Fairness
- URL: http://arxiv.org/abs/2008.00207v1
- Date: Sat, 1 Aug 2020 07:57:40 GMT
- Authors: Simeng Bian, Xi Huang, Ziyu Shao
- Abstract summary: In fog computing systems, one key challenge is online task scheduling, i.e., to decide the resource allocation for tasks that are continuously generated from end devices.
We propose FairTS, an online task scheduling scheme that learns directly from experience to effectively shorten average task slowdown.
Simulation results show that FairTS outperforms state-of-the-art schemes with an ultra-low task slowdown and better resource fairness.
- Score: 9.959176097194675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In fog computing systems, one key challenge is online task scheduling, i.e.,
to decide the resource allocation for tasks that are continuously generated
from end devices. The design is challenging because of various uncertainties
manifested in fog computing systems; e.g., tasks' resource demands remain
unknown before their actual arrivals. Recent works have applied deep
reinforcement learning (DRL) techniques to conduct online task scheduling and
improve various objectives. However, they overlook multi-resource fairness
across tasks, which is key to fair resource sharing but in general non-trivial
to achieve. Designing an online task scheduling scheme with multi-resource
fairness thus remains an open problem. In this paper, we address the above
challenges. Particularly, by
leveraging DRL techniques and adopting the idea of dominant resource fairness
(DRF), we propose FairTS, an online task scheduling scheme that learns directly
from experience to effectively shorten average task slowdown while ensuring
multi-resource fairness among tasks. Simulation results show that FairTS
outperforms state-of-the-art schemes with an ultra-low task slowdown and better
resource fairness.
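The dominant resource fairness (DRF) idea that FairTS builds on can be sketched as follows. This is a minimal illustration of generic DRF allocation (serve the task with the smallest dominant share), not the FairTS scheduler itself; resource names and demand vectors are hypothetical.

```python
# Minimal sketch of dominant resource fairness (DRF) allocation.
# A task's dominant share is its largest per-resource fraction of
# cluster capacity; DRF repeatedly serves the task with the smallest
# dominant share, equalizing dominant shares over time.

def drf_allocate(capacity, demands, rounds):
    """capacity: {resource: total}; demands: {task: {resource: per-unit demand}}."""
    alloc = {t: {r: 0.0 for r in capacity} for t in demands}
    used = {r: 0.0 for r in capacity}

    def dominant_share(t):
        return max(alloc[t][r] / capacity[r] for r in capacity)

    for _ in range(rounds):
        # Serve the lowest-dominant-share task whose demand still fits.
        for t in sorted(demands, key=dominant_share):
            if all(used[r] + demands[t][r] <= capacity[r] for r in capacity):
                for r in capacity:
                    alloc[t][r] += demands[t][r]
                    used[r] += demands[t][r]
                break
        else:
            break  # no task fits; cluster is saturated
    return alloc
```

On the classic DRF example (9 CPUs, 18 GB memory; task A demanding 1 CPU and 4 GB per unit, task B demanding 3 CPUs and 1 GB), this loop converges to 3 units for A and 2 units for B, giving both tasks a dominant share of 2/3.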
Related papers
- Skills Regularized Task Decomposition for Multi-task Offline Reinforcement Learning [11.790581500542439]
Reinforcement learning (RL) with diverse offline datasets can leverage the relations among multiple tasks.
We present a skill-based multi-task RL technique on heterogeneous datasets that are generated by behavior policies of different quality.
We show that our multi-task offline RL approach is robust to the mixed configurations of different-quality datasets.
arXiv Detail & Related papers (2024-08-28T07:36:20Z)
- Resource Allocation and Workload Scheduling for Large-Scale Distributed Deep Learning: A Survey [48.06362354403557]
This survey reviews the literature, mainly from 2019 to 2024, on efficient resource allocation and workload scheduling strategies for large-scale distributed DL.
We highlight critical challenges for each topic and discuss key insights of existing technologies.
This survey aims to encourage computer science, artificial intelligence, and communications researchers to understand recent advances.
arXiv Detail & Related papers (2024-06-12T11:51:44Z)
- Learning to Schedule Online Tasks with Bandit Feedback [7.671139712158846]
Online task scheduling serves an integral role for task-intensive applications in cloud computing and crowdsourcing.
We propose a double-optimistic learning based Robbins-Monro (DOL-RM) algorithm.
DOL-RM integrates a learning module that incorporates optimistic estimation of the reward-to-cost ratio with a decision module.
arXiv Detail & Related papers (2024-02-26T10:11:28Z)
- RHFedMTL: Resource-Aware Hierarchical Federated Multi-Task Learning [11.329273673732217]
Federated learning is an effective way to enable AI over massive distributed nodes with security.
It is challenging to ensure privacy while maintaining coupled multi-task learning across multiple base stations (BSs) and terminals.
In this paper, inspired by the natural cloud-BS-terminal hierarchy of cellular networks, we provide a viable resource-aware hierarchical federated MTL (RHFedMTL) solution.
arXiv Detail & Related papers (2023-06-01T13:49:55Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- Reinforcement Learning with Success Induced Task Prioritization [68.8204255655161]
We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning.
The algorithm selects the order of tasks that provide the fastest learning for agents.
We demonstrate that SITP matches or surpasses the results of other curriculum design methods.
arXiv Detail & Related papers (2022-12-30T12:32:43Z)
- Understanding the Complexity Gains of Single-Task RL with a Curriculum [83.46923851724408]
Reinforcement learning (RL) problems can be challenging without well-shaped rewards.
We provide a theoretical framework that reformulates a single-task RL problem as a multi-task RL problem defined by a curriculum.
We show that sequentially solving each task in the multi-task RL problem is more computationally efficient than solving the original single-task problem.
arXiv Detail & Related papers (2022-12-24T19:46:47Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on source task sampling by leveraging techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
- An Evolutionary Algorithm for Task Scheduling in Crowdsourced Software Development [10.373891804761376]
This paper proposes an evolutionary algorithm-based task scheduling method for crowdsourced software development.
Experimental results on 4 projects demonstrate that the proposed method has the potential to reduce project duration by 33-78%.
arXiv Detail & Related papers (2021-07-05T18:07:26Z)
- Smart Scheduling based on Deep Reinforcement Learning for Cellular Networks [18.04856086228028]
We propose a smart scheduling scheme based on deep reinforcement learning (DRL).
We provide implementation-friendly designs, i.e., a scalable neural network design for the agent and a virtual environment training framework.
We show that the DRL-based smart scheduling outperforms the conventional scheduling method and can be adopted in practical systems.
arXiv Detail & Related papers (2021-03-22T02:09:16Z)
- Gradient Surgery for Multi-Task Learning [119.675492088251]
Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks.
The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood.
We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient.
arXiv Detail & Related papers (2020-01-19T06:33:47Z)
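The projection step described in the gradient surgery entry above can be sketched with NumPy. This is an illustrative PCGrad-style sketch under the stated rule (project a task's gradient onto the normal plane of any conflicting gradient); the gradient values in the usage note are hypothetical.

```python
import numpy as np

def project_conflicting(grads):
    """PCGrad-style gradient surgery: for each task gradient, remove the
    component that conflicts (negative dot product) with another task's
    gradient by projecting onto that gradient's normal plane, then sum."""
    surgered = []
    for i, g in enumerate(grads):
        g = g.astype(float).copy()
        # Visit the other tasks' gradients in random order, as in PCGrad.
        others = [grads[j] for j in np.random.permutation(len(grads)) if j != i]
        for g_j in others:
            dot = g @ g_j
            if dot < 0:  # conflicting gradient
                g -= dot / (g_j @ g_j) * g_j  # project onto normal plane of g_j
        surgered.append(g)
    return np.sum(surgered, axis=0)
```

For example, with the conflicting pair g1 = [1, 0] and g2 = [-1, 1], the surgered gradients are [0.5, 0.5] and [0, 1] (each orthogonal to the other task's original gradient), and the combined update is [0.5, 1.5].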
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.