Learning to Schedule DAG Tasks
- URL: http://arxiv.org/abs/2103.03412v1
- Date: Fri, 5 Mar 2021 01:10:24 GMT
- Title: Learning to Schedule DAG Tasks
- Authors: Zhigang Hua, Feng Qi, Gan Liu and Shuang Yang
- Abstract summary: We present a novel learning-based approach to scheduling directed acyclic graphs (DAGs)
The algorithm employs a reinforcement learning agent to iteratively add directed edges to the DAG.
Our approach can be easily applied to any existing heuristic scheduling algorithm.
- Score: 7.577417675452624
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scheduling computational tasks represented by directed acyclic graphs (DAGs)
is challenging because of its complexity. Conventional scheduling algorithms
rely heavily on simple heuristics such as shortest job first (SJF) and critical
path (CP), and are often lacking in scheduling quality. In this paper, we
present a novel learning-based approach to scheduling DAG tasks. The algorithm
employs a reinforcement learning agent to iteratively add directed edges to the
DAG, one at a time, to enforce ordering (i.e., priorities of execution and
resource allocation) of "tricky" job nodes. By doing so, the original DAG
scheduling problem is dramatically reduced to a much simpler proxy problem, on
which heuristic scheduling algorithms such as SJF and CP can be efficiently
improved. Our approach can be easily applied to any existing heuristic
scheduling algorithms. On the benchmark dataset of TPC-H, we show that our
learning based approach can significantly improve over popular heuristic
algorithms and consistently achieves the best performance among several methods
under a variety of settings.
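To make the mechanism concrete, here is a rough, self-contained sketch of the idea rather than the authors' implementation: a hypothetical ordering edge (standing in for the RL agent's choice) is added to a toy DAG, and the constrained graph is then handed to a plain shortest-job-first list scheduler. Adding an edge only restricts the feasible orderings, so any heuristic run on the constrained DAG still produces a valid schedule for the original problem.

```python
# Minimal sketch (not the paper's implementation): constrain a toy task DAG with
# one extra ordering edge, then list-schedule it with shortest-job-first (SJF).
from collections import defaultdict

durations = {"a": 3, "b": 1, "c": 4, "d": 2}         # hypothetical task durations
edges = {("a", "c"), ("a", "d"), ("b", "d")}          # hypothetical precedence edges

# The RL agent described in the paper would choose this edge; here it is hard-coded.
edges.add(("b", "c"))                                 # force b to finish before c starts

def sjf_schedule(durations, edges, machines=2):
    """Greedy list scheduling: among ready tasks, always start the shortest one."""
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)
    finish, free_at, schedule = {}, [0.0] * machines, []
    remaining = set(durations)
    while remaining:
        ready = [t for t in remaining if preds[t].issubset(finish)]
        task = min(ready, key=lambda t: durations[t])            # SJF priority
        m = min(range(machines), key=lambda i: free_at[i])       # earliest-free machine
        start = max(free_at[m], max((finish[p] for p in preds[task]), default=0.0))
        finish[task] = start + durations[task]
        free_at[m] = finish[task]
        schedule.append((task, m, start, finish[task]))
        remaining.remove(task)
    return schedule, max(finish.values())

schedule, makespan = sjf_schedule(durations, edges)
print(schedule, "makespan:", makespan)
```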
Related papers
- A Schedule of Duties in the Cloud Space Using a Modified Salp Swarm
Algorithm [0.0]
One of the most important NP-hard issues in the cloud domain is scheduling.
One of the collective intelligence algorithms, called the Salp Swarm Algorithm (SSA), has been expanded, improved, and applied.
Results show that our algorithm has generally higher performance than the other algorithms.
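For background, the textbook Salp Swarm Algorithm update (not the modified variant this paper proposes) looks roughly like the sketch below; the fitness function, bounds, and parameters are placeholders, with a sphere function standing in for a scheduling cost.

```python
# Minimal sketch of the standard Salp Swarm Algorithm (SSA) update rules,
# not the modified variant proposed in the paper.
import math
import random

def ssa_minimize(fitness, dim, lb, ub, n_salps=20, iters=100):
    salps = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_salps)]
    food = list(min(salps, key=fitness))                # best solution found so far
    for l in range(1, iters + 1):
        c1 = 2 * math.exp(-(4 * l / iters) ** 2)        # exploration/exploitation balance
        for i, salp in enumerate(salps):
            for j in range(dim):
                if i == 0:                              # leader moves around the food source
                    c2, c3 = random.random(), random.random()
                    step = c1 * ((ub - lb) * c2 + lb)
                    salp[j] = food[j] + step if c3 >= 0.5 else food[j] - step
                else:                                   # followers trail the salp ahead
                    salp[j] = (salp[j] + salps[i - 1][j]) / 2
                salp[j] = min(max(salp[j], lb), ub)     # clamp to the search bounds
        food = list(min(salps + [food], key=fitness))
    return food

# Toy usage: minimize a sphere function as a stand-in for a scheduling cost.
best = ssa_minimize(lambda x: sum(v * v for v in x), dim=5, lb=-10, ub=10)
print(best)
```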
arXiv Detail & Related papers (2023-09-18T02:48:41Z) - Edge Generation Scheduling for DAG Tasks Using Deep Reinforcement
Learning [2.365237699556817]
Directed acyclic graph (DAG) tasks are currently adopted in the real-time domain to model complex applications.
We propose a new DAG scheduling framework that attempts to minimize the DAG width by iteratively generating edges.
We evaluate the effectiveness of the proposed algorithm by comparing it with state-of-the-art DAG scheduling heuristics and an optimal mixed-integer linear programming baseline.
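As a small illustration of the general idea rather than the paper's algorithm, the snippet below estimates a toy DAG's width by its largest topological level and inserts one edge inside that level to shrink it; the task graph is made up.

```python
# Illustrative only: approximate DAG "width" by the largest set of nodes that
# share a topological level, then add one edge inside that level to shrink it.
from collections import defaultdict

def levels(nodes, edges):
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)
    level, done = {}, set()
    while len(done) < len(nodes):
        for n in nodes:
            if n not in done and preds[n] <= done:
                level[n] = max((level[p] + 1 for p in preds[n]), default=0)
                done.add(n)
    by_level = defaultdict(list)
    for n, l in level.items():
        by_level[l].append(n)
    return by_level

nodes = ["a", "b", "c", "d", "e"]
edges = {("a", "c"), ("a", "d"), ("b", "e")}            # hypothetical task graph
widest = max(levels(nodes, edges).values(), key=len)
print("widest level:", widest)
edges.add((widest[0], widest[1]))                       # serialize two parallel nodes
print("new width:", max(len(v) for v in levels(nodes, edges).values()))
```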
arXiv Detail & Related papers (2023-08-28T15:19:18Z) - An End-to-End Reinforcement Learning Approach for Job-Shop Scheduling
Problems Based on Constraint Programming [5.070542698701157]
This paper proposes a novel end-to-end approach to solving scheduling problems by means of CP and Reinforcement Learning (RL)
Our approach leverages existing CP solvers to train an agent learning a Priority Dispatching Rule (PDR) that generalizes well to large instances, even from separate datasets.
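To illustrate what a priority dispatching rule does at run time, the sketch below dispatches a made-up job-shop instance with a hand-written scoring function standing in for the learned, CP-trained rule.

```python
# Sketch of dispatching with a priority rule on a toy job-shop instance;
# `priority` stands in for the learned rule trained from CP solutions.
jobs = {                       # job -> ordered (machine, duration) operations
    "J1": [(0, 3), (1, 2)],
    "J2": [(1, 4), (0, 1)],
    "J3": [(0, 2), (1, 3)],
}

def priority(job, op_index, duration, machine_ready, job_ready):
    # Placeholder rule: prefer short operations that could start soonest.
    return -(max(machine_ready, job_ready) + duration)

machine_free = {0: 0, 1: 0}
job_free = {j: 0 for j in jobs}
next_op = {j: 0 for j in jobs}
schedule = []
while any(k < len(jobs[j]) for j, k in next_op.items()):
    ready = [(j, next_op[j]) for j in jobs if next_op[j] < len(jobs[j])]
    j, k = max(ready, key=lambda r: priority(
        r[0], r[1], jobs[r[0]][r[1]][1],
        machine_free[jobs[r[0]][r[1]][0]], job_free[r[0]]))
    m, d = jobs[j][k]
    start = max(machine_free[m], job_free[j])
    machine_free[m] = job_free[j] = start + d
    schedule.append((j, k, m, start, start + d))
    next_op[j] += 1
print(schedule)
```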
arXiv Detail & Related papers (2023-06-09T08:24:56Z) - Reinforcement Learning with Success Induced Task Prioritization [68.8204255655161]
We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning.
The algorithm selects the order of tasks that provide the fastest learning for agents.
We demonstrate that SITP matches or surpasses the results of other curriculum design methods.
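A minimal sketch of success-based task prioritization, assuming a made-up set of tasks and outcomes; the selection rule below (pick the task whose recent success rate is improving fastest) is an illustrative stand-in for SITP's criterion.

```python
# Illustrative curriculum selection in the spirit of success-based prioritization:
# favor the task whose recent success rate is improving fastest.
from collections import defaultdict, deque

history = defaultdict(lambda: deque(maxlen=20))   # task -> recent success flags

def record(task, success):
    history[task].append(1.0 if success else 0.0)

def next_task(tasks):
    def progress(task):
        h = list(history[task])
        if len(h) < 4:
            return float("inf")                   # explore tasks with little data first
        first, second = h[:len(h) // 2], h[len(h) // 2:]
        return sum(second) / len(second) - sum(first) / len(first)
    return max(tasks, key=progress)

# Toy usage with fabricated outcomes.
tasks = ["push", "pick", "stack"]
for task, outcome in [("push", 1), ("push", 1), ("pick", 0), ("stack", 0), ("stack", 1)]:
    record(task, outcome)
print(next_task(tasks))
```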
arXiv Detail & Related papers (2022-12-30T12:32:43Z) - Reinforcement Learning Based Query Vertex Ordering Model for Subgraph
Matching [58.39970828272366]
Subgraph matching algorithms enumerate all embeddings of a query graph in a data graph G.
The matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms.
In this paper, we apply Reinforcement Learning (RL) and Graph Neural Network (GNN) techniques for the first time to generate high-quality matching orders for subgraph matching algorithms.
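For readers unfamiliar with why the matching order matters: a backtracking matcher maps query vertices one at a time in a fixed order, and that order controls how early infeasible branches are pruned. The toy matcher below takes the order as an argument over made-up graphs; the paper's contribution is learning that order with RL and GNNs.

```python
# Minimal backtracking subgraph matcher that follows a given matching order;
# the referenced paper learns this order, here it is passed in by hand.
def embeddings(query_edges, data_edges, order):
    q_adj, d_adj = {}, {}
    for adj, edges in ((q_adj, query_edges), (d_adj, data_edges)):
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    results, mapping = [], {}

    def backtrack(i):
        if i == len(order):
            results.append(dict(mapping))
            return
        u = order[i]
        for cand in d_adj:
            if cand in mapping.values():
                continue
            # Every already-mapped query neighbor of u must map to a data neighbor.
            if all(mapping[n] in d_adj[cand] for n in q_adj.get(u, ()) if n in mapping):
                mapping[u] = cand
                backtrack(i + 1)
                del mapping[u]

    backtrack(0)
    return results

# Toy usage: enumerate triangle embeddings in a small data graph.
query = [("x", "y"), ("y", "z"), ("z", "x")]
data = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(embeddings(query, data, order=["x", "y", "z"]))
```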
arXiv Detail & Related papers (2022-01-25T00:10:03Z) - GCNScheduler: Scheduling Distributed Computing Applications using Graph
Convolutional Networks [12.284934135116515]
We propose a graph convolutional network-based scheduler (GCNScheduler)
By carefully integrating an inter-task data dependency structure with network settings into an input graph, the GCNScheduler can efficiently schedule tasks for a given objective.
We show that it achieves a better makespan than the classic HEFT algorithm, and almost the same throughput as throughput-oriented HEFT.
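As a purely illustrative sketch (untrained, with made-up features, and not GCNScheduler's actual architecture), one round of normalized neighborhood aggregation over a task graph followed by a per-machine linear readout could look like this:

```python
# Toy, untrained sketch of scoring task-to-machine assignments with one round of
# neighborhood aggregation over a task graph (illustrative, not GCNScheduler itself).
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_machines, feat = 5, 3, 4
edges = [(0, 2), (1, 2), (2, 3), (2, 4)]                # hypothetical dependencies
x = rng.normal(size=(n_tasks, feat))                    # per-task features (e.g. cost, data size)

# Symmetrically normalized adjacency with self-loops, as in a basic GCN layer.
a = np.eye(n_tasks)
for u, v in edges:
    a[u, v] = a[v, u] = 1.0
d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
a_hat = d_inv_sqrt @ a @ d_inv_sqrt

w1 = rng.normal(size=(feat, 8))                         # untrained weights
w2 = rng.normal(size=(8, n_machines))
h = np.maximum(a_hat @ x @ w1, 0.0)                     # aggregate neighbors, ReLU
scores = h @ w2                                         # task x machine logits
assignment = scores.argmax(axis=1)
print("task -> machine:", dict(enumerate(assignment.tolist())))
```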
arXiv Detail & Related papers (2021-10-22T01:54:10Z) - Better than the Best: Gradient-based Improper Reinforcement Learning for
Network Scheduling [60.48359567964899]
We consider the problem of scheduling in constrained queueing networks with a view to minimizing packet delay.
We use a policy gradient based reinforcement learning algorithm that produces a scheduler that performs better than the available atomic policies.
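A rough reading of the "improper" idea, sketched with a stubbed environment and fabricated reward statistics: learn softmax mixture weights over the fixed atomic schedulers with a REINFORCE-style update, rather than learning a raw policy.

```python
# Bare-bones sketch: learn softmax mixture weights over fixed "atomic" schedulers
# with a REINFORCE-style update (environment and base policies are stand-ins).
import math
import random

def softmax(w):
    m = max(w)
    e = [math.exp(v - m) for v in w]
    s = sum(e)
    return [v / s for v in e]

def episode_reward(policy_index):
    # Stand-in for running the chosen atomic scheduler in the queueing network:
    # pretend policy 1 yields the lowest delay (highest reward) on average.
    return random.gauss([0.2, 1.0, 0.5][policy_index], 0.1)

weights, lr, baseline = [0.0, 0.0, 0.0], 0.05, 0.0
for step in range(2000):
    probs = softmax(weights)
    k = random.choices(range(len(weights)), probs)[0]   # sample an atomic policy
    r = episode_reward(k)
    baseline += 0.01 * (r - baseline)                   # running reward baseline
    for i in range(len(weights)):                       # gradient of log softmax prob
        grad = (1.0 if i == k else 0.0) - probs[i]
        weights[i] += lr * (r - baseline) * grad
print("mixture weights:", softmax(weights))
```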
arXiv Detail & Related papers (2021-05-01T10:18:34Z) - A Two-stage Framework and Reinforcement Learning-based Optimization
Algorithms for Complex Scheduling Problems [54.61091936472494]
We develop a two-stage framework in which reinforcement learning (RL) and traditional operations research (OR) algorithms are combined.
The scheduling problem is solved in two stages: a finite Markov decision process (MDP) followed by a mixed-integer programming step.
Results show that the proposed algorithms could stably and efficiently obtain satisfactory scheduling schemes for agile Earth observation satellite scheduling problems.
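A very loose sketch of such a two-stage decomposition on made-up data: a greedy sequential stage (where the RL policy would sit) picks which tasks to accept, and an exact brute-force stage (standing in for the mixed-integer program) orders them.

```python
# Very rough sketch of a two-stage decomposition: a lightweight sequential stage
# picks which toy tasks to accept, and an exact second stage orders the accepted
# tasks by brute force (standing in for the MIP step).
from itertools import permutations

tasks = {"t1": (3, 10), "t2": (2, 6), "t3": (4, 9), "t4": (1, 3)}   # duration, reward
budget = 6

# Stage 1: greedy sequential acceptance (where an RL policy would decide).
accepted, used = [], 0
for name, (dur, rew) in sorted(tasks.items(), key=lambda kv: -kv[1][1] / kv[1][0]):
    if used + dur <= budget:
        accepted.append(name)
        used += dur

# Stage 2: exact ordering of the accepted tasks, minimizing total completion time.
def total_completion(order):
    t, total = 0, 0
    for name in order:
        t += tasks[name][0]
        total += t
    return total

best = min(permutations(accepted), key=total_completion)
print("accepted:", accepted, "best order:", best)
```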
arXiv Detail & Related papers (2021-03-10T03:16:12Z) - Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
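For context on the search itself: integer least-squares seeks the integer vector x minimizing ||y - Hx||^2, and a best-first tree search fixes one coordinate per level, ranking frontier nodes by accumulated cost plus a heuristic; that heuristic is what the paper estimates with a deep network. The toy version below uses a zero heuristic and a small integer alphabet.

```python
# Toy best-first (A*-like) search for the integer least-squares problem
# min_x ||y - Hx||^2 over a small integer alphabet, with a zero heuristic;
# the referenced method replaces the heuristic with a learned neural estimate.
import heapq
import numpy as np

def ils_best_first(H, y, alphabet=(-2, -1, 0, 1, 2)):
    q, r = np.linalg.qr(H)                      # ||y - Hx||^2 = ||z - Rx||^2 + const
    z = q.T @ y
    n = H.shape[1]
    heap = [(0.0, ())]                          # (partial cost, fixed tail of x)
    while heap:
        cost, tail = heapq.heappop(heap)
        if len(tail) == n:
            return np.array(tail[::-1]), cost   # tail stores x_n, ..., x_1
        i = n - 1 - len(tail)                   # next coordinate to fix (bottom-up)
        for v in alphabet:
            x_part = (v,) + tuple(reversed(tail))          # x_i, ..., x_n
            resid = z[i] - r[i, i:] @ np.array(x_part)
            heapq.heappush(heap, (cost + float(resid) ** 2, tail + (v,)))
    return None, None

# Toy usage with a random system whose true solution lies in the alphabet.
rng = np.random.default_rng(1)
H = rng.normal(size=(6, 4))
x_true = np.array([1, -2, 0, 2])
y = H @ x_true + 0.01 * rng.normal(size=6)
x_hat, cost = ils_best_first(H, y)
print("estimate:", x_hat, "true:", x_true)
```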
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Geometric Deep Reinforcement Learning for Dynamic DAG Scheduling [8.14784681248878]
In this paper, we propose a reinforcement learning approach to solve a realistic scheduling problem.
We apply it to an algorithm commonly executed in the high performance computing community, the Cholesky factorization.
Our algorithm uses graph neural networks in combination with an actor-critic algorithm (A2C) to build an adaptive representation of the problem on the fly.
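For readers unfamiliar with the workload: a tiled Cholesky factorization unfolds into a DAG of POTRF/TRSM/SYRK/GEMM tile tasks, and that graph is what the scheduler sees. The snippet below only enumerates those tasks and their data dependencies (no numerics), as one plausible way to build such a graph.

```python
# Sketch: enumerate the task DAG of a tiled Cholesky factorization (task names
# and tile reads/writes only), the kind of graph a DAG scheduler consumes.
def cholesky_dag(nt):
    last_writer, edges, tasks = {}, set(), []

    def add(name, reads, writes):
        tasks.append(name)
        for tile in reads + writes:
            if tile in last_writer:
                edges.add((last_writer[tile], name))     # dependency on previous writer
        for tile in writes:
            last_writer[tile] = name

    for k in range(nt):
        add(f"POTRF({k})", reads=[], writes=[(k, k)])
        for i in range(k + 1, nt):
            add(f"TRSM({i},{k})", reads=[(k, k)], writes=[(i, k)])
        for i in range(k + 1, nt):
            add(f"SYRK({i},{k})", reads=[(i, k)], writes=[(i, i)])
            for j in range(k + 1, i):
                add(f"GEMM({i},{j},{k})", reads=[(i, k), (j, k)], writes=[(i, j)])
    return tasks, edges

tasks, edges = cholesky_dag(3)
print(len(tasks), "tasks,", len(edges), "edges")
for e in sorted(edges):
    print(*e)
```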
arXiv Detail & Related papers (2020-11-09T10:57:21Z)