Edge Generation Scheduling for DAG Tasks Using Deep Reinforcement
Learning
- URL: http://arxiv.org/abs/2308.14647v2
- Date: Thu, 11 Jan 2024 00:20:15 GMT
- Title: Edge Generation Scheduling for DAG Tasks Using Deep Reinforcement
Learning
- Authors: Binqi Sun, Mirco Theile, Ziyuan Qin, Daniele Bernardini, Debayan Roy,
Andrea Bastoni, and Marco Caccamo
- Abstract summary: Directed acyclic graph (DAG) tasks are currently adopted in the real-time domain to model complex applications.
We propose a new DAG scheduling framework that attempts to minimize the DAG width by iteratively generating edges.
We evaluate the effectiveness of the proposed algorithm by comparing it with state-of-the-art DAG scheduling heuristics and an optimal mixed-integer linear programming baseline.
- Score: 2.365237699556817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Directed acyclic graph (DAG) tasks are currently adopted in the real-time
domain to model complex applications from the automotive, avionics, and
industrial domains that implement their functionalities through chains of
intercommunicating tasks. This paper studies the problem of scheduling
real-time DAG tasks by presenting a novel schedulability test based on the
concept of trivial schedulability. Using this schedulability test, we propose a
new DAG scheduling framework (edge generation scheduling -- EGS) that attempts
to minimize the DAG width by iteratively generating edges while guaranteeing
the deadline constraint. We study how to efficiently solve the problem of
generating edges by developing a deep reinforcement learning algorithm combined
with a graph representation neural network to learn an efficient edge
generation policy for EGS. We evaluate the effectiveness of the proposed
algorithm by comparing it with state-of-the-art DAG scheduling heuristics and
an optimal mixed-integer linear programming baseline. Experimental results show
that the proposed algorithm outperforms the state-of-the-art by requiring fewer
processors to schedule the same DAG tasks. The code is available at
https://github.com/binqi-sun/egs.
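To make the edge-generation idea concrete, the following is a minimal, hypothetical Python sketch: it greedily adds edges between independent DAG nodes as long as the WCET-weighted critical path stays within the deadline, thereby shrinking the DAG width (and thus the number of processors needed). The greedy choice merely stands in for the DRL/GNN policy learned by EGS; node names, WCET values, and the deadline are illustrative and are not taken from the paper or the linked repository.

```python
# Illustrative sketch of the edge-generation idea (not the authors' code):
# serialize independent nodes of a DAG task by adding edges, as long as the
# WCET-weighted critical path stays within the deadline. Fewer independent
# nodes means a smaller DAG width, hence fewer processors.

from itertools import combinations

def topo_order(succ):
    """Kahn's algorithm; succ maps node -> set of successor nodes."""
    indeg = {v: 0 for v in succ}
    for v in succ:
        for w in succ[v]:
            indeg[w] += 1
    ready = [v for v in succ if indeg[v] == 0]
    order = []
    while ready:
        v = ready.pop()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return order

def critical_path(succ, wcet):
    """Length of the longest WCET-weighted path through the DAG."""
    longest = {}
    for v in reversed(topo_order(succ)):
        longest[v] = wcet[v] + max((longest[w] for w in succ[v]), default=0)
    return max(longest.values())

def reachable(succ, src):
    """All nodes reachable from src (used to avoid creating cycles)."""
    seen, stack = set(), [src]
    while stack:
        v = stack.pop()
        for w in succ[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def greedy_edge_generation(succ, wcet, deadline):
    """Iteratively add edges while the critical path stays within the deadline."""
    added = []
    changed = True
    while changed:
        changed = False
        for u, v in combinations(succ, 2):
            if v in succ[u] or u in succ[v]:
                continue                    # already directly ordered
            if u in reachable(succ, v):
                u, v = v, u                 # orient the new edge to stay acyclic
            succ[u].add(v)                  # tentatively add u -> v
            if critical_path(succ, wcet) <= deadline:
                added.append((u, v))
                changed = True
            else:
                succ[u].remove(v)           # would violate the deadline; undo
    return added

if __name__ == "__main__":
    # Toy fork-join DAG: s -> {a, b, c} -> t, with an illustrative deadline of 10.
    succ = {"s": {"a", "b", "c"}, "a": {"t"}, "b": {"t"}, "c": {"t"}, "t": set()}
    wcet = {"s": 1, "a": 3, "b": 2, "c": 2, "t": 1}
    print("added edges:", greedy_edge_generation(succ, wcet, deadline=10))
```

On this toy instance the greedy pass serializes the three parallel nodes into a single chain while keeping the critical path under the deadline, so the task fits on one processor; EGS replaces the greedy edge choice with a learned policy over a graph neural network representation of the DAG.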
Related papers
- $ψ$DAG: Projected Stochastic Approximation Iteration for DAG Structure Learning [6.612096312467342]
Learning the structure of Directed Acyclic Graphs (DAGs) presents a significant challenge due to the vast search space of possible graphs, which scales with the number of nodes.
Recent advances have reformulated this problem as a continuous optimization task by incorporating differentiable acyclicity constraints (a generic form of such a constraint is sketched after this list).
We present a novel framework for learning DAGs, employing a Stochastic Approximation approach integrated with Stochastic Gradient Descent (SGD)-based optimization techniques.
arXiv Detail & Related papers (2024-10-31T12:13:11Z) - TS-EoH: An Edge Server Task Scheduling Algorithm Based on Evolution of Heuristic [0.6827423171182154]
This paper introduces a novel task-scheduling approach based on EC theory and evolutionary algorithms.
Experimental results show that our task-scheduling algorithm outperforms existing and traditional reinforcement learning methods.
arXiv Detail & Related papers (2024-09-04T10:00:32Z) - Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z) - GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for
DAG Task Scheduling over Dynamic Vehicular Clouds [35.418964557667096]
We propose a graph neural network-augmented deep reinforcement learning scheme (GA-DRL) for scheduling DAG tasks over dynamic VCs.
GA-DRL outperforms existing benchmarks in terms of DAG task completion time.
arXiv Detail & Related papers (2023-07-03T06:41:15Z) - Reinforcement Learning Based Query Vertex Ordering Model for Subgraph
Matching [58.39970828272366]
Subgraph matching algorithms enumerate all isomorphic embeddings of a query graph in a data graph G.
The matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms.
In this paper, for the first time we apply Reinforcement Learning (RL) and Graph Neural Network (GNN) techniques to generate high-quality matching orders for subgraph matching algorithms.
arXiv Detail & Related papers (2022-01-25T00:10:03Z) - A Scalable Deep Reinforcement Learning Model for Online Scheduling
Coflows of Multi-Stage Jobs for High Performance Computing [9.866286878494979]
In multi-stage jobs, each job consists of multiple coflows and is represented by a Directed Acyclic Graph (DAG).
In this paper, we propose a novel Pipelined-DAGNN to process the input and a novel coflow scheduling algorithm.
arXiv Detail & Related papers (2021-12-21T09:36:55Z) - Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale stochastic optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a global optimum under mild assumptions on the delays.
arXiv Detail & Related papers (2021-07-06T21:59:49Z) - Better than the Best: Gradient-based Improper Reinforcement Learning for
Network Scheduling [60.48359567964899]
We consider the problem of scheduling in constrained queueing networks with a view to minimizing packet delay.
We use a policy gradient based reinforcement learning algorithm that produces a scheduler that performs better than the available atomic policies.
arXiv Detail & Related papers (2021-05-01T10:18:34Z) - Learning to Schedule DAG Tasks [7.577417675452624]
We present a novel learning-based approach to scheduling directed acyclic graphs (DAGs).
The algorithm employs a reinforcement learning agent to iteratively add directed edges to the DAG.
Our approach can be easily applied to any existing scheduling algorithm.
arXiv Detail & Related papers (2021-03-05T01:10:24Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Efficient and Stable Graph Scattering Transforms via Pruning [86.76336979318681]
Graph scattering transforms (GSTs) offer training-free deep GCN models that extract features from graph data.
The price paid by GSTs is exponential complexity in space and time that increases with the number of layers.
The present work addresses the complexity limitation of GSTs by introducing an efficient so-termed pruned (p) GST approach.
arXiv Detail & Related papers (2020-01-27T16:05:56Z)
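For reference, the differentiable acyclicity constraint mentioned in the $ψ$DAG entry above is commonly instantiated (e.g., in NOTEARS-style structure learning) as h(W) = tr(exp(W ∘ W)) − d, which is zero exactly when the weighted adjacency matrix W encodes a DAG. The sketch below is a generic illustration of that constraint, not necessarily the exact formulation used in $ψ$DAG.

```python
# Generic NOTEARS-style acyclicity measure: h(W) = tr(exp(W * W)) - d.
# h(W) == 0 iff the weighted adjacency matrix W has no directed cycles.

import numpy as np
from scipy.linalg import expm

def acyclicity(W: np.ndarray) -> float:
    """Return tr(exp(W ∘ W)) - d; zero exactly when W is a DAG adjacency matrix."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

if __name__ == "__main__":
    dag = np.array([[0.0, 1.0], [0.0, 0.0]])   # only edge 0 -> 1: acyclic
    cyc = np.array([[0.0, 1.0], [1.0, 0.0]])   # edges 0 -> 1 and 1 -> 0: a cycle
    print(acyclicity(dag))   # ~0.0
    print(acyclicity(cyc))   # > 0.0
```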
This list is automatically generated from the titles and abstracts of the papers in this site.