GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for
DAG Task Scheduling over Dynamic Vehicular Clouds
- URL: http://arxiv.org/abs/2307.00777v1
- Date: Mon, 3 Jul 2023 06:41:15 GMT
- Title: GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for
DAG Task Scheduling over Dynamic Vehicular Clouds
- Authors: Zhang Liu and Lianfen Huang and Zhibin Gao and Manman Luo and
Seyyedali Hosseinalipour and Huaiyu Dai
- Abstract summary: We propose a graph neural network-augmented deep reinforcement learning scheme (GA-DRL) for scheduling DAG tasks over dynamic VCs.
GA-DRL outperforms existing benchmarks in terms of DAG task completion time.
- Score: 35.418964557667096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vehicular clouds (VCs) are modern platforms for processing
computation-intensive tasks over vehicles. Such tasks are often represented as
directed acyclic graphs (DAGs) consisting of interdependent vertices/subtasks
and directed edges. In this paper, we propose a graph neural network-augmented
deep reinforcement learning scheme (GA-DRL) for scheduling DAG tasks over
dynamic VCs. In doing so, we first model the VC-assisted DAG task scheduling as
a Markov decision process. We then adopt a multi-head graph attention network
(GAT) to extract the features of DAG subtasks. Our developed GAT enables a
two-way aggregation of the topological information in a DAG task by
simultaneously considering predecessors and successors of each subtask. We
further introduce non-uniform DAG neighborhood sampling through codifying the
scheduling priority of different subtasks, which makes our developed GAT
generalizable to completely unseen DAG task topologies. Finally, we augment GAT
into a double deep Q-network learning module to conduct subtask-to-vehicle
assignment according to the extracted features of subtasks, while considering
the dynamics and heterogeneity of the vehicles in VCs. Through simulating
various DAG tasks under real-world movement traces of vehicles, we demonstrate
that GA-DRL outperforms existing benchmarks in terms of DAG task completion
time.
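The two-way aggregation described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the learned multi-head attention weights of the actual GAT are replaced by uniform averaging, and the toy DAG and scalar subtask features below are invented for demonstration.

```python
# Minimal sketch of two-way (predecessor + successor) neighborhood
# aggregation for DAG subtask features. Uniform averaging stands in for
# the paper's learned multi-head attention; the DAG and features are
# illustrative assumptions.

def aggregate_two_way(dag, features):
    """For each subtask, combine its own feature with the mean feature
    of its predecessors and the mean feature of its successors."""
    preds = {v: [] for v in dag}
    for u, succs in dag.items():
        for v in succs:
            preds[v].append(u)

    def mean(nodes):
        return sum(features[n] for n in nodes) / len(nodes) if nodes else 0.0

    return {
        v: (features[v] + mean(preds[v]) + mean(dag[v])) / 3.0
        for v in dag
    }

# A toy DAG task 0 -> {1, 2} -> 3 with scalar subtask features
# (e.g. normalized computation workloads).
dag = {0: [1, 2], 1: [3], 2: [3], 3: []}
features = {0: 1.0, 1: 2.0, 2: 4.0, 3: 3.0}
embedded = aggregate_two_way(dag, features)
```

In the full scheme, such embeddings would then be fed to the double deep Q-network, which scores subtask-to-vehicle assignments.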
Related papers
- LayerDAG: A Layerwise Autoregressive Diffusion Model for Directed Acyclic Graph Generation [17.94316378710172]
This paper introduces LayerDAG, an autoregressive diffusion model, to generate realistic directed acyclic graphs (DAGs)
By interpreting the partial order of nodes as a sequence of bipartite graphs, LayerDAG decouples the strong node dependencies into manageable units that can be processed sequentially.
Experiments on both synthetic and real-world flow graphs from various computing platforms show that LayerDAG generates valid DAGs with superior statistical properties and benchmarking performance.
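The layerwise reading of a DAG that LayerDAG relies on can be illustrated with a standard longest-path layering: every edge then crosses from an earlier layer to a later one, so consecutive layers form bipartite graphs. This is a generic sketch of that decomposition, not LayerDAG's model; the example DAG is invented.

```python
# Generic longest-path layering of a DAG: each node's layer is the
# length of the longest path reaching it from a source, so all edges
# point from earlier layers to later ones (a sequence of bipartite
# graphs). Example DAG is illustrative only.

def dag_layers(dag):
    """Group nodes into layers processable one after another."""
    indeg = {v: 0 for v in dag}
    for u in dag:
        for v in dag[u]:
            indeg[v] += 1
    layer = {v: 0 for v in dag}
    frontier = [v for v in dag if indeg[v] == 0]  # source nodes
    while frontier:
        u = frontier.pop()
        for v in dag[u]:
            layer[v] = max(layer[v], layer[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                frontier.append(v)
    groups = {}
    for v, l in layer.items():
        groups.setdefault(l, []).append(v)
    return [sorted(groups[l]) for l in sorted(groups)]

layers = dag_layers({0: [1, 2], 1: [3], 2: [3], 3: []})
# layers == [[0], [1, 2], [3]]
```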
arXiv Detail & Related papers (2024-11-04T17:47:15Z)
- Learning Topological Representations with Bidirectional Graph Attention Network for Solving Job Shop Scheduling Problem [27.904195034688257]
Existing learning-based methods for solving job shop scheduling problems (JSSP) usually use off-the-shelf GNN models tailored to undirected graphs, neglecting the rich and meaningful topological structures of disjunctive graphs (DGs).
This paper proposes the topology-aware bidirectional graph attention network (TBGAT) to embed the DG for solving JSSP in a local search framework.
arXiv Detail & Related papers (2024-02-27T15:33:20Z)
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training which injects the task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z)
- Edge Generation Scheduling for DAG Tasks Using Deep Reinforcement Learning [2.365237699556817]
Directed acyclic graph (DAG) tasks are currently adopted in the real-time domain to model complex applications.
We propose a new DAG scheduling framework that attempts to minimize the DAG width by iteratively generating edges.
We evaluate the effectiveness of the proposed algorithm by comparing it with state-of-the-art DAG scheduling methods and an optimal mixed-integer linear programming baseline.
arXiv Detail & Related papers (2023-08-28T15:19:18Z)
- Continual Object Detection via Prototypical Task Correlation Guided Gating Mechanism [120.1998866178014]
We present a flexible framework for continual object detection via pRotOtypical taSk corrElaTion guided gaTing mechAnism (ROSETTA).
Concretely, a unified framework is shared by all tasks while task-aware gates are introduced to automatically select sub-models for specific tasks.
Experiments on COCO-VOC, KITTI-Kitchen, class-incremental detection on VOC and sequential learning of four tasks show that ROSETTA yields state-of-the-art performance.
arXiv Detail & Related papers (2022-05-06T07:31:28Z)
- DG-Labeler and DGL-MOTS Dataset: Boost the Autonomous Driving Perception [15.988493804970092]
We introduce the DG-Labeler and DGL-MOTS dataset to facilitate the training data annotation for the MOTS task.
Results on extensive cross-dataset evaluations indicate significant performance improvements for several state-of-the-art methods trained on our dataset.
arXiv Detail & Related papers (2021-10-15T01:04:31Z)
- Multi-task Over-the-Air Federated Learning: A Non-Orthogonal Transmission Approach [52.85647632037537]
We propose a multi-task over-the-air federated learning (MOAFL) framework, where multiple learning tasks share edge devices for data collection and learning models under the coordination of an edge server (ES).
Both the convergence analysis and numerical results demonstrate that the MOAFL framework can significantly reduce the uplink bandwidth consumption of multiple tasks without causing substantial learning performance degradation.
arXiv Detail & Related papers (2021-06-27T13:09:32Z)
- Gradient Coding with Dynamic Clustering for Straggler-Tolerant Distributed Learning [55.052517095437]
Gradient descent (GD) is widely employed to parallelize the learning task by distributing the dataset across multiple workers.
A significant performance bottleneck for the per-iteration completion time in distributed synchronous GD is straggling workers.
Coded distributed techniques have been introduced recently to mitigate stragglers and to speed up GD iterations by assigning redundant computations to workers.
We propose a novel dynamic gradient coding (GC) scheme, which assigns redundant data to workers to gain the flexibility to choose among a set of possible codes depending on past straggling behavior.
arXiv Detail & Related papers (2021-03-01T18:51:29Z)
- A Feedback Scheme to Reorder a Multi-Agent Execution Schedule by Persistently Optimizing a Switchable Action Dependency Graph [65.70656676650391]
We consider multiple Automated Guided Vehicles (AGVs) navigating a common workspace to fulfill various intralogistics tasks.
One approach is to construct an Action Dependency Graph (ADG) which encodes the ordering of AGVs as they proceed along their routes.
If the workspace is shared by dynamic obstacles such as humans or third party robots, AGVs can experience large delays.
We present an online method that repeatedly modifies the acyclic ADG to minimize the route completion time of each AGV.
arXiv Detail & Related papers (2020-10-11T14:39:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.