DeepSoCS: A Neural Scheduler for Heterogeneous System-on-Chip (SoC) Resource Scheduling
- URL: http://arxiv.org/abs/2005.07666v2
- Date: Fri, 5 Jun 2020 09:53:01 GMT
- Title: DeepSoCS: A Neural Scheduler for Heterogeneous System-on-Chip (SoC) Resource Scheduling
- Authors: Tegg Taekyong Sung, Jeongsoo Ha, Jeewoo Kim, Alex Yahja, Chae-Bong
Sohn, Bo Ryu
- Abstract summary: We present a novel scheduling solution for a class of System-on-Chip (SoC) systems.
Our Deep Reinforcement Learning (DRL)-based Scheduler (DeepSoCS) overcomes the brittleness of rule-based schedulers.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a novel scheduling solution for a class of
System-on-Chip (SoC) systems where heterogeneous chip resources (DSP, FPGA,
GPU, etc.) must be efficiently scheduled for continuously arriving hierarchical
jobs with their tasks represented by a directed acyclic graph. Traditionally,
heuristic algorithms have been widely used for many resource scheduling
domains, and Heterogeneous Earliest Finish Time (HEFT) has been a dominating
state-of-the-art technique across a broad range of heterogeneous resource
scheduling domains over many years. Despite their long-standing popularity,
HEFT-like algorithms are known to be vulnerable to a small amount of noise
added to the environment. Our Deep Reinforcement Learning (DRL)-based SoC
Scheduler (DeepSoCS), capable of learning the "best" task ordering under
dynamic environment changes, overcomes the brittleness of rule-based schedulers
such as HEFT with significantly higher performance across different types of
jobs. We describe a DeepSoCS design process using a real-time heterogeneous SoC
scheduling emulator, discuss major challenges, and present two novel neural
network design features that lead to outperforming HEFT: (i) hierarchical job-
and task-graph embedding; and (ii) efficient use of real-time task information
in the state space. Furthermore, we introduce effective techniques to address
two fundamental challenges present in our environment: delayed consequences and
joint actions. Through an extensive simulation study, we show that DeepSoCS
achieves significantly shorter job execution times than HEFT, with a higher
level of robustness under realistic noise conditions.
We conclude with a discussion of the potential improvements for our DeepSoCS
neural scheduler.
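For readers unfamiliar with the baseline, the sketch below illustrates HEFT's standard upward-rank prioritization on a toy task DAG. The graph, execution costs, and communication costs are invented for illustration and are not taken from the paper's SoC emulator.
```python
# Minimal sketch of HEFT's upward-rank prioritization (illustrative only).
# avg_cost: mean execution time of each task across heterogeneous PEs;
# comm: average inter-task communication cost; both are made-up numbers.
from functools import lru_cache

dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # task -> successors
avg_cost = {"A": 10, "B": 6, "C": 8, "D": 4}
comm = {("A", "B"): 2, ("A", "C"): 3, ("B", "D"): 1, ("C", "D"): 2}

@lru_cache(maxsize=None)
def upward_rank(task: str) -> float:
    """rank_u(t) = avg_cost(t) + max over successors s of comm(t, s) + rank_u(s)."""
    succs = dag[task]
    if not succs:
        return avg_cost[task]
    return avg_cost[task] + max(comm[(task, s)] + upward_rank(s) for s in succs)

# HEFT schedules tasks in decreasing upward-rank order, then greedily maps each
# task to the processor giving the earliest finish time (mapping step omitted).
order = sorted(dag, key=upward_rank, reverse=True)
print(order)  # ['A', 'C', 'B', 'D'] with these costs
```
It is exactly this fixed priority ordering that the abstract describes as brittle: a small perturbation of the cost estimates can reshuffle the ranks and degrade the schedule, which is the failure mode DeepSoCS is designed to avoid.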
Related papers
- Reinforcement Learning for Adaptive Resource Scheduling in Complex System Environments [8.315191578007857]
This study presents a novel scheduling algorithm for computer system performance optimization and adaptive workload management based on Q-learning.
In contrast to static rule-based schedulers, Q-learning, a reinforcement learning algorithm, continuously learns from system state changes, enabling dynamic scheduling and resource optimization.
This research provides a foundation for the integration of AI-driven adaptive scheduling in future large-scale systems, offering a scalable, intelligent solution to enhance system performance, reduce operating costs, and support sustainable energy consumption.
arXiv Detail & Related papers (2024-11-08T05:58:09Z)
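As a hedged illustration of the mechanism the Q-learning entry above describes, the sketch below shows a tabular Q-learning update for a toy scheduling loop; the states, actions, and reward are invented stand-ins, not taken from the paper.
```python
# Toy tabular Q-learning update for a scheduling agent (illustrative only).
import random
from collections import defaultdict

actions = ["run_on_cpu", "run_on_gpu", "defer"]
Q = defaultdict(float)                # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def choose(state):
    if random.random() < epsilon:     # epsilon-greedy exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Q-learning target: r + gamma * max_a' Q(s', a')
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative transition; a real system would encode queue lengths,
# resource utilization, etc. into the state.
update("queue_long", choose("queue_long"), reward=-1.0, next_state="queue_short")
```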
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
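A hedged sketch of the generic Lyapunov "drift-plus-penalty" idea the title above refers to (not the paper's MAD2RL algorithm itself): a virtual queue tracks cumulative violation of a long-term constraint, and each slot trades the penalty against queue growth. The energy constraint, weight V, and option values are illustrative assumptions.
```python
# Generic Lyapunov drift-plus-penalty sketch (illustrative; not MAD2RL).
V = 50.0  # larger V weights the penalty (completion time) more heavily

def choose(options, queue):
    # Pick the option minimizing V * penalty + queue * constraint_cost.
    return min(options, key=lambda o: V * o["completion_time"]
                                      + queue * o["energy_used"])

def update_queue(queue, chosen, energy_budget):
    # Queue grows when the per-slot budget is exceeded, shrinks otherwise.
    return max(queue + chosen["energy_used"] - energy_budget, 0.0)

# Example slot: two hypothetical offloading choices.
options = [{"completion_time": 0.8, "energy_used": 2.0},
           {"completion_time": 1.5, "energy_used": 0.5}]
q = 0.0
best = choose(options, q)
q = update_queue(q, best, energy_budget=1.0)
```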
- Intelligent Hybrid Resource Allocation in MEC-assisted RAN Slicing Network [72.2456220035229]
We aim to maximize the SSR for heterogeneous service demands in the cooperative MEC-assisted RAN slicing system.
We propose a recurrent graph reinforcement learning (RGRL) algorithm to intelligently learn the optimal hybrid RA policy.
arXiv Detail & Related papers (2024-05-02T01:36:13Z)
- Learning Logic Specifications for Policy Guidance in POMDPs: an Inductive Logic Programming Approach [57.788675205519986]
We learn high-quality traces from POMDP executions generated by any solver.
We exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications.
We show that learned specifications expressed in Answer Set Programming (ASP) yield performance superior to neural networks and similar to optimal handcrafted task-specific heuristics, at lower computational cost.
arXiv Detail & Related papers (2024-02-29T15:36:01Z)
- Dynamic Scheduling for Federated Edge Learning with Streaming Data [56.91063444859008]
We consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints.
Due to limited communication resources and latency requirements, only a subset of devices is scheduled to participate in the local training process in each iteration.
arXiv Detail & Related papers (2023-05-02T07:41:16Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
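For context on the MARLIN entry above, the entropy-regularized objective that Soft Actor-Critic maximizes can be sketched as below; the temperature alpha and the toy rollout are assumed values for illustration.
```python
# Entropy-regularized return that SAC maximizes (illustrative sketch).
import math

alpha = 0.2  # entropy temperature (assumed value)

def soft_return(rewards, action_probs, gamma=0.99):
    # Discounted return plus alpha-weighted policy entropy at each step.
    total = 0.0
    for t, (r, probs) in enumerate(zip(rewards, action_probs)):
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        total += (gamma ** t) * (r + alpha * entropy)
    return total

# Toy rollout: two steps, two actions per step.
print(soft_return([1.0, 0.5], [[0.5, 0.5], [0.9, 0.1]]))
```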
- Scheduling Inference Workloads on Distributed Edge Clusters with Reinforcement Learning [11.007816552466952]
This paper focuses on the problem of scheduling inference queries on Deep Neural Networks in edge networks at short timescales.
Through simulations, we analyze several policies under realistic network settings and workloads from a large ISP.
We design ASET, a Reinforcement Learning based scheduling algorithm able to adapt its decisions according to the system conditions.
arXiv Detail & Related papers (2023-01-31T13:23:34Z)
- Deep Reinforcement Learning for System-on-Chip: Myths and Realities [0.0]
We investigate the feasibility of neural schedulers for the domain of System-on-Chip (SoC) resource allocation.
Our novel neural scheduler technique, Eclectic Interaction Matching (EIM), overcomes the above challenges.
arXiv Detail & Related papers (2022-07-29T10:26:38Z)
- Better than the Best: Gradient-based Improper Reinforcement Learning for Network Scheduling [60.48359567964899]
We consider the problem of scheduling in constrained queueing networks with a view to minimizing packet delay.
We use a policy gradient based reinforcement learning algorithm that produces a scheduler that performs better than the available atomic policies.
arXiv Detail & Related papers (2021-05-01T10:18:34Z)
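A hedged sketch of the "improper" idea in the entry above: learn softmax mixing weights over a fixed set of atomic scheduling policies with a REINFORCE-style gradient. The two base policies and their rewards are invented for illustration.
```python
# REINFORCE over a softmax mixture of fixed base schedulers (illustrative).
import math, random

thetas = [0.0, 0.0]                  # one logit per atomic policy

def probs():
    exps = [math.exp(t) for t in thetas]
    z = sum(exps)
    return [e / z for e in exps]

def episode(policy_idx):
    # Stand-in for running the chosen base scheduler; returns a reward
    # such as negative mean packet delay (invented numbers here).
    return random.gauss([-2.0, -1.0][policy_idx], 0.1)

lr = 0.05
for _ in range(500):
    p = probs()
    k = random.choices(range(len(thetas)), weights=p)[0]
    r = episode(k)
    for i in range(len(thetas)):     # grad of log pi(k) wrt theta_i: 1[i=k] - p_i
        thetas[i] += lr * r * ((1.0 if i == k else 0.0) - p[i])

print(probs())  # weight should concentrate on the higher-reward base policy
```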
- Smart Scheduling based on Deep Reinforcement Learning for Cellular Networks [18.04856086228028]
We propose a smart scheduling scheme based on deep reinforcement learning (DRL).
We provide implementation-friendly designs, i.e., a scalable neural network design for the agent and a virtual environment training framework.
We show that the DRL-based smart scheduling outperforms the conventional scheduling method and can be adopted in practical systems.
arXiv Detail & Related papers (2021-03-22T02:09:16Z)
- Learning to Schedule DAG Tasks [7.577417675452624]
We present a novel learning-based approach to scheduling directed acyclic graphs (DAGs).
The algorithm employs a reinforcement learning agent to iteratively add directed edges to the DAG.
Our approach can be easily applied to any existing scheduling algorithm.
arXiv Detail & Related papers (2021-03-05T01:10:24Z)
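As a hedged illustration of the edge-adding idea in the Learning to Schedule DAG Tasks entry above: inserting an extra precedence edge constrains downstream schedulers and changes the DAG's critical path, which lower-bounds any schedule's makespan. The tiny DAG and costs are invented for the example.
```python
# Illustrative sketch: an added precedence edge changes the critical path.
def critical_path(succ, cost):
    memo = {}
    def longest(t):
        if t not in memo:
            memo[t] = cost[t] + max((longest(s) for s in succ[t]), default=0)
        return memo[t]
    return max(longest(t) for t in succ)

succ = {"A": {"B", "C"}, "B": {"D"}, "C": {"D"}, "D": set()}
cost = {"A": 3, "B": 5, "C": 2, "D": 4}

print(critical_path(succ, cost))  # 12, via A -> B -> D
succ["C"].add("B")                # e.g. an agent adds edge C -> B
print(critical_path(succ, cost))  # 14, via A -> C -> B -> D
```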