FIKIT: Priority-Based Real-time GPU Multi-tasking Scheduling with Kernel
Identification
- URL: http://arxiv.org/abs/2311.10359v5
- Date: Thu, 1 Feb 2024 19:36:15 GMT
- Title: FIKIT: Priority-Based Real-time GPU Multi-tasking Scheduling with Kernel
Identification
- Authors: Wenqing Wu
- Abstract summary: In a cloud computing cluster, serving a GPU's computation power through multi-tasks sharing is highly demanded.
Existing GPU sharing solutions focus on reducing task-level waiting time or task-level switching costs when multiple jobs compete for a single GPU.
We present a novel kernel-level scheduling strategy called FIKIT: Filling Inter-Kernel Idle Time.
- Score: 2.9271819018953162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Highly parallelized workloads like machine learning training, inference, and
general HPC tasks are greatly accelerated by GPU devices. In a cloud
computing cluster, serving a GPU's computation power through multi-task
sharing is in high demand, since there are always more task requests than
available GPUs. Existing GPU sharing solutions focus on reducing task-level
waiting time or task-level switching costs when multiple jobs compete for a
single GPU. Incoming computation requests carry different priorities, which
have an asymmetric impact on the QoS of a shared GPU device. Existing work
has missed the kernel-level optimization opportunity this setting brings. To
address this problem, we present a novel kernel-level scheduling strategy
called FIKIT: Filling Inter-Kernel Idle Time. FIKIT incorporates task-level
priority information, fine-grained kernel identification, and kernel
measurement, allowing low-priority tasks to execute during a high-priority
task's inter-kernel idle time. This fills the GPU's device runtime more
fully and reduces the overall impact of GPU sharing on cloud services.
Across a set of ML models, the FIKIT-based inference system accelerated
high-priority tasks by 1.32x to 16.41x relative to their job completion time
(JCT) in GPU sharing mode, and more than half of the cases were accelerated
by more than 3.4x. Meanwhile, under preemptive sharing, low-priority tasks
achieve a JCT comparable to default GPU sharing mode, at a ratio of 0.86 to
1. We further limit the combined overhead of kernel measurement and runtime
fine-grained kernel scheduling to less than 5%.
Related papers
- FLUX: Fast Software-based Communication Overlap On GPUs Through Kernel Fusion [9.5114389643299]
This paper proposes a novel method, Flux, to significantly hide communication latencies with dependent computations for GPUs.
Flux can potentially overlap up to 96% of communication given a fused kernel.
Overall, it can achieve up to 1.24x speedups for training over Megatron-LM on a cluster of 128 GPUs spanning various GPU generations and interconnects.
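Flux's mechanism is kernel fusion, which the summary above does not detail. For contrast, the classic stream-based overlap that fusion improves on (with an async copy standing in for a real collective, and illustrative buffer names) can be sketched as:

```cuda
// Baseline overlap via separate CUDA streams: a transfer proceeds while an
// independent kernel computes. Flux's kernel fusion targets the harder case
// where the communication and the computation depend on each other.
#include <cuda_runtime.h>

__global__ void compute(float *c, const float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] += a[i] * a[i];  // placeholder for a GEMM tile
}

int main() {
    const int n = 1 << 20;
    float *dA, *dB, *dC, *hB;
    cudaMalloc(&dA, n * sizeof(float));
    cudaMalloc(&dB, n * sizeof(float));
    cudaMalloc(&dC, n * sizeof(float));
    cudaMallocHost(&hB, n * sizeof(float));  // pinned, so the copy is truly async

    cudaStream_t comp, comm;
    cudaStreamCreate(&comp);
    cudaStreamCreate(&comm);

    // dB drains over `comm` (a stand-in for communication) while the kernel
    // works on the disjoint buffers dA/dC over `comp`.
    cudaMemcpyAsync(hB, dB, n * sizeof(float), cudaMemcpyDeviceToHost, comm);
    compute<<<(n + 255) / 256, 256, 0, comp>>>(dC, dA, n);
    cudaDeviceSynchronize();

    cudaFreeHost(hB);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    cudaStreamDestroy(comp); cudaStreamDestroy(comm);
    return 0;
}
```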
arXiv Detail & Related papers (2024-06-11T00:17:39Z)
- FusionAI: Decentralized Training and Deploying LLMs with Massive
Consumer-Level GPUs [57.12856172329322]
We envision a decentralized system unlocking the vast untapped potential of consumer-level GPUs.
This system faces critical challenges, including limited CPU and GPU memory, low network bandwidth, and the variability of peer and device heterogeneity.
arXiv Detail & Related papers (2023-09-03T13:27:56Z)
- Partitioning Distributed Compute Jobs with Reinforcement Learning and
Graph Neural Networks [58.720142291102135]
Large-scale machine learning models are bringing advances to a broad range of fields.
Many of these models are too large to be trained on a single machine, and must be distributed across multiple devices.
We show that maximum parallelisation is sub-optimal in relation to user-critical metrics such as throughput and blocking rate.
arXiv Detail & Related papers (2023-01-31T17:41:07Z)
- Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and
Algorithm Co-design [66.39546326221176]
Attention-based neural networks have become pervasive in many AI tasks.
The use of the attention mechanism and feed-forward network (FFN) demands excessive computational and memory resources.
This paper proposes a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs.
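For readers unfamiliar with the pattern: a butterfly matrix of size n = 2^m factors into m sparse stages, each with two nonzeros per row, so applying it costs O(n log n) rather than O(n^2). In standard butterfly-factorization notation (an assumption; the paper's variant may differ):

```latex
% General butterfly factorization (notation assumed, not from the paper):
% B factors into \log_2 n block-diagonal stages, each with 2 nonzeros per
% row, so a matrix-vector product costs O(n \log n) instead of O(n^2).
B = B^{(\log_2 n)} \cdots B^{(2)} B^{(1)}, \qquad
B^{(k)} = \operatorname{diag}\!\big(F^{(k)}_1, \dots, F^{(k)}_{n/2^k}\big),
\qquad
F^{(k)}_j = \begin{bmatrix} D_1 & D_2 \\ D_3 & D_4 \end{bmatrix},
% where each block F^{(k)}_j is 2^k x 2^k and each D_i is diagonal.
```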
arXiv Detail & Related papers (2022-09-20T09:28:26Z)
- PLSSVM: A (multi-)GPGPU-accelerated Least Squares Support Vector Machine [68.8204255655161]
Support Vector Machines (SVMs) are widely used in machine learning.
However, even modern and optimized implementations do not scale well for large non-trivial dense data sets on cutting-edge hardware.
PLSSVM can be used as a drop-in replacement for LIBSVM.
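The least squares formulation is what makes (multi-)GPU acceleration natural: an LS-SVM replaces the SVM's inequality-constrained QP with equality constraints, reducing training to a single dense linear system that iterative solvers handle well on GPUs. In Suykens-style notation (an assumption; not copied from the paper):

```latex
% LS-SVM classification reduces to a single linear KKT system
% (Suykens-style formulation, not taken from the paper), with
% \Omega_{ij} = y_i y_j K(x_i, x_j) and regularization parameter \gamma:
\begin{bmatrix}
  0 & \mathbf{y}^{\top} \\
  \mathbf{y} & \Omega + \gamma^{-1} I
\end{bmatrix}
\begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix}
=
\begin{bmatrix} 0 \\ \mathbf{1} \end{bmatrix}
```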
arXiv Detail & Related papers (2022-02-25T13:24:23Z)
- SMORE: Knowledge Graph Completion and Multi-hop Reasoning in Massive
Knowledge Graphs [147.73127662757335]
We present Scalable Multi-hOp REasoning (SMORE), the first general framework for both single-hop and multi-hop reasoning in Knowledge Graphs (KGs).
Using a single machine, SMORE can perform multi-hop reasoning in the Freebase KG (86M entities, 338M edges), which is 1,500x larger than previously considered KGs.
SMORE increases throughput (i.e., training speed) over prior multi-hop KG frameworks by 2.2x with minimal GPU memory requirements.
arXiv Detail & Related papers (2021-10-28T05:02:33Z)
- Adaptive Elastic Training for Sparse Deep Learning on Heterogeneous
Multi-GPU Servers [65.60007071024629]
We show experimentally that Adaptive SGD outperforms four state-of-the-art solutions in time-to-accuracy.
arXiv Detail & Related papers (2021-10-13T20:58:15Z)
- RTGPU: Real-Time GPU Scheduling of Hard Deadline Parallel Tasks with
Fine-Grain Utilization [5.02836935036198]
We propose RTGPU, which can schedule the execution of multiple GPU applications in real-time to meet hard deadlines.
Our approach provides superior schedulability compared with previous work, and gives real-time guarantees to meet hard deadlines for multiple GPU applications.
arXiv Detail & Related papers (2021-01-25T22:34:06Z)
- Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning [7.43260596107574]
We propose Nimble, a deep learning (DL) execution engine that runs tasks in parallel with minimal scheduling overhead.
Nimble automatically parallelizes the execution of GPU tasks by exploiting multiple GPU streams in a single GPU.
Evaluation on a variety of neural networks shows that, compared to PyTorch, Nimble speeds up inference and training by up to 22.34x and 3.61x, respectively.
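The primitive behind such speedups, running independent GPU tasks on separate CUDA streams with event-based dependencies instead of global synchronization, can be sketched as follows (illustrative only; not Nimble's actual scheduling engine):

```cuda
// Sketch of the multi-stream primitive (not Nimble's engine): two
// independent kernels run concurrently on separate streams, and the
// consumer waits on an event instead of a whole-device synchronization.
#include <cuda_runtime.h>

__global__ void branchA(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}
__global__ void branchB(float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] *= 2.0f;
}
__global__ void join(float *x, const float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    cudaEvent_t bDone;
    cudaEventCreateWithFlags(&bDone, cudaEventDisableTiming);

    const int blocks = (n + 255) / 256;
    branchA<<<blocks, 256, 0, s1>>>(x, n);  // independent branches of the
    branchB<<<blocks, 256, 0, s2>>>(y, n);  // graph execute concurrently
    cudaEventRecord(bDone, s2);

    // join depends on both branches: stream order covers branchA, the
    // event covers branchB; no global sync sits on the critical path.
    cudaStreamWaitEvent(s1, bDone, 0);
    join<<<blocks, 256, 0, s1>>>(x, y, n);

    cudaDeviceSynchronize();  // only to finish cleanly at program exit
    cudaFree(x); cudaFree(y);
    cudaStreamDestroy(s1); cudaStreamDestroy(s2);
    cudaEventDestroy(bDone);
    return 0;
}
```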
arXiv Detail & Related papers (2020-12-04T17:25:46Z)
- GPU-Accelerated Primal Learning for Extremely Fast Large-Scale
Classification [10.66048003460524]
One of the most efficient methods to solve L2-regularized primal problems, such as logistic regression and linear support vector machine (SVM) classification, is the widely used trust region Newton algorithm, TRON.
We show that using judicious GPU-optimization principles, TRON training time for different losses and feature representations may be drastically reduced.
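For context on where the GPU time goes: TRON's inner loop solves a trust-region subproblem by conjugate gradient, and each CG iteration is dominated by a Hessian-vector product built from dense matrix operations, which is precisely what a GPU accelerates. Schematically (standard TRON formulation, not the paper's code):

```latex
% TRON's inner step: minimize a quadratic model of f within trust radius
% \Delta_k, solved by conjugate gradient using only Hessian-vector products.
\min_{\mathbf{s}} \; \nabla f(\mathbf{w}_k)^{\top}\mathbf{s}
  + \tfrac{1}{2}\,\mathbf{s}^{\top}\nabla^{2} f(\mathbf{w}_k)\,\mathbf{s}
\quad \text{s.t.} \quad \lVert \mathbf{s} \rVert \le \Delta_k
% For L2-regularized logistic regression the products reduce to dense
% matrix operations, \nabla^{2} f(\mathbf{w})\,\mathbf{v} =
% \mathbf{v} + C\,X^{\top}\!\big(D\,(X\mathbf{v})\big) with D diagonal,
% i.e. two matrix-vector products per CG iteration: the GPU-friendly part.
```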
arXiv Detail & Related papers (2020-08-08T03:40:27Z)
- Heterogeneous CPU+GPU Stochastic Gradient Descent Algorithms [1.3249453757295084]
We study training algorithms for deep learning on heterogeneous CPU+GPU architectures.
Our two-fold objective -- maximize convergence rate and resource utilization simultaneously -- makes the problem challenging.
We show that the implementation of these algorithms achieves both faster convergence and higher resource utilization on several real datasets.
arXiv Detail & Related papers (2020-04-19T05:21:20Z)