Neural Architecture Design for GPU-Efficient Networks
- URL: http://arxiv.org/abs/2006.14090v4
- Date: Tue, 11 Aug 2020 22:54:26 GMT
- Title: Neural Architecture Design for GPU-Efficient Networks
- Authors: Ming Lin, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li, Rong Jin
- Abstract summary: We propose a general principle for designing GPU-efficient networks based on extensive empirical studies.
Based on the proposed framework, we design a family of GPU-Efficient Networks, or GENets for short.
While achieving $\geq 81.3\%$ top-1 accuracy on ImageNet, GENet is up to $6.4$ times faster than EfficientNet on GPU.
- Score: 27.07089149328155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many mission-critical systems rely on GPUs for inference. They require not
only high recognition accuracy but also low response latency.
Although many studies are devoted to optimizing the structure of deep models
for efficient inference, most of them do not leverage the architecture of
\textbf{modern GPUs} for fast inference, leading to suboptimal performance. To
address this issue, we propose a general principle for designing GPU-efficient
networks based on extensive empirical studies. This design principle enables us
to search for GPU-efficient network structures effectively by a simple and
lightweight method as opposed to most Neural Architecture Search (NAS) methods
that are complicated and computationally expensive. Based on the proposed
framework, we design a family of GPU-Efficient Networks, or GENets for short. We
conducted extensive evaluations on multiple GPU platforms and inference engines.
While achieving $\geq 81.3\%$ top-1 accuracy on ImageNet, GENet is up to $6.4$
times faster than EfficientNet on GPU. It also outperforms most state-of-the-art
models that are more efficient than EfficientNet in high precision regimes. Our
source code and pre-trained models are available from
\url{https://github.com/idstcv/GPU-Efficient-Networks}.
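Since the stated goal is low latency on real GPUs rather than low FLOP counts alone, it helps to see how such latency numbers are typically collected. The snippet below is a minimal sketch, assuming PyTorch with CUDA and a torchvision model as a stand-in candidate network; it is not the authors' benchmarking code.

```python
import time
import torch
import torchvision.models as models

def gpu_latency_ms(model, batch_size=1, resolution=224, warmup=10, iters=50):
    """Average forward-pass latency of `model` on the GPU, in milliseconds."""
    device = torch.device("cuda")
    model = model.eval().to(device)
    x = torch.randn(batch_size, 3, resolution, resolution, device=device)
    with torch.no_grad():
        for _ in range(warmup):      # warm-up so lazy init and autotuning do not skew timing
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()     # wait for all queued kernels before stopping the clock
    return (time.perf_counter() - start) / iters * 1000.0

if __name__ == "__main__":
    print(f"ResNet-50 latency: {gpu_latency_ms(models.resnet50()):.2f} ms")
```

The warm-up iterations and torch.cuda.synchronize() calls matter: GPU kernels launch asynchronously, so timings taken without synchronization are meaningless, which is one reason wall-clock GPU latency and FLOP counts often disagree.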
Related papers
- INR-Arch: A Dataflow Architecture and Compiler for Arbitrary-Order
Gradient Computations in Implicit Neural Representation Processing [66.00729477511219]
Given a function represented as a computation graph, traditional architectures face challenges in efficiently computing its nth-order gradient.
We introduce INR-Arch, a framework that transforms the computation graph of an nth-order gradient into a hardware-optimized dataflow architecture.
We present results that demonstrate 1.8-4.8x and 1.5-3.6x speedup compared to CPU and GPU baselines respectively.
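As background on what an nth-order gradient computation involves, the sketch below repeatedly differentiates a scalar function with ordinary PyTorch autograd; it only illustrates the operation INR-Arch maps to hardware, not its dataflow compiler.

```python
import torch

def nth_order_grad(f, x, n):
    """Differentiate the scalar function f at x a total of n times via repeated autograd."""
    x = x.clone().requires_grad_(True)
    y = f(x)
    for _ in range(n):
        # create_graph=True keeps the graph so the result can be differentiated again
        (y,) = torch.autograd.grad(y, x, create_graph=True)
    return y

# Example: d^3/dx^3 of x^4 is 24x, i.e. 48 at x = 2.
print(nth_order_grad(lambda t: t ** 4, torch.tensor(2.0), 3))
```

Each additional order differentiates through a graph built on top of the previous one, which hints at why general-purpose architectures struggle to do this efficiently.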
arXiv Detail & Related papers (2023-08-11T04:24:39Z) - DistTGL: Distributed Memory-Based Temporal Graph Neural Network Training [18.52206409432894]
DistTGL is an efficient and scalable solution to train memory-based TGNNs on distributed GPU clusters.
In experiments, DistTGL achieves near-linear convergence speedup, outperforming the state-of-the-art single-machine method by 14.5% in accuracy and 10.17x in training throughput.
arXiv Detail & Related papers (2023-07-14T22:52:27Z) - A Study on the Intersection of GPU Utilization and CNN Inference [8.084016058894779]
We show that there is room to improve GPU utilization during convolutional neural network (CNN) inference.
Our study also makes the case that knowledge of GPU utilization can benefit even applications that do not target utilization itself.
arXiv Detail & Related papers (2022-12-15T16:11:40Z) - NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search [55.75621026447599]
We propose NAS-Bench-Graph, a tailored benchmark that supports unified, reproducible, and efficient evaluations for GraphNAS.
Specifically, we construct a unified, expressive yet compact search space, covering 26,206 unique graph neural network (GNN) architectures.
Based on our proposed benchmark, the performance of GNN architectures can be directly obtained by a look-up table without any further computation.
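As a toy illustration of what a tabular benchmark query looks like, the snippet below evaluates architectures by dictionary lookup; the keys, metrics, and values are hypothetical and do not reflect the actual NAS-Bench-Graph data format or API.

```python
# Hypothetical table: architecture spec -> metrics measured once, offline.
bench = {
    ("gcn", "gat", 64, 2):   {"val_acc": 0.713, "latency_ms": 4.2},
    ("sage", "gcn", 128, 3): {"val_acc": 0.726, "latency_ms": 6.8},
}

def evaluate(arch):
    # No training happens here: the score is simply looked up.
    return bench[arch]["val_acc"]

best = max(bench, key=evaluate)
print(best, evaluate(best))
```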
arXiv Detail & Related papers (2022-06-18T10:17:15Z) - Speedup deep learning models on GPU by taking advantage of efficient
unstructured pruning and bit-width reduction [0.0]
This work focuses on pruning convolutional neural networks (CNNs) and improving their efficiency on graphics processing units (GPUs).
The NVIDIA cuDNN library provides the most effective implementations of deep learning (DL) algorithms for GPUs.
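For reference, here is a minimal sketch of magnitude-based unstructured pruning plus a crude bit-width reduction, using PyTorch's built-in pruning utilities; it illustrates the general technique rather than the cuDNN-level implementation studied in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(64, 128, kernel_size=3)

# Unstructured pruning: zero out the 70% of weights with the smallest magnitude.
prune.l1_unstructured(conv, name="weight", amount=0.7)
prune.remove(conv, "weight")  # bake the mask into the weight tensor

sparsity = (conv.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.1%}")

# Crude bit-width reduction: run the pruned layer in half precision on the GPU.
if torch.cuda.is_available():
    x = torch.randn(1, 64, 56, 56, device="cuda", dtype=torch.float16)
    print(conv.half().cuda()(x).dtype)  # torch.float16
```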
arXiv Detail & Related papers (2021-12-28T19:36:41Z) - NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z) - Computational Performance Predictions for Deep Neural Network Training:
A Runtime-Based Approach [1.5857983167543392]
We present a new practical technique to help users make informed and cost-efficient GPU selections.
We make predictions by scaling the execution time of each operation in a training iteration from one GPU to another using either (i) wave scaling, a technique based on a GPU's execution model, or (ii) pre-trained multilayer perceptrons.
We implement our technique in a Python library called Surfer and find that it makes accurate iteration execution time predictions on ResNet-50, Inception v3, the Transformer, GNMT, and DCGAN.
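The core idea of scaling per-operation times from one GPU to another and summing them can be sketched very simply; the operation names, times, and per-op scaling factors below are illustrative assumptions, and the paper's wave scaling is considerably more refined than this flat ratio rule.

```python
# Measured per-operation times (ms) for one training iteration on the source GPU.
op_times_ms = {"conv2d": 3.1, "batch_norm": 0.4, "matmul": 1.7, "optimizer_step": 0.9}

# Naive per-operation scaling factors, e.g. ratios of peak throughput or memory
# bandwidth between the target and source GPU.
scale = {"conv2d": 0.55, "batch_norm": 0.70, "matmul": 0.50, "optimizer_step": 0.80}

predicted_ms = sum(t * scale[op] for op, t in op_times_ms.items())
print(f"predicted iteration time on target GPU: {predicted_ms:.2f} ms")
```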
arXiv Detail & Related papers (2021-01-31T20:17:46Z) - When deep learning models on GPU can be accelerated by taking advantage
of unstructured sparsity [0.0]
This paper focuses on improving the efficiency of sparse convolutional neural network (CNN) layers on graphics processing units (GPUs).
Modern CNN models need megabytes of coefficients and millions of MAC operations to perform convolution.
We show when it is worth using a direct sparse operation to speed up the computation of the convolution layers.
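To make the MAC-count argument concrete, the arithmetic below counts multiply-accumulate operations for one convolution layer and the ideal reduction from unstructured sparsity; the layer shape and sparsity level are arbitrary examples, and real sparse GPU kernels rarely reach this ideal speedup.

```python
def conv_macs(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulate count of a standard convolution layer."""
    return c_in * c_out * k * k * h_out * w_out

dense = conv_macs(c_in=256, c_out=256, k=3, h_out=14, w_out=14)
sparsity = 0.9                       # 90% of the weights are zero
ideal_sparse = int(dense * (1 - sparsity))

print(f"dense MACs: {dense:,}")      # 115,605,504
print(f"ideal sparse MACs: {ideal_sparse:,}")
```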
arXiv Detail & Related papers (2020-11-12T10:13:48Z) - Off-Policy Reinforcement Learning for Efficient and Effective GAN
Architecture Search [50.40004966087121]
We introduce a new reinforcement learning based neural architecture search (NAS) methodology for generative adversarial network (GAN) architecture search.
The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling.
We exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies.
arXiv Detail & Related papers (2020-07-17T18:29:17Z) - DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while the accuracy is currently state-of-the-art, at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z) - DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution
Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under the given constraints.
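A toy version of the sample-then-prune loop described above: one categorical distribution per layer is sampled, nudged by a stand-in reward, and periodically pruned of low-probability operations. The operation names, update rule, and thresholds are illustrative assumptions, not the DDPNAS implementation.

```python
import random

OPS = ["conv3x3", "conv5x5", "skip", "maxpool"]
NUM_LAYERS = 4
# One categorical distribution over candidate operations per layer.
probs = [{op: 1.0 / len(OPS) for op in OPS} for _ in range(NUM_LAYERS)]

def sample_arch():
    return [random.choices(list(p), weights=list(p.values()))[0] for p in probs]

def fake_reward(arch):
    # Stand-in for the validation accuracy of the sampled architecture.
    return sum(1.0 if op == "conv3x3" else 0.2 * random.random() for op in arch)

for epoch in range(30):
    arch = sample_arch()
    r = fake_reward(arch)
    for layer, op in enumerate(arch):
        probs[layer][op] += 0.05 * r                      # reward-weighted update
        total = sum(probs[layer].values())
        probs[layer] = {o: v / total for o, v in probs[layer].items()}
    if (epoch + 1) % 10 == 0:
        # Dynamic pruning: drop operations whose probability fell below a threshold.
        for layer in range(NUM_LAYERS):
            kept = {o: v for o, v in probs[layer].items() if v >= 0.05}
            total = sum(kept.values())
            probs[layer] = {o: v / total for o, v in kept.items()}

print([max(p, key=p.get) for p in probs])  # most likely operation per layer
```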
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.