A Survey on GAN Acceleration Using Memory Compression Technique
- URL: http://arxiv.org/abs/2108.06626v1
- Date: Sat, 14 Aug 2021 23:03:14 GMT
- Title: A Survey on GAN Acceleration Using Memory Compression Technique
- Authors: Dina Tantawy, Mohamed Zahran, Amr Wassal
- Abstract summary: Generative adversarial networks (GANs) have shown outstanding results in many applications.
This paper surveys memory compression techniques for CNN-Based GANs.
- Score: 1.6758573326215689
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since their invention, generative adversarial networks (GANs) have shown
outstanding results in many applications. GANs are powerful yet resource-hungry
deep-learning models. Their main difference from ordinary deep-learning models is
the nature of their output: for example, a GAN can output a whole image, whereas
other models detect objects or classify images. Thus, the architecture and numeric
precision of the network affect both the quality and the speed of the solution.
Hence, accelerating GANs is pivotal. GAN acceleration can be classified into three
main tracks: (1) memory compression, (2) computation optimization, and (3) data-flow
optimization. Because data transfer is the main source of energy usage, memory
compression yields the largest savings. Thus, in this paper, we survey memory
compression techniques for CNN-based GANs. Additionally, the paper summarizes
opportunities and challenges in GAN acceleration and suggests open research
problems to be further investigated.
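Reduced numeric precision is one of the memory compression techniques such a survey covers. As an illustrative sketch (not taken from the paper itself), symmetric 8-bit post-training quantization stores each 32-bit float weight as an 8-bit integer code plus one shared scale, a roughly 4x memory reduction:

```python
def quantize_int8(weights):
    """Symmetric uniform quantization: map float weights to int8 codes.

    Storing 8-bit codes instead of 32-bit floats gives a ~4x memory
    reduction; `scale` is the single float kept for dequantization.
    """
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.50, -1.27, 0.03, 0.98]
codes, scale = quantize_int8(weights)   # codes fit in one byte each
approx = dequantize(codes, scale)       # close to the original weights
```

The quality/size trade-off the abstract alludes to is visible here: fewer bits per code means a smaller model but a larger reconstruction error.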
Related papers
- Active search and coverage using point-cloud reinforcement learning [50.741409008225766]
This paper presents an end-to-end deep reinforcement learning solution for target search and coverage.
We show that deep hierarchical feature learning works for RL and that by using farthest point sampling (FPS) we can reduce the number of points.
We also show that multi-head attention for point clouds helps the agent learn faster but converges to the same outcome.
arXiv Detail & Related papers (2023-12-18T18:16:30Z) - Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - STIP: A SpatioTemporal Information-Preserving and Perception-Augmented Model for High-Resolution Video Prediction [78.129039340528]
We propose a SpatioTemporal Information-Preserving and Perception-Augmented Model (STIP) to solve the above two problems.
The proposed model aims to preserve the spatiotemporal information of videos during feature extraction and state transitions.
Experimental results show that the proposed STIP can predict videos with more satisfactory visual quality compared with a variety of state-of-the-art methods.
arXiv Detail & Related papers (2022-06-09T09:49:04Z) - Machine Learning in NextG Networks via Generative Adversarial Networks [6.045977607688583]
Generative Adversarial Networks (GANs) are Machine Learning (ML) algorithms that have the ability to address competitive resource allocation problems.
We investigate their use in next-generation (NextG) communications within the context of cognitive networks to address i) spectrum sharing, ii) detecting anomalies, and iii) mitigating security attacks.
arXiv Detail & Related papers (2022-03-09T00:15:34Z) - Learning-Driven Lossy Image Compression; A Comprehensive Survey [3.1761172592339375]
This paper surveys recent machine learning (ML)-based techniques for lossy image compression.
We divide all of the algorithms into several groups based on architecture.
Key findings for researchers are emphasized, and possible future research directions are outlined.
arXiv Detail & Related papers (2022-01-23T12:11:31Z) - a novel attention-based network for fast salient object detection [14.246237737452105]
In current salient object detection networks, the most popular approach is the U-shape structure.
We propose a new deep convolution network architecture with three contributions.
Results demonstrate that the proposed method can compress the model to nearly 1/3 of its original size without losing accuracy.
arXiv Detail & Related papers (2021-12-20T12:30:20Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
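In its simplest (XNOR-Net-style) form, the binarization this summary describes reduces each weight to a sign plus one shared scaling factor, so a weight needs 1 bit of storage instead of 32. The helper below is an illustrative simplification, not the paper's GNN-specific scheme:

```python
def binarize(weights):
    """Binarize weights to {-1, +1} with scale alpha = mean(|w|).

    Each weight is stored as a single sign bit; alpha (one float per
    tensor) preserves overall magnitude, so w is approximated by
    alpha * sign(w).
    """
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]
    return signs, alpha
```

The "moderate cost in accuracy" mentioned above is the price of this approximation: only the sign pattern and one magnitude survive per tensor.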
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - A Variational Information Bottleneck Based Method to Compress Sequential Networks for Human Action Recognition [9.414818018857316]
We propose a method to effectively compress Recurrent Neural Networks (RNNs) used for Human Action Recognition (HAR).
We use a Variational Information Bottleneck (VIB) theory-based pruning approach to limit the information flow through the sequential cells of RNNs to a small subset.
We combine our pruning method with a specific group-lasso regularization technique that significantly improves compression.
It is shown that our method achieves over 70 times greater compression than the nearest competitor with comparable accuracy for the task of action recognition on UCF11.
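The VIB approach above prunes by limiting information flow; a much simpler baseline that conveys the same memory-saving idea is magnitude pruning, sketched below (an illustrative stand-in, not the paper's VIB method):

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with smallest |w|.

    The zeroed weights can then be kept in a sparse format, shrinking
    memory roughly in proportion to the sparsity level.
    """
    k = int(len(weights) * sparsity)
    # Indices sorted from smallest to largest magnitude.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = set(order[:k])
    return [0.0 if i in pruned else w for i, w in enumerate(weights)]
```

Structured variants (e.g. the group-lasso regularization mentioned above) prune whole groups of weights at once so that hardware can exploit the resulting regularity.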
arXiv Detail & Related papers (2020-10-03T12:41:51Z) - GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework [94.26938614206689]
We propose the first unified optimization framework combining multiple compression means for GAN compression, dubbed GAN Slimming.
We apply GS to compress CartoonGAN, a state-of-the-art style transfer network, by up to 47 times, with minimal visual quality degradation.
arXiv Detail & Related papers (2020-08-25T14:39:42Z) - AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks [68.58179110398439]
Existing GAN compression algorithms are limited to handling specific GAN architectures and losses.
Inspired by the recent success of AutoML in deep compression, we introduce AutoML to GAN compression and develop an AutoGAN-Distiller framework.
We evaluate AGD in two representative GAN tasks: image translation and super resolution.
arXiv Detail & Related papers (2020-06-15T07:56:24Z) - GAN Compression: Efficient Architectures for Interactive Conditional GANs [45.012173624111185]
Recent Conditional Generative Adversarial Networks (cGANs) are 1-2 orders of magnitude more compute-intensive than modern recognition CNNs.
We propose a general-purpose compression framework for reducing the inference time and model size of the generator in cGANs.
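Such generator-compression frameworks typically distill a small student generator from the original teacher. As a hedged, minimal sketch (a generic simplification, not this framework's actual loss), the core reconstruction term is a teacher-student mean squared error:

```python
def distillation_mse(teacher_out, student_out):
    """Mean squared error between teacher and student generator outputs.

    Minimizing this term pushes the compressed (student) generator to
    reproduce the images of the full-size (teacher) generator.
    """
    assert len(teacher_out) == len(student_out)
    n = len(teacher_out)
    return sum((t - s) ** 2 for t, s in zip(teacher_out, student_out)) / n
```

In practice this term is combined with the usual adversarial loss so the student stays both faithful to the teacher and realistic on its own.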
arXiv Detail & Related papers (2020-03-19T17:59:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.