GAN Slimming: All-in-One GAN Compression by A Unified Optimization
Framework
- URL: http://arxiv.org/abs/2008.11062v1
- Date: Tue, 25 Aug 2020 14:39:42 GMT
- Authors: Haotao Wang, Shupeng Gui, Haichuan Yang, Ji Liu, Zhangyang Wang
- Abstract summary: We propose the first unified optimization framework combining multiple compression means for GAN compression, dubbed GAN Slimming.
We apply GS to compress CartoonGAN, a state-of-the-art style transfer network, by up to 47 times, with minimal visual quality degradation.
- Score: 94.26938614206689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have gained increasing popularity in
various computer vision applications, and have recently started to be deployed on
resource-constrained mobile devices. Similar to other deep models,
state-of-the-art GANs suffer from high parameter complexity. That has
recently motivated the exploration of compressing GANs (usually generators).
Compared to the vast literature and prevailing success in compressing deep
classifiers, the study of GAN compression remains in its infancy, so far
leveraging individual compression techniques instead of more sophisticated
combinations. We observe that due to the notorious instability of training
GANs, heuristically stacking different compression techniques yields
unsatisfactory results. To this end, we propose the first unified optimization
framework combining multiple compression means for GAN compression, dubbed GAN
Slimming (GS). GS seamlessly integrates three mainstream compression
techniques: model distillation, channel pruning and quantization, together with
the GAN minimax objective, into one unified optimization form that can be
efficiently optimized end to end. Without bells and whistles, GS largely
outperforms existing options in compressing image-to-image translation GANs.
Specifically, we apply GS to compress CartoonGAN, a state-of-the-art style
transfer network, by up to 47 times, with minimal visual quality degradation.
Code and pre-trained models can be found at
https://github.com/TAMU-VITA/GAN-Slimming.
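The unified objective described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the MSE distillation term, the non-saturating generator loss, and the hyperparameter names `beta` and `rho` are all assumptions made for illustration. The sketch only shows how the three compression terms (distillation, L1 channel sparsity for pruning, weight quantization) can coexist in one loss.

```python
import numpy as np

# Illustrative sketch of a GS-style unified objective (assumed form):
#   L = L_distill(G_s, G_t) + beta * L_GAN + rho * ||gamma||_1
# where gamma are per-channel scaling factors (L1 sparsity drives
# channels to zero, i.e. channel pruning) and student weights pass
# through a uniform quantizer.

def quantize(w, n_bits=8):
    """Simplified uniform symmetric quantization of a weight array."""
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

def unified_loss(student_out, teacher_out, d_score_fake, gamma,
                 beta=1.0, rho=0.01):
    """Combined distillation + GAN + channel-sparsity loss (sketch)."""
    distill = np.mean((student_out - teacher_out) ** 2)  # distillation (MSE here)
    gan = -np.mean(np.log(d_score_fake + 1e-8))          # non-saturating G loss
    sparsity = rho * np.sum(np.abs(gamma))               # L1 pushes channels to 0
    return distill + beta * gan + sparsity
```

In this sketch all three terms are differentiable with respect to the student parameters (with a straight-through estimator typically assumed for the quantizer), which is what allows end-to-end optimization rather than stacking the techniques heuristically.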
Related papers
- Fast Feedforward 3D Gaussian Splatting Compression [55.149325473447384]
FCGS is an optimization-free model that can compress 3DGS representations rapidly in a single feed-forward pass.
FCGS achieves a compression ratio of over 20X while maintaining fidelity, surpassing most per-scene SOTA optimization-based methods.
arXiv Detail & Related papers (2024-10-10T15:13:08Z)
- Order of Compression: A Systematic and Optimal Sequence to Combinationally Compress CNN [5.25545980258284]
We propose a systematic and optimal sequence to apply multiple compression techniques in the most effective order.
Our proposed Order of Compression significantly reduces computational costs by up to 859 times on ResNet34, with negligible accuracy loss.
We believe our simple yet effective exploration of the order of compression will shed light on the practice of model compression.
arXiv Detail & Related papers (2024-03-26T07:26:00Z)
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem with a VAE-based approach.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
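The near-lossless quantization mentioned above can be sketched with the classic uniform-binning trick: quantizing integer residuals with an odd bin width of $2\tau + 1$ guarantees a reconstruction error of at most $\tau$. This is a generic illustration of the technique, not the DLPR authors' code; the function name is an assumption.

```python
import numpy as np

def near_lossless_quantize(residual, tau):
    """Quantize integer residuals so that |r - r_hat| <= tau.

    Uniform bins of odd width 2*tau + 1 centered on multiples of the
    step guarantee the l-infinity error bound for integer inputs.
    """
    step = 2 * tau + 1
    return np.round(residual / step) * step
```

Setting `tau = 0` degenerates to lossless coding (the residual is kept exactly), which is how a single framework can cover both modes.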
arXiv Detail & Related papers (2022-09-11T12:11:56Z)
- You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Nature Gradient [88.58536093633167]
Existing model compression approaches require re-compression or fine-tuning across diverse constraints to accommodate various hardware deployments.
We propose a novel approach, YOCO-BERT, to achieve compress once and deploy everywhere.
Compared with state-of-the-art algorithms, YOCO-BERT provides more compact models while achieving 2.1%-4.5% average accuracy improvement on the GLUE benchmark.
arXiv Detail & Related papers (2021-06-04T12:17:44Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which jointly applies channel pruning and tensor decomposition to compress CNN models.
We achieve 52.9% FLOPs reduction by removing 48.4% parameters on ResNet-50 with only a Top-1 accuracy drop of 0.56% on ImageNet 2012.
arXiv Detail & Related papers (2021-05-24T12:07:38Z)
- Content-Aware GAN Compression [33.83749494060526]
Generative adversarial networks (GANs) play a vital role in various image generation and synthesis tasks.
Their notoriously high computational cost hinders their efficient deployment on edge devices.
We propose novel approaches for unconditional GAN compression.
arXiv Detail & Related papers (2021-04-06T02:23:56Z)
- Self-Supervised GAN Compression [32.21713098893454]
We show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods.
We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator.
We show that this framework maintains compelling performance at high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different pruning granularities.
arXiv Detail & Related papers (2020-07-03T04:18:54Z)
- AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks [98.71508718214935]
Existing GAN compression algorithms are limited to handling specific GAN architectures and losses.
Inspired by the recent success of AutoML in deep compression, we introduce AutoML to GAN compression and develop an AutoGAN-Distiller framework.
We evaluate AGD in two representative GAN tasks: image translation and super resolution.
arXiv Detail & Related papers (2020-06-15T07:56:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.