MFAGAN: A Compression Framework for Memory-Efficient On-Device
Super-Resolution GAN
- URL: http://arxiv.org/abs/2107.12679v1
- Date: Tue, 27 Jul 2021 09:04:30 GMT
- Title: MFAGAN: A Compression Framework for Memory-Efficient On-Device
Super-Resolution GAN
- Authors: Wenlong Cheng and Mingbo Zhao and Zhiling Ye and Shuhang Gu
- Abstract summary: We propose a novel compression framework, Multi-scale Feature Aggregation Net based GAN (MFAGAN), for reducing the memory access cost of the generator.
MFAGAN achieves up to 8.3$\times$ memory saving and 42.9$\times$ computation reduction compared with ESRGAN.
- Score: 27.346272886257335
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Generative adversarial networks (GANs) have promoted remarkable advances in
single-image super-resolution (SR) by recovering photo-realistic images.
However, high memory consumption of GAN-based SR (usually generators) causes
performance degradation and more energy consumption, hindering the deployment
of GAN-based SR into resource-constrained mobile devices. In this paper, we
propose a novel compression framework \textbf{M}ulti-scale \textbf{F}eature
\textbf{A}ggregation Net based \textbf{GAN} (MFAGAN) for reducing the memory
access cost of the generator. First, to overcome the memory explosion of dense
connections, we utilize a memory-efficient multi-scale feature aggregation net
as the generator. Second, for faster and more stable training, our method
introduces the PatchGAN discriminator. Third, to balance the student
discriminator and the compressed generator, we distill both the generator and
the discriminator. Finally, we perform a hardware-aware neural architecture
search (NAS) to find a specialized SubGenerator for the target mobile phone.
Benefiting from these improvements, the proposed MFAGAN achieves up to
\textbf{8.3}$\times$ memory saving and \textbf{42.9}$\times$ computation
reduction, with only minor visual quality degradation, compared with ESRGAN.
Empirical studies also show $\sim$\textbf{70} milliseconds latency on Qualcomm
Snapdragon 865 chipset.
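The abstract names four concrete components: a memory-efficient multi-scale feature aggregation generator, a PatchGAN discriminator, distillation of both the generator and the discriminator, and a hardware-aware NAS step. As a rough, non-authoritative illustration of the second and third components, the sketch below shows a standard 70x70-style PatchGAN discriminator and a combined distillation objective over the two student networks; the layer widths, loss choices, and weights are assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """70x70-style PatchGAN: scores every local patch as real/fake instead of
    emitting a single global score (layer widths are assumptions)."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        def block(cin, cout, stride, norm=True):
            layers = [nn.Conv2d(cin, cout, 4, stride, 1)]
            if norm:
                layers.append(nn.BatchNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.net = nn.Sequential(
            *block(in_ch, base, 2, norm=False),
            *block(base, base * 2, 2),
            *block(base * 2, base * 4, 2),
            *block(base * 4, base * 8, 1),
            nn.Conv2d(base * 8, 1, 4, 1, 1),  # per-patch logits, no global pooling
        )

    def forward(self, x):
        return self.net(x)


def dual_distillation_loss(student_sr, teacher_sr, student_patch, teacher_patch,
                           alpha=1.0, beta=0.1):
    """Distill both networks: pull the compressed generator's SR output toward
    the teacher generator's output, and the student discriminator's patch
    scores toward the teacher discriminator's (alpha/beta are placeholders)."""
    gen_kd = F.l1_loss(student_sr, teacher_sr)
    disc_kd = F.mse_loss(student_patch, teacher_patch)
    return alpha * gen_kd + beta * disc_kd
```

Because the output is a grid of patch logits rather than one scalar, a PatchGAN discriminator of this kind stays comparatively small and tends to stabilize adversarial training of lightweight generators, which is presumably why it is paired here with the compressed student.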
Related papers
- MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table [62.164549651134465]
We propose MF-NeRF, a memory-efficient NeRF framework that employs a Mixed-Feature hash table to improve memory efficiency and reduce training time while maintaining reconstruction quality.
Our experiments with the state-of-the-art Instant-NGP, TensoRF, and DVGO indicate that MF-NeRF achieves the fastest training time on the same GPU hardware with similar or even higher reconstruction quality.
arXiv Detail & Related papers (2023-04-25T05:44:50Z)
- GRAN: Ghost Residual Attention Network for Single Image Super Resolution [44.4178326950426]
This paper introduces Ghost Residual Attention Block (GRAB) groups to overcome the drawbacks of the standard convolutional operation.
The Ghost Module reveals the information underlying the intrinsic features by employing cheap linear operations in place of standard convolutions.
Experiments conducted on benchmark datasets demonstrate the superior performance of our method in both qualitative and quantitative evaluations.
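The Ghost Module summarized above follows the GhostNet recipe: a standard convolution produces a small set of intrinsic feature maps, and cheap linear operations (typically depthwise convolutions) derive the remaining "ghost" maps from them. A minimal sketch of that idea, with assumed kernel sizes and ratio, and without the residual/attention parts of the full GRAB block:

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Produce out_ch maps: some from a standard convolution (intrinsic maps),
    the rest from cheap depthwise ops applied to those maps.
    In this simplified sketch, out_ch should be divisible by ratio."""
    def __init__(self, in_ch, out_ch, ratio=2, kernel=1, cheap_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio          # intrinsic maps from the costly conv
        ghost_ch = out_ch - init_ch        # maps generated by the cheap ops
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel, padding=kernel // 2, bias=False),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, cheap_kernel, padding=cheap_kernel // 2,
                      groups=init_ch, bias=False),  # depthwise "linear" operation
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        return torch.cat([intrinsic, ghost], dim=1)
```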
arXiv Detail & Related papers (2023-02-28T13:26:24Z)
- Fast and Memory-Efficient Network Towards Efficient Image Super-Resolution [44.909233016062906]
We build a memory-efficient image super-resolution network (FMEN) for resource-constrained devices.
FMEN runs 33% faster and reduces memory consumption by 74% compared with the state-of-the-art EISR model E-RFDN.
FMEN-S achieves the lowest memory consumption and the second shortest runtime in the NTIRE 2022 challenge on efficient super-resolution.
arXiv Detail & Related papers (2022-04-18T16:49:20Z)
- Self-Gated Memory Recurrent Network for Efficient Scalable HDR Deghosting [59.04604001936661]
We propose a novel recurrent network-based HDR deghosting method for fusing arbitrary length dynamic sequences.
We introduce a new recurrent cell architecture, namely Self-Gated Memory (SGM) cell, that outperforms the standard LSTM cell.
The proposed approach achieves state-of-the-art quantitative performance compared to existing HDR deghosting methods across three publicly available datasets.
arXiv Detail & Related papers (2021-12-24T12:36:33Z)
- Generative Optimization Networks for Memory Efficient Data Generation [11.452816167207937]
We propose a novel framework called generative optimization networks (GON) that is similar to GANs, but does not use a generator.
GONs use a single discriminator network and run optimization in the input space to generate new data samples, achieving an effective compromise between training time and memory consumption.
We show that our framework gives up to 32% higher detection F1 scores and 58% lower memory consumption, with only a 5% higher training overhead compared to the state-of-the-art.
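A minimal sketch of the sampling procedure described above, assuming a discriminator disc that returns higher scores for more realistic inputs; the optimizer, step count, and learning rate are illustrative choices, not the paper's settings.

```python
import torch

def generate_with_discriminator(disc, shape, steps=200, lr=0.05):
    """GON-style sampling sketch: there is no generator network; a random
    input is optimized until the (frozen) discriminator scores it as real."""
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        realness = disc(x).mean()   # scalar "realness" score for the batch
        (-realness).backward()      # gradient ascent on realness w.r.t. the input
        opt.step()
    return x.detach()

# e.g. samples = generate_with_discriminator(disc, (16, 1, 28, 28))
```

Dropping the generator trades sampling speed (an inner optimization loop per batch) for memory, which is the compromise the abstract describes.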
arXiv Detail & Related papers (2021-10-06T16:54:33Z)
- Online Multi-Granularity Distillation for GAN Compression [17.114017187236836]
Generative Adversarial Networks (GANs) have witnessed prevailing success in yielding outstanding images.
GANs are burdensome to deploy on resource-constrained devices due to ponderous computational costs and hulking memory usage.
We propose a novel online multi-granularity distillation scheme to obtain lightweight GANs.
arXiv Detail & Related papers (2021-08-16T05:49:50Z)
- GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework [94.26938614206689]
We propose the first unified optimization framework combining multiple compression means for GAN compression, dubbed GAN Slimming.
We apply GS to compress CartoonGAN, a state-of-the-art style transfer network, by up to 47 times, with minimal visual quality degradation.
arXiv Detail & Related papers (2020-08-25T14:39:42Z)
- AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks [98.71508718214935]
Existing GAN compression algorithms are limited to handling specific GAN architectures and losses.
Inspired by the recent success of AutoML in deep compression, we introduce AutoML to GAN compression and develop an AutoGAN-Distiller framework.
We evaluate AGD in two representative GAN tasks: image translation and super resolution.
arXiv Detail & Related papers (2020-06-15T07:56:24Z)
- Blur, Noise, and Compression Robust Generative Adversarial Networks [85.68632778835253]
We propose blur, noise, and compression robust GAN (BNCR-GAN) to learn a clean image generator directly from degraded images.
Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators.
We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ.
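As a rough sketch of the multiple-generator design described above: the four generator outputs (clean image, blur kernel, noise map, JPEG quality factor) are composed into a degraded image before it is shown to the discriminator. The function below only illustrates that composition under simplifying assumptions (odd kernel size, additive noise); jpeg_fn stands in for a differentiable JPEG surrogate and is a hypothetical placeholder, not part of any specific library.

```python
import torch
import torch.nn.functional as F

def compose_degraded(clean, kernel, noise, quality, jpeg_fn=None):
    """Combine the four generator outputs into one degraded image.
    clean:   (B, C, H, W) output of the image generator
    kernel:  (B, 1, k, k) output of the blur-kernel generator (each sums to 1, k odd)
    noise:   (B, C, H, W) output of the noise generator
    quality: (B,) output of the quality-factor generator
    jpeg_fn: optional differentiable JPEG surrogate (hypothetical placeholder)"""
    b, c, h, w = clean.shape
    k = kernel.shape[-1]
    # apply each image's own kernel to all of its channels via a grouped conv
    weight = kernel.repeat(1, c, 1, 1).view(b * c, 1, k, k)
    blurred = F.conv2d(clean.view(1, b * c, h, w), weight,
                       padding=k // 2, groups=b * c).view(b, c, h, w)
    degraded = blurred + noise                 # additive noise from its generator
    if jpeg_fn is not None:
        degraded = jpeg_fn(degraded, quality)  # compression with the generated quality
    return degraded
```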
arXiv Detail & Related papers (2020-03-17T17:56:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.