Fidelity-Controllable Extreme Image Compression with Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2008.10314v1
- Date: Mon, 24 Aug 2020 10:45:19 GMT
- Title: Fidelity-Controllable Extreme Image Compression with Generative
Adversarial Networks
- Authors: Shoma Iwai, Tomo Miyazaki, Yoshihiro Sugaya and Shinichiro Omachi
- Abstract summary: We propose a GAN-based image compression method working at extremely low bitrates below 0.1bpp.
To address both of the drawbacks, our method adopts two-stage training and network interpolation.
The experimental results show that our model can reconstruct high-quality images.
- Score: 10.036312061637764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a GAN-based image compression method working at extremely low
bitrates below 0.1bpp. Most existing learned image compression methods suffer
from blur at extremely low bitrates. Although GAN can help to reconstruct sharp
images, there are two drawbacks. First, GAN makes training unstable. Second,
the reconstructions often contain unpleasing noise or artifacts. To address
both of the drawbacks, our method adopts two-stage training and network
interpolation. The two-stage training is effective in stabilizing training.
Moreover, the network interpolation utilizes the models in both stages and
reduces undesirable noise and artifacts, while maintaining important edges.
Hence, we can control the trade-off between perceptual quality and fidelity
without re-training models. The experimental results show that our model can
reconstruct high-quality images. Furthermore, our user study confirms that our
reconstructions are preferable to those of a state-of-the-art GAN-based image
compression model. The code will be made available.
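As a rough illustration of the network interpolation described above (a minimal sketch assuming two compatible PyTorch checkpoints; the file names and loading interface are hypothetical, not the authors' released code), the stage-1 (distortion-oriented) and stage-2 (GAN-finetuned) weights can be blended parameter-wise with a single coefficient alpha, so the perception-fidelity trade-off is chosen at decode time without re-training:

    # Minimal sketch of network interpolation between the two training stages.
    # Assumes both checkpoints store state dicts for the same architecture.
    import torch

    def interpolate_state_dicts(stage1_ckpt, stage2_ckpt, alpha):
        """alpha = 0 -> stage-1 weights (higher fidelity, blurrier);
        alpha = 1 -> stage-2 weights (sharper, GAN-trained)."""
        sd1 = torch.load(stage1_ckpt, map_location="cpu")
        sd2 = torch.load(stage2_ckpt, map_location="cpu")
        assert sd1.keys() == sd2.keys(), "checkpoints must share one architecture"
        return {k: (1.0 - alpha) * sd1[k] + alpha * sd2[k] for k in sd1}

    # Usage sketch (hypothetical model object and file names):
    # decoder.load_state_dict(interpolate_state_dicts("stage1.pth", "stage2.pth", 0.8))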
Related papers
- Map-Assisted Remote-Sensing Image Compression at Extremely Low Bitrates [47.47031054057152]
Generative models have been explored to compress RS images into extremely low-bitrate streams.
These generative models struggle to reconstruct visually plausible images due to the highly ill-posed nature of extremely low-bitrate image compression.
We propose an image compression framework that utilizes a pre-trained diffusion model with powerful natural image priors to achieve high-realism reconstructions.
arXiv Detail & Related papers (2024-09-03T14:29:54Z) - Towards Extreme Image Compression with Latent Feature Guidance and Diffusion Prior [8.772652777234315]
We propose a novel two-stage extreme image compression framework that exploits the powerful generative capability of pre-trained diffusion models.
Our method significantly outperforms state-of-the-art approaches in terms of visual performance at extremely low bitrates.
arXiv Detail & Related papers (2024-04-29T16:02:38Z) - A Training-Free Defense Framework for Robust Learned Image Compression [48.41990144764295]
We study the robustness of learned image compression models against adversarial attacks.
We present a training-free defense technique based on simple image transform functions; a minimal transform-then-compress sketch appears after this list.
arXiv Detail & Related papers (2024-01-22T12:50:21Z) - You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
LIC methods fail to explicitly explore the image structure and texture components crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies LIC and masked image modeling (MIM) end-to-end for extremely low-bitrate compression; a patch-sampling sketch follows this list.
arXiv Detail & Related papers (2023-06-27T15:36:22Z) - High-Fidelity Variable-Rate Image Compression via Invertible Activation
Transformation [24.379052026260034]
We propose the Invertible Activation Transformation (IAT) module to tackle the issue of high-fidelity fine variable-rate image compression.
IAT and QLevel together give the image compression model the ability of fine variable-rate control while better maintaining the image fidelity.
Our method outperforms the state-of-the-art variable-rate image compression method by a large margin, especially after multiple re-encodings.
arXiv Detail & Related papers (2022-09-12T07:14:07Z) - Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines; a resize-wrapper sketch is given after this list.
arXiv Detail & Related papers (2022-04-26T01:35:02Z) - The Devil Is in the Details: Window-based Attention for Image
Compression [58.1577742463617]
Most existing learned image compression models are based on Convolutional Neural Networks (CNNs).
In this paper, we study the effects of multiple kinds of attention mechanisms for local features learning, then introduce a more straightforward yet effective window-based local attention block.
The proposed window-based attention is very flexible and can work as a plug-and-play component to enhance CNN and Transformer models; see the window-attention sketch after this list.
arXiv Detail & Related papers (2022-03-16T07:55:49Z) - Deep Artifact-Free Residual Network for Single Image Super-Resolution [0.2399911126932526]
We propose the Deep Artifact-Free Residual (DAFR) network, which combines the merits of residual learning with the use of the ground-truth image as the target.
Our framework uses a deep model to extract the high-frequency information which is necessary for high-quality image reconstruction.
Our experimental results show that the proposed method achieves better quantitative and qualitative image quality compared to the existing methods.
arXiv Detail & Related papers (2020-09-25T20:53:55Z) - GAN Compression: Efficient Architectures for Interactive Conditional
GANs [45.012173624111185]
Recent Conditional Generative Adversarial Networks (cGANs) are 1-2 orders of magnitude more compute-intensive than modern recognition CNNs.
We propose a general-purpose compression framework for reducing the inference time and model size of the generator in cGANs.
arXiv Detail & Related papers (2020-03-19T17:59:05Z) - Blur, Noise, and Compression Robust Generative Adversarial Networks [85.68632778835253]
We propose blur, noise, and compression robust GAN (BNCR-GAN) to learn a clean image generator directly from degraded images.
Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators.
We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ.
arXiv Detail & Related papers (2020-03-17T17:56:22Z)
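A minimal sketch of the transform-based, training-free defense idea from "A Training-Free Defense Framework for Robust Learned Image Compression" above; the specific transform (a small random resize) and the codec interface are assumptions for illustration, not the paper's exact method.

    # Hypothetical training-free defense: a small random resize applied before a
    # learned codec weakens perturbations crafted for the exact input pixel grid.
    import torch
    import torch.nn.functional as F

    def defended_compress(codec, x, scale_range=(0.9, 1.1)):
        """x: (N, 3, H, W) tensor in [0, 1]; `codec` is any learned compression
        model callable on an image batch (interface assumed)."""
        h, w = x.shape[-2:]
        s = float(torch.empty(1).uniform_(*scale_range))   # random scale factor
        x_t = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
        x_t = F.interpolate(x_t, size=(h, w), mode="bilinear", align_corners=False)
        return codec(x_t)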
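For the patch sampling mentioned in "You Can Mask More For Extremely Low-Bitrate Image Compression", the sketch below uses local variance as a crude texture score to decide which patches stay visible; the scoring rule and keep ratio are illustrative assumptions, not the paper's DA-Mask.

    # Hypothetical structure/texture-driven patch sampling: keep the most textured
    # patches visible (local variance as the texture proxy) and mask the rest.
    import numpy as np

    def sample_visible_patches(img, patch=16, keep_ratio=0.25):
        """img: (H, W) grayscale array; returns a (H//patch, W//patch) boolean
        map where True marks a patch kept visible for the encoder."""
        gh, gw = img.shape[0] // patch, img.shape[1] // patch
        tiles = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch)
        scores = tiles.var(axis=(1, 3))                # texture score per patch
        k = max(1, int(keep_ratio * gh * gw))          # how many patches to keep
        threshold = np.sort(scores, axis=None)[-k]
        return scores >= threshold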
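The resize framework in "Estimating the Resize Parameter in End-to-end Learned Image Compression" can be pictured as a thin wrapper around an existing codec; the sketch below uses a fixed factor where the paper estimates one per image, and the codec interface is assumed.

    # Sketch of the resize wrapper: code fewer pixels, then resize the
    # reconstruction back to the original resolution.
    import torch.nn.functional as F

    def resized_codec(codec, x, r=0.75):
        """x: (N, 3, H, W) tensor; `codec` maps an image batch to its
        reconstruction (interface assumed); r is a placeholder resize factor."""
        h, w = x.shape[-2:]
        x_small = F.interpolate(x, scale_factor=r, mode="bicubic", align_corners=False)
        x_hat_small = codec(x_small)
        return F.interpolate(x_hat_small, size=(h, w), mode="bicubic", align_corners=False)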
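Finally, a compact sketch of a window-based local self-attention block of the kind described in "The Devil Is in the Details: Window-based Attention for Image Compression"; the window size, head count, and residual layout are assumptions, not the paper's exact module.

    # Minimal window-based self-attention: partition the feature map into
    # non-overlapping windows and attend only within each window.
    import torch.nn as nn

    class WindowAttention(nn.Module):
        def __init__(self, channels, window=8, heads=4):
            # channels must be divisible by heads for nn.MultiheadAttention
            super().__init__()
            self.window = window
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

        def forward(self, x):       # x: (N, C, H, W), H and W divisible by window
            n, c, h, w = x.shape
            ws = self.window
            # (N, C, H, W) -> (N * num_windows, ws * ws, C)
            t = x.view(n, c, h // ws, ws, w // ws, ws)
            t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
            out, _ = self.attn(t, t, t)                 # attention inside each window
            out = out.reshape(n, h // ws, w // ws, ws, ws, c)
            out = out.permute(0, 5, 1, 3, 2, 4).reshape(n, c, h, w)
            return x + out                              # residual connection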
This list is automatically generated from the titles and abstracts of the papers on this site.