A Residual Diffusion Model for High Perceptual Quality Codec
Augmentation
- URL: http://arxiv.org/abs/2301.05489v3
- Date: Wed, 29 Mar 2023 16:13:22 GMT
- Title: A Residual Diffusion Model for High Perceptual Quality Codec
Augmentation
- Authors: Noor Fathima Ghouse and Jens Petersen and Auke Wiggers and Tianlin Xu
and Guillaume Sautière
- Abstract summary: Diffusion probabilistic models have recently achieved remarkable success in generating high quality image and video data.
In this work, we build on this class of generative models and introduce a method for lossy compression of high resolution images.
While sampling from diffusion probabilistic models is notoriously expensive, we show that in the compression setting the number of steps can be drastically reduced.
- Score: 1.868930790098705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion probabilistic models have recently achieved remarkable success in
generating high quality image and video data. In this work, we build on this
class of generative models and introduce a method for lossy compression of high
resolution images. The resulting codec, which we call Diffusion-based Residual
Augmentation Codec (DIRAC), is the first neural codec to allow smooth traversal
of the rate-distortion-perception tradeoff at test time, while obtaining
competitive performance with GAN-based methods in perceptual quality.
Furthermore, while sampling from diffusion probabilistic models is notoriously
expensive, we show that in the compression setting the number of steps can be
drastically reduced.
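The abstract's core idea, a standard codec reconstruction refined by a diffusion model that predicts the residual, where the number of sampling steps traverses the rate-distortion-perception tradeoff, can be sketched as a toy loop. All names and the toy denoiser below are illustrative assumptions, not the paper's actual API:

```python
# Hedged sketch of residual-diffusion codec augmentation (toy, hypothetical API).
# A base codec produces a reconstruction x_hat; a diffusion model then predicts
# the residual r = x - x_hat over a configurable number of reverse steps.
# Zero steps returns the distortion-optimized codec output; more steps move
# the output toward higher perceptual quality.

def enhance(x_hat, denoise_step, num_steps):
    """Refine a codec reconstruction with `num_steps` residual diffusion steps.

    denoise_step(residual_estimate, t) -> updated residual estimate.
    """
    residual = [0.0] * len(x_hat)          # start from a zero residual
    for t in reversed(range(num_steps)):   # reverse-time sampling loop
        residual = denoise_step(residual, t)
    return [a + b for a, b in zip(x_hat, residual)]

# Toy "denoiser" that nudges the residual toward a fixed target residual.
target = [0.5, -0.25]
def toy_step(r, t):
    # t is unused in this toy; a real model conditions on the timestep.
    return [ri + 0.5 * (ti - ri) for ri, ti in zip(r, target)]

x_hat = [1.0, 2.0]
print(enhance(x_hat, toy_step, 0))  # → [1.0, 2.0], codec output unchanged
print(enhance(x_hat, toy_step, 4))  # residual-enhanced output, nearer x_hat + target
```

With more steps the output converges toward the residual-corrected reconstruction, mirroring the claimed smooth test-time traversal of the tradeoff.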
Related papers
- Sample what you can't compress [6.24979299238534]
We show how to learn a continuous encoder and decoder under a diffusion-based loss.
This approach yields better reconstruction quality as compared to GAN-based autoencoders.
We also show that the resulting representation is easier to model with a latent diffusion model as compared to the representation obtained from a state-of-the-art GAN-based loss.
arXiv Detail & Related papers (2024-09-04T08:42:42Z)
- High Frequency Matters: Uncertainty Guided Image Compression with Wavelet Diffusion [35.168244436206685]
We propose an efficient Uncertainty-Guided image compression approach with wavelet Diffusion (UGDiff)
Our approach focuses on high frequency compression via the wavelet transform, since high frequency components are crucial for reconstructing image details.
Comprehensive experiments on two benchmark datasets validate the effectiveness of UGDiff.
arXiv Detail & Related papers (2024-07-17T13:21:31Z)
- Lossy Image Compression with Foundation Diffusion Models [10.407650300093923]
In this work we formulate the removal of quantization error as a denoising task, using diffusion to recover lost information in the transmitted image latent.
Our approach allows us to perform less than 10% of the full diffusion generative process and requires no architectural changes to the diffusion model.
arXiv Detail & Related papers (2024-04-12T16:23:42Z)
- Correcting Diffusion-Based Perceptual Image Compression with Privileged End-to-End Decoder [49.01721042973929]
This paper presents a diffusion-based image compression method that employs a privileged end-to-end decoder model as correction.
Experiments demonstrate the superiority of our method in both distortion and perception compared with previous perceptual compression methods.
arXiv Detail & Related papers (2024-04-07T10:57:54Z)
- Enhancing the Rate-Distortion-Perception Flexibility of Learned Image Codecs with Conditional Diffusion Decoders [7.485128109817576]
We show that conditional diffusion models can lead to promising results in the generative compression task when used as a decoder.
arXiv Detail & Related papers (2024-03-05T11:48:35Z)
- Exploiting Diffusion Prior for Real-World Image Super-Resolution [75.5898357277047]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z)
- Diffusion Models as Masked Autoencoders [52.442717717898056]
We revisit generatively pre-training visual representations in light of recent interest in denoising diffusion models.
While directly pre-training with diffusion models does not produce strong representations, we condition diffusion models on masked input and formulate diffusion models as masked autoencoders (DiffMAE)
We perform a comprehensive study on the pros and cons of design choices and build connections between diffusion models and masked autoencoders.
arXiv Detail & Related papers (2023-04-06T17:59:56Z)
- Q-Diffusion: Quantizing Diffusion Models [52.978047249670276]
Post-training quantization (PTQ) is considered a go-to compression method for other tasks.
We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture.
We show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance.
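The 4-bit claim above can be illustrated with generic symmetric uniform quantization, the baseline technique such PTQ methods refine; this sketch is an assumption-laden toy and omits Q-Diffusion's timestep-aware calibration entirely:

```python
# Hedged sketch of generic 4-bit symmetric uniform post-training quantization.
# Weights are mapped to 16 integer levels (int4 range [-8, 7]) via a single
# per-tensor scale, then dequantized; this is the baseline, not Q-Diffusion's
# actual calibrated method.

def quantize_4bit(weights):
    """Return (dequantized weights, int4 codes) under symmetric uniform PTQ."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 7.0                               # map |max| to level 7
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    dequant = [c * scale for c in codes]                # reconstruct floats
    return dequant, codes

w = [0.12, -0.5, 0.33, 0.7]
dequant, codes = quantize_4bit(w)
```

Each reconstructed weight differs from the original by at most half a quantization step, which is why calibration (choosing scales per timestep, as Q-Diffusion does) matters for the noise-sensitive diffusion pipeline.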
arXiv Detail & Related papers (2023-02-08T19:38:59Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Denoising Diffusion Probabilistic Models [91.94962645056896]
We present high quality image synthesis results using diffusion probabilistic models.
Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics.
arXiv Detail & Related papers (2020-06-19T17:24:44Z)
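The "weighted variational bound" mentioned above reduces, in Ho et al.'s simplified form, to training a network to predict the injected noise. A minimal sketch of the closed-form forward process and that simplified loss, using a toy example in place of a real U-Net:

```python
# Hedged sketch of the DDPM simplified training objective: corrupt x0 with
# Gaussian noise at a random timestep t via the closed-form forward process,
# then score a noise prediction with mean squared error (L_simple).
# The schedule below is the commonly used linear beta schedule.

import math
import random

T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # linear schedule
alpha_bars = []                      # cumulative products of (1 - beta_t)
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

def q_sample(x0, t, eps):
    """Forward process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    a = alpha_bars[t]
    return [math.sqrt(a) * x + math.sqrt(1.0 - a) * e for x, e in zip(x0, eps)]

def simple_loss(eps_pred, eps):
    """L_simple: mean squared error between predicted and true noise."""
    return sum((p - e) ** 2 for p, e in zip(eps_pred, eps)) / len(eps)

random.seed(0)
x0 = [0.3, -0.7, 0.1]
t = random.randrange(T)
eps = [random.gauss(0.0, 1.0) for _ in x0]
x_t = q_sample(x0, t, eps)
# A perfect noise predictor would drive L_simple to zero:
print(simple_loss(eps, eps))  # → 0.0
```

The same noise-prediction view underpins the compression papers above: a decoder-side diffusion model denoises toward the original image, and fewer reverse steps suffice because decoding starts near the data rather than from pure noise.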
This list is automatically generated from the titles and abstracts of the papers in this site.