HDCompression: Hybrid-Diffusion Image Compression for Ultra-Low Bitrates
- URL: http://arxiv.org/abs/2502.07160v2
- Date: Wed, 19 Feb 2025 03:43:57 GMT
- Title: HDCompression: Hybrid-Diffusion Image Compression for Ultra-Low Bitrates
- Authors: Lei Lu, Yize Li, Yanzhi Wang, Wei Wang, Wei Jiang
- Abstract summary: Hybrid-Diffusion Image Compression (HDCompression) is a dual-stream framework that utilizes both generative VQ-modeling and diffusion models. Our experiments demonstrate that HDCompression outperforms previous conventional LIC, generative VQ-modeling, and hybrid frameworks.
- Score: 35.28538714213459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image compression under ultra-low bitrates remains challenging for both conventional learned image compression (LIC) and generative vector-quantized (VQ) modeling. Conventional LIC suffers from severe artifacts due to heavy quantization, while generative VQ modeling gives poor fidelity due to the mismatch between learned generative priors and specific inputs. In this work, we propose Hybrid-Diffusion Image Compression (HDCompression), a dual-stream framework that utilizes both generative VQ-modeling and diffusion models, as well as conventional LIC, to achieve both high fidelity and high perceptual quality. Different from previous hybrid methods that directly use pre-trained LIC models to generate low-quality fidelity-preserving information from heavily quantized latents, we use diffusion models to extract high-quality complementary fidelity information from the ground-truth input, which can enhance the system performance in several aspects: improving indices map prediction, enhancing the fidelity-preserving output of the LIC stream, and refining conditioned image reconstruction with VQ-latent correction. In addition, our diffusion model is based on a dense representative vector (DRV), which is lightweight with very simple sampling schedulers. Extensive experiments demonstrate that our HDCompression outperforms the previous conventional LIC, generative VQ-modeling, and hybrid frameworks in both quantitative metrics and qualitative visualization, providing balanced robust compression performance at ultra-low bitrates.
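The dual-stream idea (a VQ index map plus a fidelity-preserving correction stream) can be illustrated with a toy sketch. This is a minimal illustration, not the authors' model: the codebook size, latent shapes, and the coarsely quantized residual standing in for the LIC/diffusion branch are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy VQ stage: map each latent vector to its nearest codebook entry,
# transmitting only the index map (the core of generative VQ modeling).
codebook = rng.normal(size=(16, 4))          # 16 entries, 4-dim latents
latents = rng.normal(size=(64, 4))           # stand-in encoder output

dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
indices = dists.argmin(axis=1)               # index map sent to the decoder
vq_latents = codebook[indices]               # decoder-side VQ reconstruction

# Fidelity-preserving correction stream (a crude stand-in for the
# LIC/diffusion branch): a heavily quantized residual added back on top.
residual = np.round((latents - vq_latents) * 2) / 2
corrected = vq_latents + residual

vq_err = float(((latents - vq_latents) ** 2).mean())
hyb_err = float(((latents - corrected) ** 2).mean())
print(hyb_err < vq_err)                      # the hybrid stream improves fidelity
```

Even this crude correction stream reduces reconstruction error relative to the VQ stream alone, which is the motivation for hybrid designs.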
Related papers
- HPGN: Hybrid Priors-Guided Network for Compressed Low-Light Image Enhancement [5.93853008544606]
We propose a hybrid priors-guided network (HPGN) to enhance compressed low-light images.
Our approach fully utilizes the JPEG quality factor (QF) and DCT quantization matrix (QM) to guide the design of efficient joint task plug-and-play modules.
arXiv Detail & Related papers (2025-04-03T08:06:24Z)
- Multi-Scale Invertible Neural Network for Wide-Range Variable-Rate Learned Image Compression [90.59962443790593]
In this paper, we present a variable-rate image compression model based on invertible transform to overcome limitations.
Specifically, we design a lightweight multi-scale invertible neural network, which maps the input image into multi-scale latent representations.
Experimental results demonstrate that the proposed method achieves state-of-the-art performance compared to existing variable-rate methods.
arXiv Detail & Related papers (2025-03-27T09:08:39Z)
- Progressive Compression with Universally Quantized Diffusion Models [35.199627388957566]
We explore the potential of diffusion models for progressive coding, resulting in a sequence of bits that can be incrementally transmitted and decoded. Unlike prior work based on Gaussian diffusion or conditional diffusion models, we propose a new form of diffusion model with uniform noise in the forward process. We obtain promising first results on image compression, achieving competitive rate-distortion and rate-realism results across a wide range of bitrates with a single model.
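The uniform-noise mechanism underlying this kind of universally quantized model can be sketched with classic dithered quantization, where encoder and decoder share a uniform dither. This is a textbook construction written from first principles, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Universal (dithered) quantization: with a shared dither u ~ U(-0.5, 0.5),
# round(x + u) - u has reconstruction error that behaves like uniform noise,
# independent of x. This is the link between uniform-noise diffusion and
# practical quantization for progressive coding.
x = rng.normal(size=100_000)                 # source samples
u = rng.uniform(-0.5, 0.5, size=x.shape)     # dither known to both sides
x_hat = np.round(x + u) - u                  # decoder-side reconstruction

err = x_hat - x
print(np.abs(err).max() <= 0.5)              # error bounded like U(-0.5, 0.5)
```

The shared dither is what lets the same mechanism be applied repeatedly at finer scales, yielding an incrementally refinable bitstream.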
arXiv Detail & Related papers (2024-12-14T19:06:01Z)
- High Frequency Matters: Uncertainty Guided Image Compression with Wavelet Diffusion [35.168244436206685]
We propose an efficient Uncertainty-Guided image compression approach with wavelet Diffusion (UGDiff).
Our approach focuses on high frequency compression via the wavelet transform, since high frequency components are crucial for reconstructing image details.
Comprehensive experiments on two benchmark datasets validate the effectiveness of UGDiff.
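The low/high-frequency split that such wavelet-based methods exploit can be shown with a one-level Haar transform. This is a from-scratch 1-D sketch for illustration; real codecs use 2-D multi-level transforms:

```python
import numpy as np

# One-level 1-D Haar transform: splits a signal into a low-frequency
# approximation (compressed conventionally) and high-frequency detail
# (the part generative/diffusion models are used to reconstruct).
def haar_1d(x):
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return lo, hi

def inv_haar_1d(lo, hi):
    x = np.empty(lo.size * 2)
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

x = np.array([4.0, 4.0, 8.0, 8.0, 2.0, 0.0, 6.0, 2.0])
lo, hi = haar_1d(x)
print(np.allclose(inv_haar_1d(lo, hi), x))  # perfect reconstruction
```

Because the transform is orthonormal, it preserves signal energy while concentrating structure in the low band, leaving sparse high-frequency detail for the generative stage.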
arXiv Detail & Related papers (2024-07-17T13:21:31Z)
- Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption [52.82508784748278]
This paper proposes a Controllable Generative Image Compression framework, termed Control-GIC. Control-GIC is capable of fine-grained adaptation across a broad bitrate spectrum while ensuring high-fidelity, general-purpose compression. We develop a conditional decoder that retrieves historic multi-granularity representations according to the encoded codes and then reconstructs hierarchical features under a conditional-probability formulation.
arXiv Detail & Related papers (2024-06-02T14:22:09Z)
- Unifying Generation and Compression: Ultra-low bitrate Image Coding Via Multi-stage Transformer [35.500720262253054]
This paper introduces a novel Unified Image Generation-Compression (UIGC) paradigm, merging the processes of generation and compression.
A key feature of the UIGC framework is the adoption of vector-quantized (VQ) image models for tokenization.
Experiments demonstrate the superiority of the proposed UIGC framework over existing codecs in perceptual quality and human perception.
arXiv Detail & Related papers (2024-03-06T14:27:02Z)
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- A Residual Diffusion Model for High Perceptual Quality Codec Augmentation [1.868930790098705]
Diffusion probabilistic models have recently achieved remarkable success in generating high quality image and video data.
In this work, we build on this class of generative models and introduce a method for lossy compression of high resolution images.
We show that, while sampling from diffusion probabilistic models is notoriously expensive, the number of sampling steps can be drastically reduced in the compression setting.
arXiv Detail & Related papers (2023-01-13T11:27:26Z)
- Lossy Image Compression with Conditional Diffusion Models [25.158390422252097]
This paper outlines an end-to-end optimized lossy image compression framework using diffusion generative models.
In contrast to VAE-based neural compression, where the (mean) decoder is a deterministic neural network, our decoder is a conditional diffusion model.
Our approach yields better FID scores than GAN-based models, while also achieving competitive performance with VAE-based models on several distortion metrics.
arXiv Detail & Related papers (2022-09-14T21:53:27Z)
- Learned Video Compression via Heterogeneous Deformable Compensation Network [78.72508633457392]
We propose a learned video compression framework with a heterogeneous deformable compensation strategy (HDCVC) to tackle the problem of unstable compression performance.
More specifically, the proposed algorithm extracts features from two adjacent frames to estimate content-neighborhood heterogeneous deformable (HetDeform) kernel offsets.
Experimental results indicate that HDCVC outperforms recent state-of-the-art learned video compression approaches.
arXiv Detail & Related papers (2022-07-11T02:31:31Z)
- Early Exit or Not: Resource-Efficient Blind Quality Enhancement for Compressed Images [54.40852143927333]
Lossy image compression is pervasively conducted to save communication bandwidth, resulting in undesirable compression artifacts.
We propose a resource-efficient blind quality enhancement (RBQE) approach for compressed images.
Our approach can automatically decide to terminate or continue enhancement according to the assessed quality of enhanced images.
arXiv Detail & Related papers (2020-06-30T07:38:47Z)
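The early-exit control flow described in the last entry can be sketched as follows. All names here (the stage list, the quality scorer, the threshold) are hypothetical stand-ins; a real system such as RBQE would use learned enhancement stages and a learned no-reference quality assessor:

```python
# Early-exit sketch: run enhancement stages only until assessed quality
# suffices, saving compute on lightly compressed inputs.
def enhance_with_early_exit(x, stages, quality_of, threshold=0.9):
    used = 0
    for stage in stages:
        if quality_of(x) >= threshold:
            break                       # early exit: skip remaining stages
        x = stage(x)
        used += 1
    return x, used

# Toy model: the "image" is just a scalar distortion level; each stage
# halves it, and assessed quality is one minus distortion.
halve = lambda d: d / 2
quality = lambda d: 1.0 - d
out, used = enhance_with_early_exit(0.4, [halve] * 5, quality)
print(used)  # 2 stages suffice: 0.4 -> 0.2 -> 0.1, then quality reaches 0.9
```

The key design point is that the termination decision is blind: it depends only on the assessed quality of the current output, not on knowing the original image or its compression level.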
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.