Ultra-Low Bitrate Perceptual Image Compression with Shallow Encoder
- URL: http://arxiv.org/abs/2512.12229v1
- Date: Sat, 13 Dec 2025 07:59:48 GMT
- Title: Ultra-Low Bitrate Perceptual Image Compression with Shallow Encoder
- Authors: Tianyu Zhang, Dong Liu, Chang Wen Chen
- Abstract summary: We propose a novel Asymmetric Extreme Image Compression (AEIC) framework that simultaneously pursues encoding simplicity and decoding quality. AEIC employs moderate or even shallow encoder networks while leveraging a one-step diffusion decoder to maintain high-fidelity, high-realism reconstructions. Experiments demonstrate that AEIC not only outperforms existing methods in rate-distortion-perception performance at ultra-low bitrates, but also delivers exceptional encoding efficiency of 35.8 FPS on 1080p input images.
- Score: 41.0853762937392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultra-low bitrate image compression (below 0.05 bits per pixel) is increasingly critical for bandwidth-constrained and computation-limited encoding scenarios such as edge devices. Existing frameworks typically rely on large pretrained encoders (e.g., VAEs or tokenizer-based models) and perform transform coding within their generative latent space. While these approaches achieve impressive perceptual fidelity, their reliance on heavy encoder networks makes them unsuitable for deployment on weak sender devices. In this work, we explore the feasibility of applying shallow encoders for ultra-low bitrate compression and propose a novel Asymmetric Extreme Image Compression (AEIC) framework that simultaneously pursues encoding simplicity and decoding quality. Specifically, AEIC employs moderate or even shallow encoder networks, while leveraging a one-step diffusion decoder to maintain high-fidelity and high-realism reconstructions under extreme bitrates. To further enhance the efficiency of shallow encoders, we design a dual-side feature distillation scheme that transfers knowledge from AEIC with moderate encoders to its shallow encoder variants. Experiments demonstrate that AEIC not only outperforms existing methods in rate-distortion-perception performance at ultra-low bitrates, but also delivers exceptional encoding efficiency of 35.8 FPS on 1080p input images, while maintaining competitive decoding speed compared to existing methods.
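The "ultra-low bitrate" regime the abstract defines is easy to quantify. A minimal back-of-the-envelope calculation in plain Python, using only the figures from the abstract (0.05 bits per pixel, 1080p input), gives the total bit budget per image:

```python
# Bit budget at the ultra-low-bitrate ceiling: bpp = total_bits / (H * W).
H, W = 1080, 1920              # 1080p input image
bpp = 0.05                     # the "ultra-low" threshold cited in the abstract

total_bits = bpp * H * W       # about 103,680 bits for the whole image
total_bytes = total_bits / 8   # about 12,960 bytes, i.e. roughly 12.7 KiB
```

At roughly 13 KB per 1080p frame, the entire image must fit in less space than a typical thumbnail, which is why generative decoders are needed to synthesize plausible detail at the receiver.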
Related papers
- ProGIC: Progressive and Lightweight Generative Image Compression with Residual Vector Quantization [59.481950697968706]
We propose Progressive Generative Image Compression (ProGIC), a compact codec built on residual vector quantization (RVQ). In RVQ, a sequence of vector quantizers encodes the residuals stage by stage, each with its own codebook. We pair this with a lightweight backbone based on depthwise-separable convolutions and small attention blocks, enabling practical deployment on both GPU and CPU-only devices.
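The stage-by-stage residual quantization described above can be sketched in a few lines of NumPy. This is a toy illustration with hypothetical codebook sizes, not ProGIC's actual configuration; a zero codeword is appended to each codebook so that every stage can only shrink the residual, making the progressive-refinement property hold by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_codes, n_stages = 8, 16, 3   # hypothetical toy sizes

# One codebook per stage; the appended zero codeword guarantees each
# stage never increases the norm of the residual it quantizes.
codebooks = [np.vstack([rng.normal(size=(n_codes, dim)), np.zeros(dim)])
             for _ in range(n_stages)]

def rvq_encode(x, codebooks):
    """Return one codeword index per stage; each stage quantizes the
    residual left over by the stages before it."""
    residual, indices = x.astype(float), []
    for cb in codebooks:
        i = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        indices.append(i)
        residual = residual - cb[i]
    return indices

def rvq_decode(indices, codebooks):
    """Sum the selected codewords; truncating the index list yields a
    coarser (progressive) reconstruction."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))

x = rng.normal(size=dim)
idx = rvq_encode(x, codebooks)
# Reconstruction error is non-increasing as more stages are decoded.
errs = [np.linalg.norm(x - rvq_decode(idx[:k], codebooks[:k]))
        for k in range(1, n_stages + 1)]
```

Decoding the first k index lists gives a usable coarse image, which is what makes RVQ a natural fit for progressive transmission.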
arXiv Detail & Related papers (2026-03-03T11:47:05Z) - SIEDD: Shared-Implicit Encoder with Discrete Decoders [36.705337163276255]
Implicit Neural Representations (INRs) offer exceptional fidelity for video compression by learning per-video optimized functions. Existing attempts to accelerate INR encoding often sacrifice reconstruction quality or crucial coordinate-level control. We introduce SIEDD, a novel architecture that fundamentally accelerates INR encoding without these compromises.
arXiv Detail & Related papers (2025-06-29T19:39:43Z) - StableCodec: Taming One-Step Diffusion for Extreme Image Compression [19.69733852050049]
Diffusion-based image compression has shown remarkable potential for achieving ultra-low bitrate coding (less than 0.05 bits per pixel) with high realism. Current approaches require a large number of denoising steps at the decoder to generate realistic results under extreme constraints. We introduce StableCodec, which enables one-step diffusion for high-fidelity and high-realism extreme image compression.
arXiv Detail & Related papers (2025-06-27T07:39:21Z) - Single-step Diffusion for Image Compression at Ultra-Low Bitrates [19.76457078979179]
We propose a single-step diffusion model for image compression that delivers high perceptual quality and fast decoding at ultra-low bitrates. Our approach incorporates two key innovations: (i) Vector-Quantized Residual (VQ-Residual) training, which factorizes the latent representation into a structural base code and a learned residual. Our method achieves compression performance comparable to state-of-the-art methods while improving decoding speed by about 50x.
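The base-plus-residual factorization in (i) can be illustrated with a short NumPy sketch. All sizes here are hypothetical toy values, and the exact residual stands in for the paper's learned residual predictor:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy sizes -- not the paper's actual configuration.
codebook = rng.normal(size=(32, 16))   # 32 codewords, 16-dim latents
z = rng.normal(size=16)                # latent vector to compress

# Structural base code: index of the nearest codeword in the codebook.
i = int(np.argmin(np.linalg.norm(codebook - z, axis=1)))
base = codebook[i]

# Residual correction; the exact residual stands in here for the
# learned residual module described in the abstract.
residual = z - base

z_hat = base + residual                # decoder recombines both parts
```

The base code carries coarse structure in very few bits (a codeword index), while the residual restores fine detail, mirroring the coarse-to-fine split the abstract describes.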
arXiv Detail & Related papers (2025-06-19T19:53:27Z) - Embedding Compression Distortion in Video Coding for Machines [67.97469042910855]
Currently, video transmission serves not only the Human Visual System (HVS) for viewing but also machine perception for analysis. We propose a Compression Distortion Embedding (CDRE) framework, which extracts a machine-perception-related distortion representation and embeds it into downstream models. Our framework can effectively boost the rate-task performance of existing codecs with minimal overhead in terms of execution time and number of parameters.
arXiv Detail & Related papers (2025-03-27T13:01:53Z) - REGEN: Learning Compact Video Embedding with (Re-)Generative Decoder [52.698595889988766]
We present a novel perspective on learning video embedders for generative modeling. Rather than requiring an exact reproduction of an input video, an effective embedder should focus on visually plausible reconstructions. We propose replacing the conventional encoder-decoder video embedder with an encoder-generator framework.
arXiv Detail & Related papers (2025-03-11T17:51:07Z) - Enhancing the Rate-Distortion-Perception Flexibility of Learned Image
Codecs with Conditional Diffusion Decoders [7.485128109817576]
We show that conditional diffusion models can lead to promising results in the generative compression task when used as a decoder.
arXiv Detail & Related papers (2024-03-05T11:48:35Z) - MISC: Ultra-low Bitrate Image Semantic Compression Driven by Large Multimodal Model [78.4051835615796]
This paper proposes a method called Multimodal Image Semantic Compression (MISC).
It consists of an LMM encoder for extracting the semantic information of the image, a map encoder to locate the regions corresponding to the semantics, an image encoder that generates an extremely compressed bitstream, and a decoder that reconstructs the image from the above information.
It can achieve optimal consistency and perception results while saving about 50% bitrate, which has strong potential applications in the next generation of storage and communication.
arXiv Detail & Related papers (2024-02-26T17:11:11Z) - Low-complexity Deep Video Compression with A Distributed Coding Architecture [4.5885672744218]
Prevalent predictive coding-based video compression methods rely on a heavy encoder to reduce temporal redundancy.
Traditional distributed coding methods suffer from a substantial performance gap compared to predictive coding ones.
We propose the first end-to-end distributed deep video compression framework to improve rate-distortion performance.
arXiv Detail & Related papers (2023-03-21T05:34:04Z) - Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.