Deep Lossless Image Compression via Masked Sampling and Coarse-to-Fine Auto-Regression
- URL: http://arxiv.org/abs/2503.11231v1
- Date: Fri, 14 Mar 2025 09:29:55 GMT
- Title: Deep Lossless Image Compression via Masked Sampling and Coarse-to-Fine Auto-Regression
- Authors: Tiantian Li, Qunbing Xia, Yue Li, Ruixiao Guo, Gaobo Yang
- Abstract summary: We propose a deep lossless image compression method via masked sampling and coarse-to-fine auto-regression. It combines lossy reconstruction and progressive residual compression, fusing contexts from various directions. Our method achieves comparable compression performance on extensive datasets with competitive coding speed and more flexibility.
- Score: 8.6984128323386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-based lossless image compression employs pixel-based or subimage-based auto-regression for probability estimation, which achieves desirable performance. However, existing works only consider context dependencies in one direction, namely, those symbols that appear before the current symbol in raster order. We believe that the dependencies between the current and future symbols should be further considered. In this work, we propose a deep lossless image compression method via masked sampling and coarse-to-fine auto-regression. It combines lossy reconstruction and progressive residual compression, which fuses contexts from various directions and is more consistent with human perception. Specifically, the residuals are decomposed via $T$ iterative masked sampling steps, and each sampling consists of three steps: 1) probability estimation, 2) mask computation, and 3) arithmetic coding. The iterative process progressively refines our prediction and gradually reveals the real image. Extensive experimental results show that, compared with existing traditional and learned lossless compression methods, our method achieves comparable compression performance on extensive datasets with competitive coding speed and more flexibility.
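The three-step iterative loop described in the abstract (probability estimation, mask computation, arithmetic coding) can be sketched roughly as follows. This is a hypothetical illustration only: the probability model and the arithmetic coder are stubbed out (confidence is faked from residual magnitude, and "coding" just collects symbols), and the function name and scheduling are assumptions, not the paper's implementation.

```python
import numpy as np

def masked_sampling_encode(residual, T=4):
    """Hypothetical sketch of the T-step coarse-to-fine residual coding loop.

    Each iteration: 1) estimate per-symbol confidence (stubbed),
    2) compute a mask selecting the most confident uncoded positions,
    3) arithmetic-code the masked symbols (stubbed as list appends).
    """
    h, w = residual.shape
    coded = np.zeros((h, w), dtype=bool)   # positions already coded
    bitstream = []
    for t in range(T):
        remaining = int((~coded).sum())
        if remaining == 0:
            break
        # 1) probability estimation (stub: small-magnitude residuals are "easy")
        confidence = -np.abs(residual).astype(float)
        confidence[coded] = -np.inf        # never re-code a position
        # 2) mask computation: code roughly 1/(T - t) of the remaining symbols
        k = remaining if t == T - 1 else max(1, remaining // (T - t))
        flat_idx = np.argpartition(confidence.ravel(), -k)[-k:]
        mask = np.zeros(h * w, dtype=bool)
        mask[flat_idx] = True
        mask = mask.reshape(h, w)
        # 3) arithmetic coding (stub): emit the selected symbols
        bitstream.append(residual[mask].tolist())
        coded |= mask
    return bitstream
```

After $T$ iterations every residual symbol has been coded exactly once, with the easiest (most confident) symbols emitted in the earliest, coarsest passes.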
Related papers
- CALLIC: Content Adaptive Learning for Lossless Image Compression [64.47244912937204]
CALLIC sets a new state-of-the-art (SOTA) for learned lossless image compression. We propose a content-aware autoregressive self-attention mechanism by leveraging convolutional gating operations. During encoding, we decompose pre-trained layers, including depth-wise convolutions, using low-rank matrices and then adapt the incremental weights on the testing image by Rate-guided Progressive Fine-Tuning (RPFT). RPFT fine-tunes with gradually increasing patches that are sorted in descending order by estimated entropy, optimizing the learning process and reducing adaptation time.
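The entropy-ordered schedule that RPFT describes could look roughly like the sketch below. This is an assumption-laden illustration: `rpft_schedule` and `entropy_fn` are hypothetical names, and `entropy_fn` stands in for the model's per-patch rate estimate.

```python
def rpft_schedule(patches, entropy_fn, num_stages=3):
    """Hypothetical sketch of the RPFT patch ordering.

    Patches are sorted in descending order of estimated entropy, and
    fine-tuning proceeds over gradually growing prefixes of this ordering,
    so the hardest (highest-rate) patches are adapted from the start.
    """
    order = sorted(range(len(patches)),
                   key=lambda i: entropy_fn(patches[i]), reverse=True)
    n = len(order)
    stages = []
    for s in range(1, num_stages + 1):
        k = max(1, (n * s) // num_stages)   # prefix grows each stage
        stages.append([patches[i] for i in order[:k]])
    return stages
```

Each stage's patch set is a superset of the previous one, so early gradient steps concentrate on the patches that cost the most bits.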
arXiv Detail & Related papers (2024-12-23T10:41:18Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
LIC methods fail to explicitly explore the image structure and texture components crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies LIC and masked image modeling end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z)
- Self-Asymmetric Invertible Network for Compression-Aware Image Rescaling [6.861753163565238]
In real-world applications, most images are compressed for transmission.
We propose the Self-Asymmetric Invertible Network (SAIN) for compression-aware image rescaling.
arXiv Detail & Related papers (2023-03-04T08:33:46Z)
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem in the approach of VAEs.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
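A uniform residual quantizer that satisfies such an $\ell_\infty$ bound $\tau$ can be sketched as below. This is a common construction for near-lossless coding, shown here as an assumption; the paper's exact quantizer may differ in detail.

```python
import math

def quantize_residual(r, tau):
    """Quantize an integer residual with a (2*tau + 1)-wide uniform quantizer.

    Guarantees |r - quantize_residual(r, tau)| <= tau, i.e. a per-pixel
    l-infinity error bound of tau. With tau = 0 the coding is lossless.
    """
    step = 2 * tau + 1
    q = step * ((abs(r) + tau) // step)
    return int(math.copysign(q, r))
```

Widening the bins to $2\tau + 1$ shrinks the residual alphabet, lowering the entropy (and hence the rate) while keeping every pixel within $\tau$ of its original value.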
arXiv Detail & Related papers (2022-09-11T12:11:56Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Learning Scalable $\ell_\infty$-constrained Near-lossless Image Compression via Joint Lossy Image and Residual Compression [118.89112502350177]
We propose a novel framework for learning $\ell_\infty$-constrained near-lossless image compression.
We derive the probability model of the quantized residual by quantizing the learned probability model of the original residual.
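Deriving the quantized residual's probability model from the learned one amounts to summing $p(r)$ over each quantization bin. A minimal sketch, assuming the $(2\tau+1)$-wide uniform quantizer commonly used for $\ell_\infty$-constrained coding (the function names are illustrative, not the paper's API):

```python
import math

def quantize_residual(r, tau):
    # (2*tau + 1)-wide uniform quantizer with |r - r_hat| <= tau
    step = 2 * tau + 1
    return int(math.copysign(step * ((abs(r) + tau) // step), r))

def quantized_pmf(pmf, tau):
    """Collapse a learned PMF over residuals into a PMF over quantized
    residuals: P(r_hat) = sum of p(r) over all r mapping to r_hat."""
    out = {}
    for r, p in pmf.items():
        q = quantize_residual(r, tau)
        out[q] = out.get(q, 0.0) + p
    return out
```

Because the bins partition the residual alphabet, the collapsed PMF still sums to one and can drive the entropy coder directly on the quantized symbols.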
arXiv Detail & Related papers (2021-03-31T11:53:36Z)
- Improving Inference for Neural Image Compression [31.999462074510305]
State-of-the-art methods build on hierarchical variational autoencoders to predict a compressible latent representation of each data point.
We identify three approximation gaps which limit performance in the conventional approach.
We propose remedies for each of these three limitations based on ideas related to iterative inference.
arXiv Detail & Related papers (2020-06-07T19:26:37Z)
- Saliency Driven Perceptual Image Compression [6.201592931432016]
The paper demonstrates that popular evaluation metrics such as MS-SSIM and PSNR are inadequate for judging the performance of image compression techniques.
A new metric is proposed, which is learned on perceptual similarity data specific to image compression.
The model not only generates images which are visually better but also gives superior performance for subsequent computer vision tasks.
arXiv Detail & Related papers (2020-02-12T13:43:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.