Lossless Image Compression Using a Multi-Scale Progressive Statistical
Model
- URL: http://arxiv.org/abs/2108.10551v1
- Date: Tue, 24 Aug 2021 07:33:13 GMT
- Title: Lossless Image Compression Using a Multi-Scale Progressive Statistical
Model
- Authors: Honglei Zhang, Francesco Cricri, Hamed R. Tavakoli, Nannan Zou, Emre
Aksu, Miska M. Hannuksela
- Abstract summary: Methods based on pixel-wise autoregressive statistical models have shown good performance.
We propose a multi-scale progressive statistical model that combines the advantages of the pixel-wise and multi-scale approaches.
- Score: 16.58692559039154
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Lossless image compression is an important technique for image storage and
transmission when information loss is not allowed. With the fast development of
deep learning techniques, deep neural networks have been used in this field to
achieve a higher compression rate. Methods based on pixel-wise autoregressive
statistical models have shown good performance. However, their sequential
processing prevents these methods from being used in practice. Recently,
multi-scale autoregressive models have been proposed to address this
limitation. Multi-scale approaches can exploit parallel computing systems
efficiently, enabling practical implementations. Nevertheless, these approaches
sacrifice compression performance in exchange for speed. In this paper, we
propose a multi-scale progressive statistical model that combines the advantages
of the pixel-wise and multi-scale approaches. We developed a flexible
mechanism where the processing order of the pixels can be adjusted easily. Our
proposed method outperforms the state-of-the-art lossless image compression
methods on two large benchmark datasets by a significant margin without
degrading the inference speed dramatically.
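To make the distinction concrete, here is a minimal sketch (our own illustration under stated assumptions, not the authors' code) of why multi-scale ordering helps: a pixel-wise autoregressive model codes pixels strictly one at a time, whereas a multi-scale scheme assigns pixels to groups, coarse to fine, so that every pixel in a group can be coded in parallel conditioned only on earlier groups. The helper `multiscale_groups` below is a hypothetical name; real models also subdivide each refinement into further sub-steps.

```python
def multiscale_groups(h, w, scales=3):
    """Assign each pixel of an h-by-w image to a processing group,
    coarse to fine. Group 0 is the coarsest subsampled grid; each
    later group fills in progressively finer pixels. Pixels within a
    group share no autoregressive dependency on one another, so a
    whole group can be coded in parallel conditioned on earlier groups.
    """
    group = [[None] * w for _ in range(h)]
    step = 2 ** scales
    g = 0
    # Group 0: the coarsest subsampled grid.
    for y in range(0, h, step):
        for x in range(0, w, step):
            group[y][x] = g
    # Each refinement halves the grid spacing and labels the
    # newly revealed pixels with the next group index.
    while step > 1:
        half = step // 2
        g += 1
        for y in range(0, h, half):
            for x in range(0, w, half):
                if group[y][x] is None:
                    group[y][x] = g
        step = half
    return group
```

A sequential pixel-wise model needs h * w coding steps, while this ordering needs only scales + 1 steps, each one a parallel batch; the compression-vs-speed question is then how much predictive context is lost by coding a whole group at once.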
Related papers
- SpotDiffusion: A Fast Approach For Seamless Panorama Generation Over Time [7.532695984765271]
We present a novel approach to generate high-resolution images with generative models.
Our method shifts non-overlapping denoising windows over time, ensuring that seams in one timestep are corrected in the next.
Our method offers several key benefits, including improved computational efficiency and faster inference times.
arXiv Detail & Related papers (2024-07-22T09:44:35Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
However, LIC methods fail to explicitly exploit the image structure and texture components crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies LIC and masked image modeling end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z)
- Reducing The Amortization Gap of Entropy Bottleneck In End-to-End Image Compression [2.1485350418225244]
End-to-end deep trainable models are about to exceed the performance of the traditional handcrafted compression techniques on videos and images.
We propose a simple yet efficient instance-based parameterization method to reduce this amortization gap at a minor cost.
arXiv Detail & Related papers (2022-09-02T11:43:45Z)
- Post-Training Quantization for Cross-Platform Learned Image Compression [15.67527732099067]
It has been witnessed that learned image compression has outperformed conventional image coding techniques.
One of the most critical issues that needs to be considered is non-deterministic calculation.
We propose to solve this problem by introducing well-developed post-training quantization.
arXiv Detail & Related papers (2022-02-15T15:41:12Z)
- Compressed Smooth Sparse Decomposition [3.8644240125444]
We propose a fast and data-efficient method with theoretical performance guarantee for sparse anomaly detection in images.
The proposed method, named Compressed Smooth Sparse Decomposition (CSSD), is a one-step method that unifies the compressive image acquisition and decomposition-based image processing techniques.
Compared to traditional smooth and sparse decomposition algorithms, significant transmission cost reduction and computational speed boost can be achieved with negligible performance loss.
arXiv Detail & Related papers (2022-01-19T03:50:41Z)
- Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms trade quality for size by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- PixelPyramids: Exact Inference Models from Lossless Image Pyramids [58.949070311990916]
PixelPyramids is a block-autoregressive approach with scale-specific representations to encode the joint distribution of image pixels.
It yields state-of-the-art results for density estimation on various image datasets, especially for high-resolution data.
For CelebA-HQ 1024 x 1024, the density estimates improve to 44% of the baseline, with sampling speeds superior even to easily parallelizable flow-based models.
arXiv Detail & Related papers (2021-10-17T10:47:29Z)
- Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image-compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.