Post-Training Quantization for Cross-Platform Learned Image Compression
- URL: http://arxiv.org/abs/2202.07513v1
- Date: Tue, 15 Feb 2022 15:41:12 GMT
- Title: Post-Training Quantization for Cross-Platform Learned Image Compression
- Authors: Dailan He, Ziming Yang, Yuan Chen, Qi Zhang, Hongwei Qin, Yan Wang
- Abstract summary: Learned image compression has been shown to outperform conventional image coding techniques.
One of the most critical issues to consider is non-deterministic calculation.
We propose to solve this problem by introducing well-developed post-training quantization.
- Score: 15.67527732099067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learned image compression has been shown to outperform conventional
image coding techniques and is becoming practical for industrial applications.
One of the most critical issues to consider is non-deterministic calculation,
which makes the probability prediction inconsistent across platforms and
frustrates successful decoding. We propose to solve this problem by introducing
well-developed post-training quantization and making the model inference
integer-arithmetic-only, which is much simpler than existing training- and
fine-tuning-based approaches yet still keeps the superior rate-distortion
performance of learned image compression. Based on that, we further improve the
discretization of the entropy parameters and extend the deterministic inference
to fit Gaussian mixture models. With our proposed methods, current
state-of-the-art image compression models can infer in a cross-platform-consistent
manner, which makes the further development and practice of learned image
compression more promising.
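The abstract's central problem is that floating-point probability prediction drifts across platforms, desynchronizing the arithmetic decoder. The paper's remedy is integer-only inference; as a hedged, minimal sketch of the discretization idea it mentions (not the authors' actual scheme or API — `build_scale_table` and `quantized_gaussian_cdf` are illustrative names), one can snap predicted Gaussian scales to a fixed table and precompute integer CDFs for a deterministic range coder:

```python
import numpy as np
from scipy.special import erf

def build_scale_table(num_scales=64, s_min=0.11, s_max=256.0):
    """Log-spaced table of Gaussian scales. Only the integer table index is
    used at coding time, so every platform reconstructs the same entropy
    model regardless of floating-point quirks."""
    return np.exp(np.linspace(np.log(s_min), np.log(s_max), num_scales))

def quantized_gaussian_cdf(scale, support=32, precision=16):
    """Integer-valued CDF with total mass 2**precision, as a deterministic
    range coder requires. Built offline once per table entry."""
    xs = np.arange(-support, support + 1)
    cdf = 0.5 * (1.0 + erf((xs + 0.5) / (scale * np.sqrt(2.0))))
    pmf = np.diff(np.concatenate([[0.0], cdf]))
    q = np.maximum(1, np.round(pmf / pmf.sum() * (1 << precision))).astype(np.int64)
    q[np.argmax(q)] += (1 << precision) - q.sum()  # force the exact total
    return np.cumsum(np.concatenate([[0], q]))

table = build_scale_table()
idx = int(np.argmin(np.abs(table - 2.37)))  # encoder snaps a predicted scale
cdf = quantized_gaussian_cdf(table[idx])    # both sides index the same tables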
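```

Because only table indices and integer CDFs cross the platform boundary, encoder and decoder stay in lockstep even when their floating-point units disagree.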
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have come to dominate the field of large generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Zero-Shot Image Compression with Diffusion-Based Posterior Sampling [34.50287066865267]
This work harnesses the image prior learned by existing pre-trained diffusion models to solve the task of lossy image compression.
Our method, PSC (Posterior Sampling-based Compression), utilizes zero-shot diffusion-based posterior samplers.
PSC achieves competitive results compared to established methods, paving the way for further exploration of pre-trained diffusion models and posterior samplers for image compression.
arXiv Detail & Related papers (2024-07-13T14:24:22Z)
- A Training-Free Defense Framework for Robust Learned Image Compression [48.41990144764295]
We study the robustness of learned image compression models against adversarial attacks.
We present a training-free defense technique based on simple image transform functions; a toy sketch follows this entry.
arXiv Detail & Related papers (2024-01-22T12:50:21Z)
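As a toy illustration of the training-free, transform-based defense above (the specific transforms and parameters here are guesses for illustration, not the paper's choices):

```python
from PIL import Image, ImageFilter

def transform_defense(img: Image.Image, scale: float = 0.9,
                      blur_radius: float = 0.6) -> Image.Image:
    """Cheap, training-free pre-filtering intended to wash out adversarial
    perturbations while roughly preserving image content."""
    w, h = img.size
    # A down/up-sampling round trip destroys high-frequency perturbations.
    small = img.resize((max(1, int(w * scale)), max(1, int(h * scale))),
                       Image.BICUBIC)
    restored = small.resize((w, h), Image.BICUBIC)
    # A mild blur suppresses what survives the resampling.
    return restored.filter(ImageFilter.GaussianBlur(blur_radius))
```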
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that use learned image compressors as pre-processing modules; a generic sketch follows this entry.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
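A generic, hedged sketch of the kind of pipeline this entry targets: the attacker differentiates through the compressor and the classifier jointly, here with a single FGSM step. `compressor` and `classifier` are hypothetical stand-in torch modules; nothing here reproduces the paper's transferable-perturbation method.

```python
import torch
import torch.nn.functional as F

def fgsm_through_compressor(x, label, compressor, classifier, eps=4 / 255):
    """A single FGSM step whose gradient flows through the learned compressor
    that sits in front of the classifier."""
    x = x.clone().detach().requires_grad_(True)
    logits = classifier(compressor(x))      # end-to-end forward pass
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step in the loss-increasing direction; keep pixels in a valid range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```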
- Reducing The Amortization Gap of Entropy Bottleneck In End-to-End Image Compression [2.1485350418225244]
End-to-end deep trainable models are about to exceed the performance of traditional handcrafted compression techniques on videos and images.
We propose a simple yet efficient instance-based parameterization method to reduce the amortization gap at a minor cost; an illustrative per-image refinement sketch follows this entry.
arXiv Detail & Related papers (2022-09-02T11:43:45Z)
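One common way to shrink an amortization gap, sketched here under our assumption (not necessarily the paper's parameterization) that the per-instance parameters are the latents themselves: refine a single image's latent against a rate-distortion objective while the decoder and entropy model stay frozen. `decoder` and `rate_fn` are hypothetical stand-ins.

```python
import torch

def refine_latent(y0, x, decoder, rate_fn, lam=0.01, steps=100, lr=1e-2):
    """Per-image refinement: start from the amortized encoder's latent y0 and
    minimize distortion + lam * rate for this particular image x, keeping the
    decoder and entropy model (rate_fn) frozen."""
    y = y0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = decoder(y)                 # frozen decoder
        distortion = torch.mean((x - x_hat) ** 2)
        rate = rate_fn(y)                  # differentiable bits estimate
        (distortion + lam * rate).backward()
        opt.step()
    return y.detach()
```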
- Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression [151.3826781154146]
Modeling latent variables with priors and hyperpriors is an essential problem in variational image compression.
We find that inter-correlations and intra-correlations exist when observing latent variables from a vectorized perspective.
Our model achieves better rate-distortion performance and an impressive $3.18\times$ compression speedup; a toy mixture-likelihood sketch follows this entry.
arXiv Detail & Related papers (2022-03-21T11:44:17Z)
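A minimal sketch of scoring vectorized latents under a Gaussian mixture with torch.distributions. For brevity the components here are diagonal Gaussians; capturing the inter- and intra-correlations the entry mentions would call for full-covariance `MultivariateNormal` components. All sizes are arbitrary.

```python
import torch
from torch import distributions as D

K, dim = 3, 4                                   # mixture components, vector size
weights = torch.softmax(torch.randn(K), dim=0)  # mixing weights
means = torch.randn(K, dim)
scales = torch.rand(K, dim) + 0.1               # diagonal covariances for brevity

mix = D.MixtureSameFamily(
    D.Categorical(probs=weights),
    D.Independent(D.Normal(means, scales), 1),  # each component: a dim-d Gaussian
)

y = torch.randn(8, dim)                         # a batch of vectorized latents
# Rate proxy in bits per vector: bits ~ -log2 p(y).
bits = -mix.log_prob(y).mean() / torch.log(torch.tensor(2.0))
```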
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs, including quantization, quantization-aware retraining, and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work; a minimal fit-then-quantize sketch follows this entry.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
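A compact, hedged sketch of the fit-then-quantize idea behind INR compression; the architecture, step count, and bit-width are arbitrary choices, and the paper's pipeline additionally includes quantization-aware retraining and entropy coding.

```python
import torch
from torch import nn

# A tiny coordinate MLP standing in for the paper's INR architecture.
inr = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3))

def fit_inr(image, steps=2000, lr=1e-3):
    """image: (H, W, 3) tensor in [0, 1]; fit inr to map (x, y) -> RGB."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = image.reshape(-1, 3)
    opt = torch.optim.Adam(inr.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(inr(coords), target)
        loss.backward()
        opt.step()

def quantize_weights(model, bits=8):
    """Uniform post-hoc weight quantization; the paper's pipeline adds
    quantization-aware retraining and entropy coding on top."""
    with torch.no_grad():
        for p in model.parameters():
            scale = p.abs().max() / (2 ** (bits - 1) - 1)
            if scale > 0:
                p.copy_(torch.round(p / scale) * scale)
```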
- Countering Adversarial Examples: Combining Input Transformation and Noisy Training [15.561916630351947]
Adversarial examples pose a threat to security-sensitive image recognition tasks.
Traditional JPEG compression is insufficient to defend against those attacks and can cause an abrupt accuracy decline on benign images.
We modify the traditional JPEG compression algorithm to make it more favorable for neural networks; a baseline JPEG round-trip sketch follows this entry.
arXiv Detail & Related papers (2021-06-25T02:46:52Z)
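Without knowing the paper's exact modifications, the baseline it builds on — a JPEG compress-decompress round trip used as an input-transformation defense — looks like this (the quality value is an arbitrary choice):

```python
import io
from PIL import Image

def jpeg_roundtrip(img: Image.Image, quality: int = 75) -> Image.Image:
    """Compress and immediately decompress with standard JPEG; the lossy
    quantization tends to remove small adversarial perturbations."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```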
- Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance; a single-level hyperprior sketch follows this entry.
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
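The hyperprior pattern this entry builds on can be sketched in a few lines: a hyper-encoder summarizes the latents into side information, and a hyper-decoder predicts the Gaussian scales used to estimate their rate. This toy is single-level and zero-mean, unlike the paper's coarse-to-fine variant; layer shapes are arbitrary.

```python
import torch
from torch import nn

class ToyHyperprior(nn.Module):
    """Single-level hyperprior: a hyper-encoder summarizes latent y into side
    information z, and a hyper-decoder predicts per-element Gaussian scales
    used to estimate the rate of y (means fixed to zero for brevity)."""
    def __init__(self, c=32):
        super().__init__()
        self.hyper_enc = nn.Sequential(nn.Conv2d(c, c, 3, 2, 1), nn.ReLU(),
                                       nn.Conv2d(c, c, 3, 2, 1))
        self.hyper_dec = nn.Sequential(nn.ConvTranspose2d(c, c, 4, 2, 1), nn.ReLU(),
                                       nn.ConvTranspose2d(c, c, 4, 2, 1))

    def forward(self, y):
        z = self.hyper_enc(y)              # side information (also coded)
        scales = nn.functional.softplus(self.hyper_dec(z)) + 1e-6
        gauss = torch.distributions.Normal(0.0, scales)
        # Probability of each latent landing in its unit quantization bin.
        prob = gauss.cdf(y + 0.5) - gauss.cdf(y - 0.5)
        return -torch.log2(prob.clamp_min(1e-9)).sum()  # estimated bits

# bits = ToyHyperprior()(torch.round(torch.randn(1, 32, 64, 64) * 3))
```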
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.