Context-Based Trit-Plane Coding for Progressive Image Compression
- URL: http://arxiv.org/abs/2303.05715v2
- Date: Mon, 13 Mar 2023 07:09:03 GMT
- Title: Context-Based Trit-Plane Coding for Progressive Image Compression
- Authors: Seungmin Jeon, Kwang Pyo Choi, Youngo Park and Chang-Su Kim
- Abstract summary: Trit-plane coding enables deep progressive image compression, but it cannot use autoregressive context models.
First, we develop the context-based rate reduction module to estimate trit probabilities of latent elements accurately.
Second, we develop the context-based distortion reduction module to refine partial latent tensors from the trit-planes.
Third, we propose a retraining scheme for the decoder to attain better rate-distortion tradeoffs.
- Score: 31.396712329965005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trit-plane coding enables deep progressive image compression, but it cannot
use autoregressive context models. In this paper, we propose the context-based
trit-plane coding (CTC) algorithm to achieve progressive compression more
compactly. First, we develop the context-based rate reduction module to
estimate trit probabilities of latent elements accurately and thus encode the
trit-planes compactly. Second, we develop the context-based distortion
reduction module to refine partial latent tensors from the trit-planes and
improve the reconstructed image quality. Third, we propose a retraining scheme
for the decoder to attain better rate-distortion tradeoffs. Extensive
experiments show that CTC outperforms the baseline trit-plane codec
significantly in BD-rate on the Kodak lossless dataset, while increasing the
time complexity only marginally. Our codes are available at
https://github.com/seungminjeon-github/CTC.
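To make the trit-plane idea concrete, here is a minimal NumPy sketch of slicing a quantized latent into base-3 digit planes and progressively reconstructing it. This is an illustration of the general mechanism only: the actual DPICT/CTC codecs work on shifted signed latents and entropy-code each trit with learned (context-based) probabilities, none of which is modeled here.

```python
import numpy as np

def trit_planes(latent, n_planes):
    """Slice a non-negative integer latent tensor into base-3 digit planes,
    most significant trit first (illustrative sketch only)."""
    planes = []
    for p in range(n_planes - 1, -1, -1):
        planes.append((latent // 3 ** p) % 3)
    return planes  # planes[0] is the most significant trit-plane

def reconstruct(planes, n_planes):
    """Progressively rebuild the latent from the received trit-planes.
    Untransmitted low-order planes are approximated by the midpoint of
    the remaining uncertainty interval. Assumes at least one plane."""
    est = np.zeros_like(planes[0], dtype=float)
    for i, plane in enumerate(planes):
        est = est + plane * 3 ** (n_planes - 1 - i)
    est = est + (3 ** (n_planes - len(planes)) - 1) / 2
    return est

latent = np.array([[5, 17], [26, 0]])
planes = trit_planes(latent, n_planes=3)
full = reconstruct(planes, 3)        # all planes: exact recovery
coarse = reconstruct(planes[:2], 3)  # fewer planes: coarser estimate
```

Transmitting the planes in decreasing order of significance is what gives the fine granular scalability: each additional trit-plane shrinks the reconstruction interval by a factor of three.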
Related papers
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem in the approach of VAEs.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
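A classical way to enforce such a per-pixel error bound is uniform residual quantization with step $2\tau+1$, as in JPEG-LS near-lossless mode. The sketch below shows that mechanism; it illustrates the guarantee only and is not DLPR's actual learned quantizer.

```python
import numpy as np

def quantize_residual(residual, tau):
    """Uniform near-lossless quantizer: after dequantization the
    per-element error never exceeds tau (JPEG-LS-style sketch)."""
    step = 2 * tau + 1
    return (np.sign(residual) * ((np.abs(residual) + tau) // step)).astype(int)

def dequantize_residual(q, tau):
    return q * (2 * tau + 1)

rng = np.random.default_rng(0)
r = rng.integers(-255, 256, size=1000)       # synthetic residuals
tau = 2
r_hat = dequantize_residual(quantize_residual(r, tau), tau)
assert np.max(np.abs(r - r_hat)) <= tau      # the l_inf bound holds
```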
arXiv Detail & Related papers (2022-09-11T12:11:56Z) - Cross Modal Compression: Towards Human-comprehensible Semantic Compression [73.89616626853913]
Cross modal compression is a semantic compression framework for visual data.
We show that our proposed CMC can achieve encouraging reconstructed results with an ultrahigh compression ratio.
arXiv Detail & Related papers (2022-09-06T15:31:11Z) - Asymmetric Learned Image Compression with Multi-Scale Residual Block, Importance Map, and Post-Quantization Filtering [15.056672221375104]
Deep learning-based image compression has achieved better rate-distortion (R-D) performance than the latest traditional method, H.266/VVC.
Many leading learned schemes cannot maintain a good trade-off between performance and complexity.
We propose an efficient and effective image coding framework, which achieves similar R-D performance with lower complexity than the state of the art.
arXiv Detail & Related papers (2022-06-21T09:34:29Z) - RD-Optimized Trit-Plane Coding of Deep Compressed Image Latent Tensors [40.86513649546442]
DPICT is the first learning-based image codec supporting fine granular scalability.
In this paper, we describe how to implement two key components of DPICT efficiently: trit-plane slicing and RD-prioritized transmission.
arXiv Detail & Related papers (2022-03-25T06:33:16Z) - DPICT: Deep Progressive Image Compression Using Trit-Planes [36.34865777731784]
We propose the deep progressive image compression using trit-planes (DPICT) algorithm.
We transform an image into a latent tensor using an analysis network.
We encode it into a compressed bitstream trit-plane by trit-plane in the decreasing order of significance.
arXiv Detail & Related papers (2021-12-12T22:09:33Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - Checkerboard Context Model for Efficient Learned Image Compression [6.376339829493938]
For learned image compression, the autoregressive context model is proved effective in improving the rate-distortion (RD) performance.
We propose a parallelizable checkerboard context model (CCM) to solve the problem.
In our experiments, CCM speeds up the decoding process by more than 40 times, significantly improving computational efficiency with almost the same rate-distortion performance.
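The checkerboard idea can be sketched in a few lines: anchor positions (one color of the checkerboard) are decoded in parallel from the hyperprior alone, and the remaining positions are then decoded in parallel using context computed from the already-decoded anchors. Below, `avg_neighbors` is a hypothetical stand-in for the learned masked-convolution context model.

```python
import numpy as np

def checkerboard_mask(h, w):
    """Anchor positions (True) form one color of a checkerboard."""
    yy, xx = np.indices((h, w))
    return (yy + xx) % 2 == 0

def avg_neighbors(x):
    """Toy context model: mean of the 4-neighborhood (zero-padded)."""
    p = np.pad(x, 1)
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4

def two_pass_decode(latent_mean, context_fn):
    """Pass 1: anchors decoded from the hyperprior prediction alone.
    Pass 2: non-anchors use context derived from the decoded anchors.
    Each pass is fully parallel, unlike raster-scan autoregression."""
    h, w = latent_mean.shape
    anchors = checkerboard_mask(h, w)
    out = np.zeros_like(latent_mean)
    out[anchors] = latent_mean[anchors]   # pass 1 (parallel)
    ctx = context_fn(out * anchors)       # context from anchors only
    out[~anchors] = ctx[~anchors]         # pass 2 (parallel)
    return out
```

The design trade-off is exactly as the abstract states: two parallel passes instead of H×W sequential steps, at the cost of each non-anchor seeing only its anchor neighbors as context.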
arXiv Detail & Related papers (2021-03-29T03:25:41Z) - MuSCLE: Multi Sweep Compression of LiDAR using Deep Entropy Models [78.93424358827528]
We present a novel compression algorithm for reducing the storage streams of LiDAR sensor data.
Our method significantly reduces the joint geometry and intensity bitrate compared with prior state-of-the-art LiDAR compression methods.
arXiv Detail & Related papers (2020-11-15T17:41:14Z) - Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z) - OctSqueeze: Octree-Structured Entropy Model for LiDAR Compression [77.8842824702423]
We present a novel deep compression algorithm to reduce the memory footprint of LiDAR point clouds.
Our method exploits the sparsity and structural redundancy between points to reduce the memory footprint.
Our algorithm can be used to reduce the onboard and offboard storage of LiDAR points for applications such as self-driving cars.
arXiv Detail & Related papers (2020-05-14T17:48:49Z) - Deep Learning-based Image Compression with Trellis Coded Quantization [13.728517700074423]
We propose to incorporate trellis coded quantizer (TCQ) into a deep learning based image compression framework.
A soft-to-hard strategy is applied to allow for back propagation during training.
We develop a simple image compression model that consists of three networks (encoder, decoder and entropy estimation) and optimize all of the components in an end-to-end manner.
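A generic soft-to-hard strategy can be sketched as an annealed soft assignment to codebook centers: differentiable during training, and approaching hard nearest-neighbor quantization as the temperature parameter grows. This is a sketch of the general strategy, not the paper's trellis coded quantizer.

```python
import numpy as np

def soft_quantize(z, centers, sigma):
    """Soft assignment of each value in z to codebook centers.
    Small sigma -> smooth, differentiable mixture (training);
    large sigma -> effectively hard nearest-center rounding (inference)."""
    d2 = (z[..., None] - centers) ** 2            # squared distances
    w = np.exp(-sigma * d2)
    w /= w.sum(axis=-1, keepdims=True)            # softmax weights
    return (w * centers).sum(axis=-1)

centers = np.array([-1.0, 0.0, 1.0])
z = np.array([0.4, -0.9])
soft = soft_quantize(z, centers, sigma=1.0)    # smooth surrogate
hard = soft_quantize(z, centers, sigma=100.0)  # ~ nearest center
```

In a deep framework the same softmax weights would be computed with a framework's autodiff so that gradients flow through the quantizer during training.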
arXiv Detail & Related papers (2020-01-26T08:00:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.