Checkerboard Context Model for Efficient Learned Image Compression
- URL: http://arxiv.org/abs/2103.15306v2
- Date: Thu, 1 Apr 2021 08:33:32 GMT
- Title: Checkerboard Context Model for Efficient Learned Image Compression
- Authors: Dailan He, Yaoyan Zheng, Baocheng Sun, Yan Wang, Hongwei Qin
- Abstract summary: For learned image compression, the autoregressive context model has proven effective in improving rate-distortion (RD) performance.
We propose a parallelizable checkerboard context model (CCM) to solve the problem.
It speeds up the decoding process by more than 40 times in our experiments, significantly improving computational efficiency with almost the same rate-distortion performance.
- Score: 6.376339829493938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For learned image compression, the autoregressive context model has proven
effective in improving the rate-distortion (RD) performance, because it helps
remove spatial redundancies among latent representations. However, the decoding
process must follow a strict scan order, which breaks parallelization.
We propose a parallelizable checkerboard context model (CCM) to solve the
problem. Our two-pass checkerboard context calculation eliminates such
limitations on spatial locations by re-organizing the decoding order. Speeding
up the decoding process more than 40 times in our experiments, it achieves
significantly improved computational efficiency with almost the same
rate-distortion performance. To the best of our knowledge, this is the first
exploration of a parallelization-friendly spatial context model for learned image
compression.
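The two-pass checkerboard idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, grid shapes, and the masked-convolution detail are illustrative assumptions. The key property is that the latent grid splits into two interleaved sets, so each pass can be decoded fully in parallel:

```python
import numpy as np

def checkerboard_masks(h, w):
    """Return boolean masks for the two interleaved checkerboard sets.

    Pass 1 decodes the "anchor" positions in parallel, using only
    hyperprior information. Pass 2 decodes the remaining "non-anchor"
    positions in parallel, each conditioned on its already-decoded
    anchor neighbors (e.g. via a masked convolution).
    """
    ys, xs = np.indices((h, w))
    anchor = (ys + xs) % 2 == 0  # one color of the checkerboard
    return anchor, ~anchor

h, w = 4, 6
anchor, non_anchor = checkerboard_masks(h, w)

# Every in-bounds 4-neighbor of a non-anchor position is an anchor,
# so pass 2 always has decoded spatial context available.
for y, x in zip(*np.nonzero(non_anchor)):
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            assert anchor[ny, nx]

print(anchor.astype(int))
```

With a raster-scan autoregressive model, decoding an h×w latent grid requires h·w sequential steps; with this partition it requires only two, which is the source of the reported 40x-plus decoding speedup.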
Related papers
- AdaBM: On-the-Fly Adaptive Bit Mapping for Image Super-Resolution [53.23803932357899]
We introduce the first on-the-fly adaptive quantization framework that accelerates the processing time from hours to seconds.
We achieve competitive performance with the previous adaptive quantization methods, while the processing time is accelerated by x2000.
arXiv Detail & Related papers (2024-04-04T08:37:27Z) - Corner-to-Center Long-range Context Model for Efficient Learned Image
Compression [70.0411436929495]
In the framework of learned image compression, the context model plays a pivotal role in capturing the dependencies among latent representations.
We propose the Corner-to-Center transformer-based Context Model (C^3M), designed to enhance context and latent predictions.
In addition, to enlarge the receptive field in the analysis and synthesis transformation, we use the Long-range Crossing Attention Module (LCAM) in the encoder/decoder.
arXiv Detail & Related papers (2023-11-29T21:40:28Z) - Efficient Contextformer: Spatio-Channel Window Attention for Fast
Context Modeling in Learned Image Compression [1.9249287163937978]
We introduce the Efficient Contextformer (eContextformer), a transformer-based autoregressive context model for learned image compression.
It fuses patch-wise, checkered, and channel-wise grouping techniques for parallel context modeling.
It achieves 145x lower model complexity, 210x faster decoding speed, and higher average bit savings on the Kodak, CLIC, and Tecnick datasets.
arXiv Detail & Related papers (2023-06-25T16:29:51Z) - Multistage Spatial Context Models for Learned Image Compression [19.15884180604451]
We present a series of multistage spatial context models allowing both fast decoding and better RD performance.
The proposed method features a comparable decoding speed to Checkerboard while reaching the RD performance of Autoregressive.
arXiv Detail & Related papers (2023-02-18T08:55:54Z) - μSplit: efficient image decomposition for microscopy data [50.794670705085835]
μSplit is a dedicated approach for trained image decomposition in the context of fluorescence microscopy images.
We introduce lateral contextualization (LC), a novel meta-architecture that enables the memory-efficient incorporation of large image context.
We apply μSplit to five decomposition tasks: one on a synthetic dataset and four derived from real microscopy data.
arXiv Detail & Related papers (2022-11-23T11:26:24Z) - Asymmetric Learned Image Compression with Multi-Scale Residual Block,
Importance Map, and Post-Quantization Filtering [15.056672221375104]
Deep learning-based image compression has achieved better rate-distortion (R-D) performance than the latest traditional method, H.266/VVC.
Many leading learned schemes cannot maintain a good trade-off between performance and complexity.
We propose an efficient and effective image coding framework, which achieves similar R-D performance with lower complexity than the state of the art.
arXiv Detail & Related papers (2022-06-21T09:34:29Z) - ELIC: Efficient Learned Image Compression with Unevenly Grouped
Space-Channel Contextual Adaptive Coding [9.908820641439368]
We propose an efficient model, ELIC, to achieve state-of-the-art speed and compression ability.
With superior performance, the proposed model also supports extremely fast preview decoding and progressive decoding.
arXiv Detail & Related papers (2022-03-21T11:19:50Z) - An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy
Image Compression Systems [73.48927855855219]
Recent advances in deep learning have resulted in image compression algorithms that outperform JPEG and JPEG 2000 on the standard Kodak benchmark.
In this paper, we perform the first large-scale comparison of recent state-of-the-art hybrid neural compression algorithms.
arXiv Detail & Related papers (2022-01-27T19:47:51Z) - Learning True Rate-Distortion-Optimization for End-To-End Image
Compression [59.816251613869376]
Rate-distortion optimization is a crucial part of traditional image and video compression.
In this paper, we enhance training by introducing low-complexity estimations of the RDO result into the training process.
We achieve average rate savings of 19.6% in MS-SSIM over the previous RDONet model, which equals rate savings of 27.3% over a comparable conventional deep image coder.
arXiv Detail & Related papers (2022-01-05T13:02:00Z) - Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image-compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.