DeepFGS: Fine-Grained Scalable Coding for Learned Image Compression
- URL: http://arxiv.org/abs/2201.01173v1
- Date: Tue, 4 Jan 2022 15:03:13 GMT
- Title: DeepFGS: Fine-Grained Scalable Coding for Learned Image Compression
- Authors: Yi Ma, Yongqi Zhai and Ronggang Wang
- Abstract summary: We propose the first learned fine-grained scalable image compression model (DeepFGS).
In this paper, we introduce a feature separation backbone to divide the image information into basic and scalable features, then redistribute the features channel by channel through an information rearrangement strategy.
Experiments demonstrate that our DeepFGS outperforms all learning-based scalable image compression models and conventional scalable image codecs in PSNR and MS-SSIM metrics.
- Score: 22.933872281183497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scalable coding, which can adapt to channel bandwidth variation, performs
well in today's complex network environment. However, the existing scalable
compression methods face two challenges: reduced compression performance and
insufficient scalability. In this paper, we propose the first learned
fine-grained scalable image compression model (DeepFGS) to overcome the above
two shortcomings. Specifically, we introduce a feature separation backbone to
divide the image information into basic and scalable features, then
redistribute the features channel by channel through an information
rearrangement strategy. In this way, we can generate a continuously scalable
bitstream via one-pass encoding. In addition, we reuse the decoder to reduce
the parameters and computational complexity of DeepFGS. Experiments demonstrate
that our DeepFGS outperforms all learning-based scalable image compression
models and conventional scalable image codecs in PSNR and MS-SSIM metrics. To
the best of our knowledge, our DeepFGS is the first exploration of learned
fine-grained scalable coding, which achieves the finest scalability compared
with learning-based methods.
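The fine-grained scalability claimed above — one-pass encoding that yields a bitstream truncatable at essentially any point, with quality growing as more bits arrive — can be illustrated by analogy with any progressive representation. The sketch below is not DeepFGS's actual transform; it uses a truncated SVD as a hypothetical stand-in, where decoding a longer prefix of "layers" monotonically refines the reconstruction:

```python
import numpy as np

# Hedged analogy (not the DeepFGS architecture): a progressive
# representation in which any prefix of components decodes to a valid
# image, and reconstruction error shrinks as more components arrive.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
U, s, Vt = np.linalg.svd(img, full_matrices=False)

def decode_prefix(k):
    # Reconstruct from only the first k components ("layers").
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

errors = [np.linalg.norm(img - decode_prefix(k)) for k in range(1, 33)]
# Each additional layer refines the reconstruction: error is non-increasing.
assert all(e1 >= e2 - 1e-9 for e1, e2 in zip(errors, errors[1:]))
```

In DeepFGS the analogous ordering is learned: the feature separation backbone and information rearrangement strategy decide which channels carry the basic layer and which carry progressively finer refinements.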
Related papers
- You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
LIC methods fail to explicitly explore the image structure and texture components crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies masked image modeling (MIM) and LIC end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z) - Exploring Resolution Fields for Scalable Image Compression with Uncertainty Guidance [47.96024424475888]
In this work, we explore the potential of resolution fields in scalable image compression.
We propose the reciprocal pyramid network (RPN) that fulfills the need for more adaptable and versatile compression.
Experiments show the superiority of RPN against existing classical and deep learning-based scalable codecs.
arXiv Detail & Related papers (2023-06-15T08:26:24Z) - Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem from the perspective of VAEs.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
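The error bound holds because uniformly quantizing residuals with bin width 2τ+1 keeps every reconstructed value within τ of the original. A minimal numpy sketch (hypothetical helper name, not the DLPR implementation):

```python
import numpy as np

def quantize_residual(r, tau):
    """Uniformly quantize integer residuals with bin width 2*tau + 1,
    guaranteeing |r - r_hat| <= tau for every element."""
    step = 2 * tau + 1
    q = np.round(r / step).astype(np.int64)  # symbols to be entropy-coded
    r_hat = q * step                         # dequantized residual
    return q, r_hat

rng = np.random.default_rng(0)
residuals = rng.integers(-100, 101, size=10_000)
q, r_hat = quantize_residual(residuals, tau=2)
assert np.max(np.abs(residuals - r_hat)) <= 2
```

With tau=0 the bin width is 1 and the scheme degenerates to lossless residual coding, matching the paper's unified lossless/near-lossless framing.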
arXiv Detail & Related papers (2022-09-11T12:11:56Z) - Asymmetric Learned Image Compression with Multi-Scale Residual Block, Importance Map, and Post-Quantization Filtering [15.056672221375104]
Deep learning-based image compression has achieved better rate-distortion (R-D) performance than the latest traditional method, H.266/VVC.
Many leading learned schemes cannot maintain a good trade-off between performance and complexity.
We propose an efficient and effective image coding framework, which achieves similar R-D performance with lower complexity than the state of the art.
arXiv Detail & Related papers (2022-06-21T09:34:29Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - Enhanced Invertible Encoding for Learned Image Compression [40.21904131503064]
In this paper, we propose an enhanced Invertible Encoding Network with invertible neural networks (INNs) to largely mitigate the information loss problem for better compression.
Experimental results on the Kodak, CLIC, and Tecnick datasets show that our method outperforms the existing learned image compression methods.
arXiv Detail & Related papers (2021-08-08T17:32:10Z) - How to Exploit the Transferability of Learned Image Compression to Conventional Codecs [25.622863999901874]
We show how learned image coding can be used as a surrogate to optimize an image for encoding.
Our approach can remodel a conventional image to adjust for the MS-SSIM distortion with over 20% rate improvement without any decoding overhead.
arXiv Detail & Related papers (2020-12-03T12:34:51Z) - Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z) - Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image-compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.