DeepFGS: Fine-Grained Scalable Coding for Learned Image Compression
- URL: http://arxiv.org/abs/2201.01173v1
- Date: Tue, 4 Jan 2022 15:03:13 GMT
- Title: DeepFGS: Fine-Grained Scalable Coding for Learned Image Compression
- Authors: Yi Ma, Yongqi Zhai and Ronggang Wang
- Abstract summary: We propose the first learned fine-grained scalable image compression model (DeepFGS).
In this paper, we introduce a feature separation backbone to divide the image information into basic and scalable features, and then redistribute the features channel by channel through an information rearrangement strategy.
Experiments demonstrate that our DeepFGS outperforms all learning-based scalable image compression models and conventional scalable image codecs in PSNR and MS-SSIM metrics.
- Score: 22.933872281183497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scalable coding, which can adapt to channel bandwidth variation, performs
well in today's complex network environment. However, the existing scalable
compression methods face two challenges: reduced compression performance and
insufficient scalability. In this paper, we propose the first learned
fine-grained scalable image compression model (DeepFGS) to overcome the above
two shortcomings. Specifically, we introduce a feature separation backbone to
divide the image information into basic and scalable features, then
redistribute the features channel by channel through an information
rearrangement strategy. In this way, we can generate a continuously scalable
bitstream via one-pass encoding. In addition, we reuse the decoder to reduce
the parameters and computational complexity of DeepFGS. Experiments demonstrate
that our DeepFGS outperforms all learning-based scalable image compression
models and conventional scalable image codecs in PSNR and MS-SSIM metrics. To
the best of our knowledge, our DeepFGS is the first exploration of learned
fine-grained scalable coding, which achieves the finest scalability compared
with learning-based methods.
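The continuously scalable bitstream described above can be illustrated with a minimal sketch: a latent split into a basic layer plus per-channel scalable layers, truncated to fit any byte budget. The function names and the byte-count representation are hypothetical illustrations, not the actual DeepFGS architecture.

```python
# Illustrative sketch of fine-grained scalability via channel truncation.
# A "latent" is modeled as a list of per-channel compressed sizes in bytes.

def split_features(latent, basic_channels):
    """Split a latent into a basic layer and scalable per-channel layers."""
    return latent[:basic_channels], latent[basic_channels:]

def truncate_bitstream(basic, scalable, budget_bytes):
    """Always keep the basic layer, then append as many scalable channels
    as the byte budget allows; finer channel granularity means finer
    rate adaptation from a single one-pass encoding."""
    kept = list(basic)
    used = sum(basic)
    for ch in scalable:
        if used + ch > budget_bytes:
            break
        kept.append(ch)
        used += ch
    return kept, used
```

For example, with 8 basic channels of 100 bytes and 24 scalable channels of 50 bytes, a 1000-byte budget keeps the basic layer plus four scalable channels; every extra channel in the budget refines the reconstruction without re-encoding.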
Related papers
- CALLIC: Content Adaptive Learning for Lossless Image Compression [64.47244912937204]
CALLIC sets a new state-of-the-art (SOTA) for learned lossless image compression.
We propose a content-aware autoregressive self-attention mechanism by leveraging convolutional gating operations.
During encoding, we decompose pre-trained layers, including depth-wise convolutions, using low-rank matrices, and then adapt the incremental weights on the testing image by Rate-guided Progressive Fine-Tuning (RPFT).
RPFT fine-tunes with gradually increasing patches that are sorted in descending order by estimated entropy, optimizing the learning process and reducing adaptation time.
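The RPFT patch ordering described above can be sketched as follows. The histogram-based entropy proxy and the function names are assumptions for illustration; the paper's actual rate-guided estimator is not specified here.

```python
import math

def estimated_entropy(patch):
    """Shannon entropy of a patch's pixel-value histogram (a simple proxy)."""
    counts = {}
    for v in patch:
        counts[v] = counts.get(v, 0) + 1
    n = len(patch)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def rpft_schedule(patches, steps):
    """Sort patches by descending estimated entropy, then yield gradually
    growing training subsets, so high-entropy (hard) patches are seen first."""
    ranked = sorted(patches, key=estimated_entropy, reverse=True)
    step = max(1, len(ranked) // steps)
    for k in range(step, len(ranked) + 1, step):
        yield ranked[:k]
```

Fine-tuning on progressively larger, entropy-ranked subsets front-loads the patches that dominate the rate, which is what lets adaptation time shrink without sacrificing the rate estimate.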
arXiv Detail & Related papers (2024-12-23T10:41:18Z) - DeepFGS: Fine-Grained Scalable Coding for Learned Image Compression [27.834491128701963]
This paper proposes a learned fine-grained scalable image compression framework, namely DeepFGS.
For entropy coding, we design a mutual entropy model to fully explore the correlation between the basic and scalable features.
Experiments demonstrate that our proposed DeepFGS outperforms previous learning-based scalable image compression models.
arXiv Detail & Related papers (2024-11-30T11:19:38Z) - Large Language Models for Lossless Image Compression: Next-Pixel Prediction in Language Space is All You Need [53.584140947828004]
A large language model (LLM) with unprecedented intelligence is a general-purpose lossless compressor for various data modalities.
We propose P$^2$-LLM, a next-pixel prediction-based LLM, which integrates various elaborated insights and methodologies.
Experiments on benchmark datasets demonstrate that P$^2$-LLM can beat SOTA classical and learned codecs.
arXiv Detail & Related papers (2024-11-19T12:15:40Z) - You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
However, existing LIC methods fail to explicitly explore the image structure and texture components crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies masked image modeling and LIC end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z) - Exploring Resolution Fields for Scalable Image Compression with Uncertainty Guidance [47.96024424475888]
In this work, we explore the potential of resolution fields in scalable image compression.
We propose the reciprocal pyramid network (RPN) that fulfills the need for more adaptable and versatile compression.
Experiments show the superiority of RPN against existing classical and deep learning-based scalable codecs.
arXiv Detail & Related papers (2023-06-15T08:26:24Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
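The quantization step in the INR pipeline summarized above can be sketched with a uniform scalar quantizer. This is an illustrative stand-in, not the paper's actual pipeline, which also includes quantization-aware retraining and entropy coding.

```python
def quantize(weights, bits):
    """Uniform scalar quantization of a weight list to 2**bits levels.
    Returns the integer indices (what would be entropy coded) and the
    dequantized weights (what the INR decoder would actually use)."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]
    deq = [lo + qi * scale for qi in q]
    return q, deq
```

Fewer bits shrink the bitstream but grow the reconstruction error of the implicit network's weights, which is the rate-distortion trade-off the retraining stage compensates for.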
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - Enhanced Invertible Encoding for Learned Image Compression [40.21904131503064]
In this paper, we propose an enhanced Invertible Encoding Network with invertible neural networks (INNs) to largely mitigate the information loss problem for better compression.
Experimental results on the Kodak, CLIC, and Tecnick datasets show that our method outperforms the existing learned image compression methods.
arXiv Detail & Related papers (2021-08-08T17:32:10Z) - How to Exploit the Transferability of Learned Image Compression to Conventional Codecs [25.622863999901874]
We show how learned image coding can be used as a surrogate to optimize an image for encoding.
Our approach can remodel a conventional image to adjust for the MS-SSIM distortion with over 20% rate improvement without any decoding overhead.
arXiv Detail & Related papers (2020-12-03T12:34:51Z) - Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.