Joint Global and Local Hierarchical Priors for Learned Image Compression
- URL: http://arxiv.org/abs/2112.04487v1
- Date: Wed, 8 Dec 2021 06:17:37 GMT
- Title: Joint Global and Local Hierarchical Priors for Learned Image Compression
- Authors: Jun-Hyuk Kim, Byeongho Heo, and Jong-Seok Lee
- Abstract summary: Recently, learned image compression methods have shown superior performance compared to traditional hand-crafted image codecs.
We propose a novel entropy model called Information Transformer (Informer) that exploits both local and global information in a content-dependent manner.
Our experiments demonstrate that Informer improves rate-distortion performance over the state-of-the-art methods on the Kodak and Tecnick datasets.
- Score: 30.44884350320053
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recently, learned image compression methods have shown superior performance
compared to traditional hand-crafted image codecs, including BPG. One of the
fundamental research directions in learned image compression is to develop
entropy models that accurately estimate the probability distribution of the
quantized latent representation. As in other vision tasks, most recent learned
entropy models are based on convolutional neural networks (CNNs). However,
CNNs are limited in modeling dependencies between distant regions due to their
local connectivity, which can be a significant bottleneck in image compression,
where reducing spatial redundancy is a key objective. To address this issue, we
propose a novel entropy model called
Information Transformer (Informer) that exploits both local and global
information in a content-dependent manner using an attention mechanism. Our
experiments demonstrate that Informer improves rate-distortion performance over
the state-of-the-art methods on the Kodak and Tecnick datasets without the
quadratic computational complexity problem.
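To make the idea concrete, here is a minimal sketch of an entropy model that fuses local convolutional context with content-dependent global attention over the quantized latents. It illustrates the general technique only, not the authors' Informer architecture: all module names, channel sizes, and the naive full attention (which is quadratic in the number of latent positions, precisely the cost the paper avoids) are assumptions.

```python
# Minimal sketch of an entropy model that combines local (convolutional)
# and global (attention-based) context to predict the mean and scale of
# each quantized latent. Names and sizes are illustrative assumptions,
# not the Informer architecture from the paper.
import torch
import torch.nn as nn

class JointContextEntropyModel(nn.Module):
    def __init__(self, channels: int = 192):
        super().__init__()
        # Local branch: small receptive field, models nearby dependencies.
        self.local = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        # Global branch: attention over all spatial positions, so distant
        # regions can inform the probability estimate.
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=4,
                                          batch_first=True)
        # Fuse both contexts into Gaussian parameters (mean, scale).
        self.fuse = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, y_hat: torch.Tensor):
        b, c, h, w = y_hat.shape
        local_ctx = self.local(y_hat)
        tokens = y_hat.flatten(2).transpose(1, 2)        # (B, H*W, C)
        global_ctx, _ = self.attn(tokens, tokens, tokens)
        global_ctx = global_ctx.transpose(1, 2).reshape(b, c, h, w)
        params = self.fuse(torch.cat([local_ctx, global_ctx], dim=1))
        mean, scale = params.chunk(2, dim=1)
        return mean, nn.functional.softplus(scale)       # scale must be positive

y_hat = torch.round(torch.randn(1, 192, 16, 16))          # fake quantized latent
mean, scale = JointContextEntropyModel()(y_hat)
```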
Related papers
- An Information-Theoretic Regularizer for Lossy Neural Image Compression [20.939331919455935]
Lossy image compression networks aim to minimize the latent entropy of images while adhering to specific distortion constraints.
We propose a novel structural regularization method for the neural image compression task by incorporating the negative conditional source entropy into the training objective.
arXiv Detail & Related papers (2024-11-23T05:19:27Z)
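One plausible form of such a regularized objective is sketched below. The weighting scheme and the way the conditional entropy estimate is computed are assumptions, not the paper's exact formulation.

```python
# Sketch of a rate-distortion loss augmented with an entropy-based
# regularizer. `rate`, `distortion`, and `cond_entropy_est` are assumed
# to be computed elsewhere; the paper's exact term is not reproduced here.
import torch

def rd_loss_with_regularizer(rate: torch.Tensor,
                             distortion: torch.Tensor,
                             cond_entropy_est: torch.Tensor,
                             lam: float = 0.01,
                             beta: float = 0.001) -> torch.Tensor:
    # Standard learned-compression objective: rate + lambda * distortion.
    base = rate + lam * distortion
    # One reading of "incorporating the negative conditional source
    # entropy": add -H as a structural regularization term.
    return base - beta * cond_entropy_est
```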
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Multi-Context Dual Hyper-Prior Neural Image Compression [10.349258638494137]
We propose a Transformer-based nonlinear transform to efficiently capture both local and global information from the input image.
We also introduce a novel entropy model that incorporates two different hyperpriors to model cross-channel and spatial dependencies of the latent representation.
Our experiments show that our proposed framework performs better than the state-of-the-art methods in terms of rate-distortion performance.
arXiv Detail & Related papers (2023-09-19T17:44:44Z)
- Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets that capture valid information within a content-conditioned range to aid the transform.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z)
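Content-conditioned kernel offsets of this kind are typically realized with deformable convolution. Below is a minimal sketch in that spirit using torchvision's deform_conv2d; the module name, sizes, and single offset group are illustrative assumptions, not the paper's transform.

```python
# Minimal sketch of content-conditioned spatial aggregation via deformable
# convolution: a small conv predicts per-position kernel offsets from the
# content, and the main kernel then samples at those offset locations.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class AdaptiveAggregation(nn.Module):
    def __init__(self, channels: int = 64, k: int = 3):
        super().__init__()
        # 2 offsets (dy, dx) per kernel tap, predicted from the content.
        self.offset_pred = nn.Conv2d(channels, 2 * k * k, 3, padding=1)
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(x)                 # (B, 2*k*k, H, W)
        return deform_conv2d(x, offsets, self.weight,
                             padding=self.k // 2)

x = torch.randn(1, 64, 32, 32)
y = AdaptiveAggregation()(x)                          # same spatial size
```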
- The Devil Is in the Details: Window-based Attention for Image Compression [58.1577742463617]
Most existing learned image compression models are based on Convolutional Neural Networks (CNNs).
In this paper, we study the effects of multiple kinds of attention mechanisms for local feature learning, then introduce a more straightforward yet effective window-based local attention block.
The proposed window-based attention is very flexible and can work as a plug-and-play component to enhance CNN and Transformer models.
arXiv Detail & Related papers (2022-03-16T07:55:49Z)
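A window-based local attention block of this kind can be sketched as follows: partition the feature map into non-overlapping windows and run self-attention inside each, keeping the cost linear in the number of windows. Window size, dimensions, and head count are assumptions; this shows the generic mechanism, not the paper's exact block.

```python
# Sketch of a window-based local attention block. The feature map is split
# into non-overlapping s x s windows and attention runs within each window.
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    def __init__(self, dim: int = 96, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        s = self.window
        # Partition into (h//s * w//s) windows of s*s tokens each.
        x = x.reshape(b, c, h // s, s, w // s, s)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, s * s, c)
        x, _ = self.attn(x, x, x)                     # attention within windows
        # Undo the partition back to (B, C, H, W).
        x = x.reshape(b, h // s, w // s, s, s, c)
        x = x.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
        return x

y = WindowAttention()(torch.randn(1, 96, 32, 32))     # h, w must be divisible by 8
```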
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
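The basic INR recipe can be sketched as: overfit a small coordinate MLP to one image, then compress its weights. The network size, step count, and naive uniform quantization below are placeholders for the paper's full pipeline, which adds quantization-aware retraining and entropy coding.

```python
# Sketch of the INR idea: fit a small MLP mapping pixel coordinates to
# colors, then quantize the network weights (a stand-in for the paper's
# full quantization + retraining + entropy-coding pipeline).
import torch
import torch.nn as nn

def fit_inr(image: torch.Tensor, steps: int = 1000) -> nn.Module:
    h, w, _ = image.shape                              # image in [0, 1], (H, W, 3)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
    net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 3), nn.Sigmoid())
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    target = image.reshape(-1, 3)
    for _ in range(steps):                             # overfit to this one image
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(coords), target)
        loss.backward()
        opt.step()
    with torch.no_grad():                              # naive weight quantization
        for p in net.parameters():
            p.copy_(torch.round(p * 256) / 256)
    return net
```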
- Causal Contextual Prediction for Learned Image Compression [36.08393281509613]
We propose the concept of separate entropy coding to leverage a serial decoding process for causal contextual entropy prediction in the latent space.
A causal context model is proposed that separates the latents across channels and makes use of cross-channel relationships to generate highly informative contexts.
We also propose a causal global prediction model, which is able to find global reference points for accurate predictions of unknown points.
arXiv Detail & Related papers (2020-11-19T08:15:10Z)
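A channel-wise causal context of this general flavor can be sketched by splitting the latent channels into slices and predicting each slice's entropy parameters from the previously decoded slices. Slice count and layer shapes are assumptions, and the paper's global prediction component is not shown.

```python
# Sketch of a channel-wise causal context: entropy parameters of slice i
# are predicted only from already-decoded slices 0..i-1, so decoding
# remains causal across channels.
import torch
import torch.nn as nn

class ChannelCausalContext(nn.Module):
    def __init__(self, channels: int = 192, slices: int = 4):
        super().__init__()
        self.slice_c = channels // slices
        # One predictor per slice; slice 0 uses an unconditional prior.
        self.predictors = nn.ModuleList(
            nn.Conv2d(max(i, 1) * self.slice_c, 2 * self.slice_c, 1)
            for i in range(slices))
        self.prior = nn.Parameter(torch.zeros(1, 2 * self.slice_c, 1, 1))

    def forward(self, y_hat: torch.Tensor):
        params, decoded = [], []
        for i, pred in enumerate(self.predictors):
            if i == 0:
                p = self.prior.expand(y_hat.size(0), -1,
                                      y_hat.size(2), y_hat.size(3))
            else:
                p = pred(torch.cat(decoded, dim=1))    # condition on past slices
            params.append(p)
            decoded.append(y_hat[:, i * self.slice_c:(i + 1) * self.slice_c])
        return torch.cat(params, dim=1)                # (B, 2*C, H, W) mean/scale
```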
- Learning Accurate Entropy Model with Global Reference for Image Compression [22.171750277528222]
We propose a novel Global Reference Model for image compression to leverage both the local and the global context information.
A by-product of this work is a mean-shifting GDN module that further improves performance.
arXiv Detail & Related papers (2020-10-16T11:27:46Z)
- Learning Context-Based Non-local Entropy Modeling for Image Compression [140.64888994506313]
In this paper, we propose a non-local operation for context modeling by employing the global similarity within the context.
The entropy model is further adopted as the rate loss in a joint rate-distortion optimization.
Considering that the width of the transforms is essential in training low-distortion models, we introduce a U-Net block in the transforms to increase the width with manageable memory consumption and time complexity.
arXiv Detail & Related papers (2020-05-10T13:28:18Z)
- Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
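For reference, a single-level hyperprior, the building block that a coarse-to-fine model stacks hierarchically, can be sketched as below. Layer shapes and the Gaussian-scale parameterization are common conventions, not the benchmark paper's exact model.

```python
# Sketch of a single-level hyperprior: a side network summarizes the
# latent y into z, and the decoded z predicts the per-element scale of y
# for entropy coding. A coarse-to-fine model repeats this idea over
# multiple levels.
import torch
import torch.nn as nn

class Hyperprior(nn.Module):
    def __init__(self, c: int = 192):
        super().__init__()
        self.h_enc = nn.Sequential(                    # y -> z (downsample x4)
            nn.Conv2d(c, c, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(c, c, 3, stride=2, padding=1))
        self.h_dec = nn.Sequential(                    # z_hat -> scales (upsample x4)
            nn.ConvTranspose2d(c, c, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(c, c, 4, stride=2, padding=1))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        z_hat = torch.round(self.h_enc(y))             # quantized side info
        return nn.functional.softplus(self.h_dec(z_hat))  # positive scales

scale = Hyperprior()(torch.randn(1, 192, 16, 16))      # (1, 192, 16, 16)
```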
This list is automatically generated from the titles and abstracts of the papers on this site.