Exploring the Rate-Distortion-Complexity Optimization in Neural Image Compression
- URL: http://arxiv.org/abs/2305.07678v1
- Date: Fri, 12 May 2023 03:56:25 GMT
- Title: Exploring the Rate-Distortion-Complexity Optimization in Neural Image Compression
- Authors: Yixin Gao, Runsen Feng, Zongyu Guo, Zhibo Chen
- Abstract summary: We study the rate-distortion-complexity (RDC) optimization in neural image compression.
By quantifying the decoding complexity as a factor in the optimization goal, we are now able to precisely control the RDC trade-off.
A variable-complexity neural codec is designed to leverage spatial dependencies adaptively according to industrial demands.
- Score: 26.1947289647201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite a short history, neural image codecs have been shown to surpass
classical image codecs in terms of rate-distortion performance. However, most
of them suffer from significantly longer decoding times, which hinders the
practical applications of neural image codecs. This issue is especially
pronounced when employing an effective yet time-consuming autoregressive
context model since it would increase entropy decoding time by orders of
magnitude. In this paper, unlike most previous works that pursue optimal RD
performance while temporarily overlooking the coding complexity, we conduct a
systematic investigation of the rate-distortion-complexity (RDC) optimization
in neural image compression. By quantifying the decoding complexity as a factor
in the optimization goal, we are now able to precisely control the RDC
trade-off and then demonstrate how the rate-distortion performance of neural
image codecs could adapt to various complexity demands. Going beyond the
investigation of RDC optimization, a variable-complexity neural codec is
designed to leverage the spatial dependencies adaptively according to
industrial demands, which supports fine-grained complexity adjustment by
balancing the RDC trade-off. By implementing this scheme in a powerful base
model, we demonstrate the feasibility and flexibility of RDC optimization for
neural image codecs.
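The abstract does not spell out the training objective, but it describes quantifying decoding complexity as a factor in the optimization goal. A minimal sketch, assuming the common weighted-sum (Lagrangian) form; the exact complexity measure and weighting used in the paper may differ:

  \mathcal{L} = R + \lambda \cdot D + \gamma \cdot C

where R is the bit rate, D the reconstruction distortion, C the quantified decoding-complexity term, and \lambda, \gamma are trade-off weights that select a target operating point on the RDC surface.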
Related papers
- Neural Image Compression with Quantization Rectifier [7.097091519502871]
We develop a novel quantization rectifier (QR) method for image compression that leverages image feature correlation to mitigate the impact of quantization.
Our method designs a neural network architecture that predicts unquantized features from the quantized ones.
In evaluation, we integrate QR into state-of-the-art neural image codecs and compare enhanced models and baselines on the widely-used Kodak benchmark.
arXiv Detail & Related papers (2024-03-25T22:26:09Z) - An Efficient Implicit Neural Representation Image Codec Based on Mixed Autoregressive Model for Low-Complexity Decoding [43.43996899487615]
Implicit Neural Representation (INR) for image compression is an emerging technology that offers two key benefits compared to cutting-edge autoencoder models.
We introduce a new Mixed AutoRegressive Model (MARM) to significantly reduce the decoding time of current INR codecs.
MARM includes our proposed AutoRegressive Upsampler (ARU) blocks, which are highly efficient, and ARM from previous work to balance decoding time and reconstruction quality.
arXiv Detail & Related papers (2024-01-23T09:37:58Z) - ConvNeXt-ChARM: ConvNeXt-based Transform for Efficient Neural Image
Compression [18.05997169440533]
We propose ConvNeXt-ChARM, an efficient ConvNeXt-based transform coding framework, paired with a compute-efficient channel-wise auto-regressive prior.
We show that ConvNeXt-ChARM brings consistent and significant BD-rate (PSNR) reductions, estimated at 5.24% and 1.22% on average, over the versatile video coding (VVC) reference encoder (VTM-18.0) and the state-of-the-art learned image compression method SwinT-ChARM.
arXiv Detail & Related papers (2023-07-12T11:45:54Z) - Joint Hierarchical Priors and Adaptive Spatial Resolution for Efficient
Neural Image Compression [11.25130799452367]
We propose an absolute image compression transformer (ICT) for neural image compression (NIC).
ICT captures both global and local contexts from the latent representations and better parameterizes the distribution of the quantized latents.
Our framework significantly improves the trade-off between coding efficiency and decoder complexity over the versatile video coding (VVC) reference encoder (VTM-18.0) and the neural codec SwinT-ChARM.
arXiv Detail & Related papers (2023-07-05T13:17:14Z) - Convolutional Neural Generative Coding: Scaling Predictive Coding to
Natural Images [79.07468367923619]
We develop convolutional neural generative coding (Conv-NGC).
We implement a flexible neurobiologically-motivated algorithm that progressively refines latent state maps.
We study the effectiveness of our brain-inspired neural system on the tasks of reconstruction and image denoising.
arXiv Detail & Related papers (2022-11-22T06:42:41Z) - Neural Data-Dependent Transform for Learned Image Compression [72.86505042102155]
We build a neural data-dependent transform and introduce a continuous online mode decision mechanism to jointly optimize the coding efficiency for each individual image.
The experimental results show the effectiveness of the proposed neural-syntax design and the continuous online mode decision mechanism.
arXiv Detail & Related papers (2022-03-09T14:56:48Z) - Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG
Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We make efforts to formulate the essential mathematical functions that describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z) - Substitutional Neural Image Compression [48.20906717052056]
Substitutional Neural Image Compression (SNIC) is a general approach for enhancing any neural image compression model.
It boosts compression performance toward a flexible distortion metric and enables bit-rate control using a single model instance.
arXiv Detail & Related papers (2021-05-16T20:53:31Z) - Slimmable Compressive Autoencoders for Practical Neural Image
Compression [20.715312224456138]
We propose slimmable compressive autoencoders (SlimCAEs) for practical image compression.
SlimCAEs are highly flexible models that provide excellent rate-distortion performance, variable rate, and dynamic adjustment of memory, computational cost and latency.
arXiv Detail & Related papers (2021-03-29T16:12:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.