Implicit Neural Representations for Image Compression
- URL: http://arxiv.org/abs/2112.04267v1
- Date: Wed, 8 Dec 2021 13:02:53 GMT
- Title: Implicit Neural Representations for Image Compression
- Authors: Yannick Strümpler, Janis Postels, Ren Yang, Luc van Gool, Federico Tombari
- Abstract summary: Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
- Score: 103.78615661013623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Implicit Neural Representations (INRs) have gained attention as a
novel and effective representation for various data types. Thus far, prior work has
mostly focused on optimizing their reconstruction performance. This work investigates
INRs from a novel perspective, i.e., as a tool for image compression. To this end, we
propose the first comprehensive compression pipeline based on INRs, including
quantization, quantization-aware retraining and entropy coding. Encoding with INRs,
i.e. overfitting to a data sample, is typically orders of magnitude slower than
encoding with conventional codecs. To mitigate this drawback, we leverage meta-learned
initializations based on MAML to perform the encoding in fewer gradient updates, which
also generally improves the rate-distortion performance of INRs. We find that our
approach to source compression with INRs vastly outperforms similar prior work, is
competitive with common compression algorithms designed specifically for images, and
closes the gap to state-of-the-art learned approaches based on Rate-Distortion
Autoencoders. Moreover, we provide an extensive ablation study on the importance of
individual components of our method, which we hope will facilitate future research on
this novel approach to image compression.
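As a rough illustration of the pipeline described in the abstract, the sketch below overfits a small SIREN-style coordinate MLP to a single image and then uniformly quantizes its weights. The architecture, bit width, and training schedule are illustrative assumptions; the entropy-coding and meta-learned-initialization stages are only indicated by comments.

```python
# Minimal sketch of INR-based image compression (illustrative, not the paper's exact setup):
# overfit a SIREN-style MLP to one image, then uniformly quantize its weights.
import torch
import torch.nn as nn

class Siren(nn.Module):
    def __init__(self, hidden=64, depth=3, w0=30.0):
        super().__init__()
        dims = [2] + [hidden] * depth + [3]           # (x, y) -> (R, G, B)
        self.w0 = w0
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, xy):
        h = xy
        for lin in self.layers[:-1]:
            h = torch.sin(self.w0 * lin(h))           # sinusoidal activations
        return self.layers[-1](h)

def fit_inr(image, steps=2000, lr=1e-3):
    """image: (H, W, 3) tensor in [0, 1]. Returns an INR overfitted to this image."""
    H, W, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    targets = image.reshape(-1, 3)
    net = Siren()                                     # a MAML-style meta-learned init could be loaded here
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(coords) - targets) ** 2).mean()  # encoding = overfitting to this one sample
        loss.backward()
        opt.step()
    return net

def quantize_weights(net, bits=8):
    """Uniform per-tensor weight quantization; entropy coding of the integers is omitted."""
    codes = {}
    for name, p in net.state_dict().items():
        scale = p.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
        codes[name] = (torch.round(p / scale), scale)  # integers + scale would then be entropy coded
    return codes
```

At decode time only the (entropy-decoded) weights and the architecture are needed; the image is reconstructed by evaluating the MLP on a coordinate grid.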
Related papers
- Streaming Neural Images [56.41827271721955]
Implicit Neural Representations (INRs) are a novel paradigm for signal representation that have attracted considerable interest for image compression.
In this work, we explore the critical yet overlooked limiting factors of INRs, such as computational cost, unstable performance, and robustness.
arXiv Detail & Related papers (2024-09-25T17:51:20Z)
- UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation [59.3877309501938]
Implicit Neural Representation (INR) networks have shown remarkable versatility due to their flexible compression ratios.
We introduce a codebook containing frequency domain information as a prior input to the INR network.
This enhances the representational power of INR and provides distinctive conditioning for different image blocks.
arXiv Detail & Related papers (2024-05-27T05:52:13Z)
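The UniCompress entry above mentions a codebook of frequency-domain information used as a prior input to the INR. The sketch below is a hypothetical rendering of that idea, clustering block-wise DCT statistics into a codebook and using the nearest code as extra conditioning; the block size, codebook size, and clustering method are assumptions, not the paper's design.

```python
# Hypothetical sketch: build a codebook of per-block DCT features and use the nearest
# code as a conditioning vector for an INR (roughly the idea in the UniCompress summary).
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

def block_dct_features(image, block=16):
    """image: (H, W) grayscale array. Returns one DCT-magnitude feature vector per block."""
    H, W = image.shape
    feats = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            coeffs = dctn(image[i:i + block, j:j + block], norm="ortho")
            feats.append(np.abs(coeffs).ravel())
    return np.stack(feats)

def build_codebook(images, n_codes=64, block=16):
    """Cluster block-wise frequency features into a small shared codebook."""
    feats = np.concatenate([block_dct_features(img, block) for img in images])
    return KMeans(n_clusters=n_codes, n_init=10).fit(feats).cluster_centers_

def condition_vector(block_feat, codebook):
    """Pick the nearest codebook entry; this vector would be concatenated to the INR input."""
    idx = np.argmin(((codebook - block_feat) ** 2).sum(axis=1))
    return codebook[idx]
```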
- Neural Image Compression with Quantization Rectifier [7.097091519502871]
We develop a novel quantization rectifier (QR) method for image compression that leverages image feature correlation to mitigate the impact of quantization.
Our method designs a neural network architecture that predicts unquantized features from the quantized ones.
In evaluation, we integrate QR into state-of-the-art neural image codecs and compare enhanced models and baselines on the widely-used Kodak benchmark.
arXiv Detail & Related papers (2024-03-25T22:26:09Z)
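For the quantization rectifier entry above, a minimal sketch of the stated idea, a network that predicts unquantized features from quantized ones, might look as follows; the residual-style architecture, channel count, and training loss are assumptions rather than the paper's design.

```python
# Minimal sketch of a quantization rectifier: a small network trained to recover
# unquantized latent features from their rounded versions (architecture is assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Rectifier(nn.Module):
    def __init__(self, channels=192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, y_hat):
        return y_hat + self.net(y_hat)   # predict a correction to the quantized features

def train_step(rectifier, opt, y):
    """y: unquantized latent (B, C, H, W) from a pretrained codec's encoder."""
    y_hat = torch.round(y)               # hard quantization as used at test time
    loss = F.mse_loss(rectifier(y_hat), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```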
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
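The semantic ensemble loss in the entry above combines Charbonnier, perceptual, style, and non-binary adversarial terms. A hedged sketch of such a weighted combination is shown below; the feature backbone, the least-squares stand-in for the non-binary adversarial term, and all weights are placeholder assumptions.

```python
# Hedged sketch of a weighted "ensemble" reconstruction loss mixing Charbonnier,
# perceptual, style, and a non-binary (least-squares style) adversarial term.
import torch
import torch.nn.functional as F

def charbonnier(x, y, eps=1e-3):
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def gram(feat):                                # (B, C, H, W) -> (B, C, C) Gram matrix
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def ensemble_loss(x_hat, x, feat_fn, disc, w=(1.0, 0.1, 0.05, 0.01)):
    """feat_fn: any pretrained feature extractor (e.g. VGG features); disc: a discriminator."""
    fx, fx_hat = feat_fn(x), feat_fn(x_hat)
    rec = charbonnier(x_hat, x)
    perc = F.mse_loss(fx_hat, fx)              # perceptual term in feature space
    style = F.mse_loss(gram(fx_hat), gram(fx)) # style term on Gram matrices
    adv = ((disc(x_hat) - 1.0) ** 2).mean()    # non-binary adversarial (LSGAN-style) generator term
    return w[0] * rec + w[1] * perc + w[2] * style + w[3] * adv
```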
- Beyond Learned Metadata-based Raw Image Reconstruction [86.1667769209103]
Raw images have distinct advantages over sRGB images, e.g., linearity and fine-grained quantization levels.
However, they are not widely adopted by general users due to their substantial storage requirements.
We propose a novel framework that learns a compact representation in the latent space, serving as metadata.
arXiv Detail & Related papers (2023-06-21T06:59:07Z)
- Reducing The Amortization Gap of Entropy Bottleneck In End-to-End Image Compression [2.1485350418225244]
End-to-end deep trainable models are about to exceed the performance of the traditional handcrafted compression techniques on videos and images.
We propose a simple yet efficient instance-based parameterization method to reduce this amortization gap at a minor cost.
arXiv Detail & Related papers (2022-09-02T11:43:45Z)
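As a rough illustration of the instance-adaptive idea in the amortization-gap entry above, the sketch below refines a single image's latent by gradient descent against a rate-distortion objective at encoding time. The `codec` interface (`encode`, `decode`, `likelihoods`) and the rate proxy are hypothetical, not the paper's parameterization.

```python
# Hedged sketch of per-instance refinement at encoding time: keep the pretrained codec
# fixed and optimize this image's latent against a rate-distortion objective.
# `codec` is a hypothetical object exposing encode/decode/likelihoods.
import torch

def refine_latent(codec, x, steps=100, lr=1e-2, lam=0.01):
    y = codec.encode(x).detach().requires_grad_(True)       # amortized latent as the starting point
    opt = torch.optim.Adam([y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        y_hat = y + (torch.round(y) - y).detach()            # straight-through rounding
        x_hat = codec.decode(y_hat)
        rate = -torch.log2(codec.likelihoods(y_hat)).sum()   # bits under the entropy model
        dist = ((x_hat - x) ** 2).mean()
        loss = dist + lam * rate / x.numel()
        loss.backward()
        opt.step()
    return torch.round(y.detach())                           # integers to be entropy coded
```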
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms trade reconstruction quality for file size by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
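For the last entry, optimizing quantization tables with deep learning and a minimax loss, a hedged sketch is given below. The differentiable rounding, the rate proxy, and the use of the maximum of distortion and rate penalties are assumptions about what such an objective could look like, not the paper's formulation.

```python
# Hedged sketch: learn an 8x8 JPEG-style quantization table by gradient descent,
# using soft rounding and a minimax-style objective max(distortion, lambda * rate proxy).
import torch

def soft_round(x):
    return x + (torch.round(x) - x).detach()              # straight-through estimator

def minimax_table_step(table_logits, dct_blocks, opt, lam=0.02):
    """dct_blocks: (N, 8, 8) DCT coefficients of training blocks."""
    q = torch.nn.functional.softplus(table_logits) + 1.0   # positive quantization steps
    coded = soft_round(dct_blocks / q)
    recon = coded * q
    distortion = ((recon - dct_blocks) ** 2).mean()
    rate_proxy = coded.abs().mean()                        # crude stand-in for coded size
    loss = torch.maximum(distortion, lam * rate_proxy)     # minimax: push down the worse term
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage: optimize the table parameters over many training blocks.
table_logits = torch.zeros(8, 8, requires_grad=True)
opt = torch.optim.Adam([table_logits], lr=1e-2)
```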
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.