Modeling Image Quantization Tradeoffs for Optimal Compression
- URL: http://arxiv.org/abs/2112.07207v1
- Date: Tue, 14 Dec 2021 07:35:22 GMT
- Title: Modeling Image Quantization Tradeoffs for Optimal Compression
- Authors: Johnathan Chiu
- Abstract summary: Lossy compression algorithms trade rate against distortion by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: All lossy compression algorithms employ a similar compression scheme --
a frequency-domain transform followed by quantization and a lossless encoding
scheme. They trade rate against distortion by quantizing high-frequency data to
increase compression rates, at the cost of higher image distortion. We propose a
new method of optimizing quantization tables using deep learning and a minimax
loss function that measures the tradeoff between rate and distortion (RD)
parameters more accurately than previous methods. We design a convolutional
neural network (CNN) that learns a mapping between image blocks and quantization
tables in an unsupervised manner. By processing images across all channels at
once, we also measure tradeoffs in information loss between different channels,
which yields stronger performance. We initially target optimization of JPEG
images but believe the approach can be extended to any lossy compressor.
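For intuition, here is a minimal sketch of the kind of setup the abstract describes: a small CNN that maps an image block to a positive 8x8 quantization table, and a minimax-style loss that optimizes whichever of the (normalized) rate and distortion terms is currently worse. The layer sizes, the rate proxy, and the loss weights are illustrative assumptions, not the paper's actual architecture or objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QTableNet(nn.Module):
    """Toy CNN mapping an 8x8 image block (3 channels) to an 8x8 quantization table."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, block):                      # block: (B, 3, 8, 8)
        # softplus keeps every quantization step strictly positive
        return F.softplus(self.net(block)) + 1.0   # table: (B, 1, 8, 8)

def quantize_ste(x, step):
    """Quantize/dequantize with a straight-through gradient estimator."""
    return x + (torch.round(x / step) * step - x).detach()

def minimax_rd_loss(dct_coeffs, qtable, lam=1.0):
    """Optimize the worse of a crude rate proxy and the distortion (minimax-style)."""
    recon = quantize_ste(dct_coeffs, qtable)       # dct_coeffs: (B, 3, 8, 8)
    rate = (recon / qtable).abs().mean()           # proxy for the cost of coding the indices
    distortion = F.mse_loss(recon, dct_coeffs)
    return torch.maximum(rate, lam * distortion)
```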
Related papers
- Multi-Scale Invertible Neural Network for Wide-Range Variable-Rate Learned Image Compression [90.59962443790593]
In this paper, we present a variable-rate image compression model based on an invertible transform to overcome the limitations of existing variable-rate methods.
Specifically, we design a lightweight multi-scale invertible neural network, which maps the input image into multi-scale latent representations.
Experimental results demonstrate that the proposed method achieves state-of-the-art performance compared to existing variable-rate methods.
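As a rough, assumption-level illustration of what an invertible multi-scale mapping can look like (the paper's actual lightweight architecture differs): an additive coupling step is exactly invertible, and squeezing plus splitting channels yields latents at two scales.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveCoupling(nn.Module):
    """Invertible additive coupling: y1 = x1, y2 = x2 + t(x1)."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.t = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(half, half, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.t(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.t(y1)], dim=1)

def encode_two_scales(x, coupling1, coupling2):
    """x: (B, 3, H, W) -> fine latent (B, 6, H/2, W/2) and coarse latent (B, 24, H/4, W/4)."""
    h = coupling1(F.pixel_unshuffle(x, 2))           # invertible space-to-depth, then coupling
    z_fine, rest = h.chunk(2, dim=1)                 # split off half the channels as the fine scale
    z_coarse = coupling2(F.pixel_unshuffle(rest, 2))
    return z_fine, z_coarse

# Usage: encode_two_scales(img, AdditiveCoupling(12), AdditiveCoupling(24))
```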
arXiv Detail & Related papers (2025-03-27T09:08:39Z)
- A Fast Quantum Image Compression Algorithm based on Taylor Expansion [0.0]
In this study, we upgrade a quantum image compression algorithm within parameterized quantum circuits.
Our approach encodes image data as unitary operator parameters and applies the quantum compilation algorithm to emulate the encryption process.
Experimental results on benchmark images, including Lenna and Cameraman, show that our method achieves up to 86% reduction in the number of iterations.
arXiv Detail & Related papers (2025-02-15T06:03:49Z)
- CALLIC: Content Adaptive Learning for Lossless Image Compression [64.47244912937204]
CALLIC sets a new state-of-the-art (SOTA) for learned lossless image compression.
We propose a content-aware autoregressive self-attention mechanism by leveraging convolutional gating operations.
During encoding, we decompose pre-trained layers, including depth-wise convolutions, using low-rank matrices, and then adapt the incremental weights to the test image via Rate-guided Progressive Fine-Tuning (RPFT).
RPFT fine-tunes with a gradually increasing number of patches, sorted in descending order by estimated entropy, which optimizes the learning process and reduces adaptation time.
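As a rough sketch of the patch-ordering idea behind RPFT (the entropy proxy, stage schedule, and optimizer below are illustrative assumptions, not CALLIC's actual procedure):

```python
import torch

def estimated_entropy(patch):
    """Cheap per-patch entropy proxy from an 8-bit histogram (a stand-in for the paper's estimator)."""
    hist = torch.histc(patch.float(), bins=256, min=0, max=255)
    p = hist[hist > 0] / hist.sum()
    return -(p * p.log2()).sum()

def progressive_finetune(model, loss_fn, patches, optimizer, steps_per_stage=10):
    """Fine-tune on a gradually growing set of patches, highest estimated entropy first."""
    order = sorted(range(len(patches)), key=lambda i: -estimated_entropy(patches[i]))
    for frac in (0.25, 0.5, 1.0):                   # each stage sees a larger fraction of patches
        subset = [patches[i] for i in order[:max(1, int(frac * len(order)))]]
        for _ in range(steps_per_stage):
            optimizer.zero_grad()
            loss = sum(loss_fn(model(p.unsqueeze(0))) for p in subset) / len(subset)
            loss.backward()
            optimizer.step()
```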
arXiv Detail & Related papers (2024-12-23T10:41:18Z)
- Quantization-aware Matrix Factorization for Low Bit Rate Image Compression [8.009813033356478]
Lossy image compression is essential for efficient transmission and storage.
We introduce a quantization-aware matrix factorization (QMF) to develop a novel lossy image compression method.
Our method consistently outperforms JPEG at low bit rates below 0.25 bits per pixel (bpp) and remains comparable at higher bit rates.
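As a toy illustration of the general quantization-aware factorization idea (an alternating least-squares loop with integer rounding; the paper's actual QMF algorithm differs):

```python
import numpy as np

def quantized_factorization(X, rank=16, iters=50, seed=0):
    """Approximate a grayscale image X (H x W) by U @ V with integer-valued factors."""
    H, W = X.shape
    rng = np.random.default_rng(seed)
    U = np.round(rng.normal(size=(H, rank)))
    V = np.round(rng.normal(size=(rank, W)))
    for _ in range(iters):
        # least-squares update of each factor, then snap it back to integers
        V = np.round(np.linalg.lstsq(U, X, rcond=None)[0])
        U = np.round(np.linalg.lstsq(V.T, X.T, rcond=None)[0].T)
    return U, V

# Usage: U, V = quantized_factorization(gray.astype(float)); recon = np.clip(U @ V, 0, 255)
# The integer factors U and V are what would be entropy coded into the bitstream.
```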
arXiv Detail & Related papers (2024-08-22T19:08:08Z)
- DeepHQ: Learned Hierarchical Quantizer for Progressive Deep Image Coding [27.875207681547074]
Progressive image coding (PIC) aims to compress images at various qualities into a single bitstream.
Research on neural network (NN)-based PIC is in its early stages.
We propose an NN-based progressive coding method that is, to the best of our knowledge, the first to utilize learned quantization step sizes for each quantization layer.
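For concreteness, a minimal sketch of a learnable quantization step size with a straight-through gradient, one step per latent channel; the parameterization is an assumption for illustration and not DeepHQ's actual design:

```python
import math
import torch
import torch.nn as nn

class LearnedStepQuantizer(nn.Module):
    """Quantizer whose step size is learned jointly with the codec (one step per channel)."""
    def __init__(self, channels, init_step=1.0):
        super().__init__()
        # log-parameterized so the learned step stays strictly positive
        self.log_step = nn.Parameter(torch.full((1, channels, 1, 1), math.log(init_step)))

    def forward(self, y):                       # y: (B, C, H, W) latent
        step = self.log_step.exp()
        y_hat = torch.round(y / step) * step
        # straight-through estimator: rounded value forward, identity gradient backward
        return y + (y_hat - y).detach()
```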
arXiv Detail & Related papers (2024-08-22T06:32:53Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
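A rough sketch of how such an ensemble loss could be assembled is shown below; the Charbonnier term is standard, while the perceptual/style hooks, the adversarial term, and the weights are placeholders rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def charbonnier(x, y, eps=1e-3):
    """Charbonnier (smooth L1) loss: mean of sqrt((x - y)^2 + eps^2)."""
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def ensemble_loss(x, x_hat, perceptual_fn, style_fn, disc_logits, w=(1.0, 0.1, 0.1, 0.01)):
    """Weighted sum of Charbonnier, perceptual, style, and an adversarial term.
    perceptual_fn / style_fn stand in for e.g. VGG-feature losses; disc_logits
    are discriminator outputs for the reconstruction x_hat."""
    adv = F.softplus(-disc_logits).mean()       # non-saturating generator loss (an assumption)
    return (w[0] * charbonnier(x, x_hat)
            + w[1] * perceptual_fn(x, x_hat)
            + w[2] * style_fn(x, x_hat)
            + w[3] * adv)
```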
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Progressive Learning with Visual Prompt Tuning for Variable-Rate Image Compression [60.689646881479064]
We propose a progressive learning paradigm for transformer-based variable-rate image compression.
Inspired by visual prompt tuning, we use LPM to extract prompts for input images and hidden features at the encoder side and decoder side, respectively.
Our model outperforms all current variable-rate image compression methods in rate-distortion performance and approaches state-of-the-art fixed-rate image compression methods trained from scratch.
arXiv Detail & Related papers (2023-11-23T08:29:32Z)
- Reducing The Amortization Gap of Entropy Bottleneck In End-to-End Image Compression [2.1485350418225244]
End-to-end deep trainable models are beginning to exceed the performance of traditional handcrafted compression techniques on videos and images.
We propose a simple yet efficient instance-based parameterization method to reduce this amortization gap at a minor cost.
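The general idea of closing an amortization gap is to spend extra optimization on each test instance at encoding time. As a simplified, assumption-level stand-in (the paper adapts entropy-bottleneck parameters rather than the latents shown here), one can refine a per-image quantity against a rate-distortion objective with the decoder held fixed:

```python
import torch
import torch.nn.functional as F

def instance_refine(y_init, decode, rate_fn, x, lam=0.01, steps=100, lr=1e-2):
    """Refine a per-image latent y for one image x, trading rate (rate_fn) against distortion."""
    y = y_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(decode(y), x) + lam * rate_fn(y)   # decoder and entropy model stay fixed
        loss.backward()
        opt.step()
    return y.detach()
```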
arXiv Detail & Related papers (2022-09-02T11:43:45Z)
- Lossy Compression with Gaussian Diffusion [28.930398810600504]
We describe a novel lossy compression approach called DiffC which is based on unconditional diffusion generative models.
We implement a proof of concept and find that it works surprisingly well despite the lack of an encoder transform.
We show that a flow-based reconstruction achieves a 3 dB gain over ancestral sampling at high bitrates.
arXiv Detail & Related papers (2022-06-17T16:46:31Z)
- Wavelet Feature Maps Compression for Image-to-Image CNNs [3.1542695050861544]
We propose a novel approach for compressing high-resolution activation maps, integrated with point-wise convolutions.
We achieve compression rates equivalent to 1-4 bit activation quantization with relatively small and much more graceful degradation in performance.
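For reference, a single-level Haar wavelet transform over CNN activation maps looks like the sketch below; the pairing with point-wise convolutions and the actual compression scheme from the paper are omitted:

```python
import torch
import torch.nn.functional as F

def haar_dwt2d(x):
    """One-level 2D Haar transform of activations x: (B, C, H, W) -> (LL, LH, HL, HH),
    each of shape (B, C, H/2, W/2)."""
    B, C, H, W = x.shape
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    filt = torch.stack([ll, lh, hl, hh]).unsqueeze(1)               # (4, 1, 2, 2)
    filt = filt.repeat(C, 1, 1, 1).to(dtype=x.dtype, device=x.device)
    out = F.conv2d(x, filt, stride=2, groups=C)                     # depthwise: (B, 4C, H/2, W/2)
    return out.view(B, C, 4, H // 2, W // 2).unbind(dim=2)
```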
arXiv Detail & Related papers (2022-05-24T20:29:19Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
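A common way to penalize feature redundancy is to push the off-diagonal entries of the bottleneck's correlation matrix toward zero; the sketch below illustrates that generic idea and is not necessarily the exact penalty proposed in the paper:

```python
import torch

def redundancy_penalty(z):
    """z: (batch, features) bottleneck activations.
    Returns the mean squared off-diagonal correlation between features."""
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + 1e-8)
    corr = (z.T @ z) / z.shape[0]                       # (features, features)
    off_diag = corr - torch.diag(torch.diagonal(corr))
    return (off_diag ** 2).mean()

# Usage: total_loss = reconstruction_loss + beta * redundancy_penalty(bottleneck)
```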
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
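At its core, the INR approach fits a small coordinate network to one image and then compresses the network's weights. The sketch below shows that core loop with plain uniform weight quantization; the network size and training details are illustrative assumptions, and the paper's quantization-aware retraining and entropy coding stages are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fit_inr(image, steps=2000, lr=1e-3, hidden=64):
    """Fit an MLP f(x, y) -> RGB to one image tensor of shape (H, W, 3) with values in [0, 1]."""
    H, W, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = image.reshape(-1, 3)
    net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                        nn.Linear(hidden, hidden), nn.ReLU(),
                        nn.Linear(hidden, 3), nn.Sigmoid())
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(net(coords), target)
        loss.backward()
        opt.step()
    return net

def quantize_weights(net, bits=8):
    """Uniformly quantize all weights in place; a codec would store the integers plus per-tensor scales."""
    for p in net.parameters():
        scale = p.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
        p.data = torch.round(p.data / scale) * scale
```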
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which combines channel pruning and tensor decomposition to compress CNN models.
We achieve 52.9% FLOPs reduction by removing 48.4% parameters on ResNet-50 with only a Top-1 accuracy drop of 0.56% on ImageNet 2012.
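As a toy illustration of combining the two ingredients named above on a single weight matrix (prune low-norm output channels, then apply a truncated SVD, a simple form of tensor decomposition); this is a generic sketch, not the paper's Collaborative Compression algorithm:

```python
import numpy as np

def prune_and_decompose(W, keep_ratio=0.5, rank=16):
    """W: (out, in) weight matrix. Returns kept-channel indices and factors A, B with W[keep] ~ A @ B."""
    norms = np.linalg.norm(W, axis=1)
    keep = np.argsort(-norms)[: max(1, int(keep_ratio * W.shape[0]))]   # keep highest-norm channels
    U, S, Vt = np.linalg.svd(W[keep], full_matrices=False)
    r = min(rank, len(S))
    A = U[:, :r] * S[:r]          # (kept, r)
    B = Vt[:r]                    # (r, in)
    return keep, A, B

# The compressed layer stores A, B, and the kept indices instead of the full matrix,
# reducing both parameters and FLOPs.
```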
arXiv Detail & Related papers (2021-05-24T12:07:38Z)