Optimizing JPEG Quantization for Classification Networks
- URL: http://arxiv.org/abs/2003.02874v1
- Date: Thu, 5 Mar 2020 19:13:06 GMT
- Title: Optimizing JPEG Quantization for Classification Networks
- Authors: Zhijing Li, Christopher De Sa, Adrian Sampson
- Abstract summary: We show that a simple sorted random sampling method can exceed the performance of the standard JPEG Q-table.
New Q-tables can improve the compression rate by 10% to 200% at fixed accuracy, or improve accuracy by up to 2% at the same compression rate.
- Score: 32.20485214224392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning for computer vision depends on lossy image compression: it
reduces the storage required for training and test data and lowers transfer
costs in deployment. Mainstream datasets and imaging pipelines all rely on
standard JPEG compression. In JPEG, the degree of quantization of frequency
coefficients controls the lossiness: an 8 by 8 quantization table (Q-table)
decides both the quality of the encoded image and the compression ratio. While
a long history of work has sought better Q-tables, existing work either seeks
to minimize image distortion or to optimize for models of the human visual
system. This work asks whether JPEG Q-tables exist that are "better" for
specific vision networks and can offer better quality--size trade-offs than
ones designed for human perception or minimal distortion. We reconstruct an
ImageNet test set with higher resolution to explore the effect of JPEG
compression under novel Q-tables. We attempt several approaches to tune a
Q-table for a vision task. We find that a simple sorted random sampling method
can exceed the performance of the standard JPEG Q-table. We also use
hyper-parameter tuning techniques including bounded random search, Bayesian
optimization, and composite heuristic optimization methods. The new Q-tables we
obtained can improve the compression rate by 10% to 200% when the accuracy is
fixed, or improve accuracy by up to 2% at the same compression rate.
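The mechanics the abstract describes can be sketched in a few lines: an 8 by 8 Q-table divides (and later re-multiplies) each block of DCT coefficients, and the sorted random sampling idea draws random quantization steps and sorts them along the zig-zag frequency order so higher frequencies are quantized more coarsely. The sampling range and distribution below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Standard JPEG luminance Q-table (example table from Annex K of the JPEG standard).
STD_LUMA_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float64)

def quantize(dct_block, q_table):
    """Quantize an 8x8 block of DCT coefficients: divide and round."""
    return np.round(dct_block / q_table)

def dequantize(coded, q_table):
    """Invert quantization (up to rounding loss): multiply back."""
    return coded * q_table

def zigzag_indices(n=8):
    """(row, col) pairs in JPEG zig-zag order: by anti-diagonal,
    alternating traversal direction on each diagonal."""
    return sorted(
        ((r, c) for r in range(n) for c in range(n)),
        key=lambda rc: (rc[0] + rc[1],
                        rc[0] if (rc[0] + rc[1]) % 2 else rc[1]),
    )

def sorted_random_q_table(low=1, high=121, rng=None):
    """Sketch of sorted random sampling: draw 64 random steps, sort
    them, and lay them out in zig-zag order so low frequencies get
    fine quantization and high frequencies get coarse quantization.
    The [low, high] range here is an assumption for illustration."""
    rng = np.random.default_rng(rng)
    steps = np.sort(rng.integers(low, high + 1, size=64))
    q = np.empty((8, 8))
    for step, (r, c) in zip(steps, zigzag_indices()):
        q[r, c] = step
    return q
```

Candidate tables generated this way can then be scored by encoding a validation set and measuring classifier accuracy against file size, which is how a quality-size trade-off curve per network would be traced.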
Related papers
- Extreme Image Compression using Fine-tuned VQGANs [43.43014096929809]
We introduce vector quantization (VQ)-based generative models into the image compression domain.
The codebook learned by the VQGAN model yields a strong expressive capacity.
The proposed framework outperforms state-of-the-art codecs in terms of perceptual quality-oriented metrics.
arXiv Detail & Related papers (2023-07-17T06:14:19Z)
- Metaheuristic-based Energy-aware Image Compression for Mobile App Development [1.933681537640272]
We propose a novel objective function for population-based JPEG image compression.
Second, to tackle the lack of comprehensive coverage, we suggest a novel representation.
Third, we provide a comprehensive benchmark on 22 state-of-the-art and recently-introduced PBMH algorithms.
arXiv Detail & Related papers (2022-12-13T01:39:47Z)
- High-Perceptual Quality JPEG Decoding via Posterior Sampling [13.238373528922194]
We propose a different paradigm for JPEG artifact correction.
We aim to obtain sharp, detailed, and visually pleasing reconstructions that remain consistent with the compressed input.
Our solution offers a diverse set of plausible and fast reconstructions for a given input with perfect consistency.
arXiv Detail & Related papers (2022-11-21T19:47:59Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms manage the quality-size tradeoff by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
- Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform [58.60004238261117]
We propose a versatile deep image compression network based on Spatial Feature Transform (SFT, arXiv:1804.02815).
Our model covers a wide range of compression rates using a single model, which is controlled by arbitrary pixel-wise quality maps.
The proposed framework allows us to perform task-aware image compressions for various tasks.
arXiv Detail & Related papers (2021-08-21T17:30:06Z)
- Analyzing and Mitigating JPEG Compression Defects in Deep Learning [69.04777875711646]
We present a unified study of the effects of JPEG compression on a range of common tasks and datasets.
We show that there is a significant penalty on common performance metrics for high compression.
arXiv Detail & Related papers (2020-11-17T20:32:57Z)
- Learning to Improve Image Compression without Changing the Standard Decoder [100.32492297717056]
We propose learning to improve the encoding performance with the standard decoder.
Specifically, a frequency-domain pre-editing method is proposed to optimize the distribution of DCT coefficients.
We do not modify the JPEG decoder and therefore our approach is applicable when viewing images with the widely used standard JPEG decoder.
arXiv Detail & Related papers (2020-09-27T19:24:42Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the JPEG files quantization matrix.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.