RAGE for the Machine: Image Compression with Low-Cost Random Access for
Embedded Applications
- URL: http://arxiv.org/abs/2402.05974v1
- Date: Wed, 7 Feb 2024 19:28:33 GMT
- Title: RAGE for the Machine: Image Compression with Low-Cost Random Access for
Embedded Applications
- Authors: Christian D. Rask, Daniel E. Lucani
- Abstract summary: RAGE is an image compression framework that achieves four generally conflicting objectives.
We show that RAGE achieves compression ratios similar to or better than state-of-the-art lossless image compressors.
Our measurements also show that RAGE's lossy variant, RAGE-Q, outperforms JPEG severalfold in terms of distortion on embedded graphics.
- Score: 5.199703527082964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce RAGE, an image compression framework that achieves four
generally conflicting objectives: 1) good compression for a wide variety of
color images, 2) computationally efficient, fast decompression, 3) fast random
access of images with pixel-level granularity without the need to decompress
the entire image, 4) support for both lossless and lossy compression. To
achieve these, we rely on the recent concept of generalized deduplication (GD),
which is known to provide efficient lossless (de)compression and fast random
access in time-series data, and deliver key expansions suitable for image
compression, both lossless and lossy. Using nine different datasets, including
graphics, logos, and natural images, we show that RAGE achieves compression
ratios similar to or better than state-of-the-art lossless image compressors,
while delivering pixel-level random access capabilities. Tests on an ARM
Cortex-M33 platform show seek times between 9.9 and 40.6 ns and average
decoding times per pixel between 274 and 1226 ns. Our measurements also show
that RAGE's lossy variant, RAGE-Q, outperforms JPEG severalfold in terms of
distortion on embedded graphics and offers reasonable compression and
distortion for natural images.
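The abstract leans on generalized deduplication (GD) without spelling out its mechanics. A minimal sketch of the idea, under assumptions of our own (a flat 8-bit pixel stream, a 5/3 bit split into base and deviation, and illustrative function names; RAGE's actual transforms, chunking, and entropy coding are not described here): each chunk is split into a base that is deduplicated through a small dictionary and a deviation stored verbatim, and because every chunk maps to a fixed-size (base ID, deviation) pair, a single pixel can be located and decoded without decompressing anything else.

```python
# Minimal sketch of generalized-deduplication (GD) style coding with
# pixel-level random access. The 5/3 bit split, the flat pixel array,
# and all names are illustrative assumptions, not RAGE's actual format.

BASE_BITS = 5   # assumed: high bits form the deduplicated "base"
DEV_BITS = 3    # assumed: low bits form the per-pixel "deviation"

def gd_compress(pixels):
    """Split each 8-bit pixel into (base index, deviation)."""
    bases = []                 # dictionary of distinct bases
    base_ids = {}              # base value -> index in `bases`
    stream = []                # fixed-size (base_id, deviation) per pixel
    for p in pixels:
        base, dev = p >> DEV_BITS, p & ((1 << DEV_BITS) - 1)
        if base not in base_ids:
            base_ids[base] = len(bases)
            bases.append(base)
        stream.append((base_ids[base], dev))
    return bases, stream

def gd_random_access(bases, stream, i):
    """Decode pixel i directly; no other pixel is touched."""
    base_id, dev = stream[i]   # O(1) seek: entries are fixed-size
    return (bases[base_id] << DEV_BITS) | dev

pixels = [17, 18, 17, 200, 203, 18]
bases, stream = gd_compress(pixels)
assert all(gd_random_access(bases, stream, i) == p
           for i, p in enumerate(pixels))
```

The constant-size entries are what make the O(1) seek possible; compression comes from the base dictionary being much smaller than the number of pixels.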
Related papers
- Learned Image Compression for HE-stained Histopathological Images via Stain Deconvolution [33.69980388844034]
In this paper, we show that the commonly used JPEG algorithm is not best suited for further compression.
We propose Stain Quantized Latent Compression, a novel DL-based histopathology data compression approach.
We show that our approach yields superior performance in a classification downstream task, compared to traditional approaches like JPEG.
arXiv Detail & Related papers (2024-06-18T13:47:17Z) - Lossless Image Compression Using Multi-level Dictionaries: Binary Images [2.2940141855172036]
Lossless image compression is required in various applications to reduce storage or transmission costs of images.
We argue that compressibility of a color image is essentially derived from the patterns in its spatial structure.
The proposed scheme first learns dictionaries of $16\times16$, $8\times8$, $4\times4$, and $2\times2$ square pixel patterns from various datasets of binary images (a sketch of this multi-level matching idea appears after this list).
arXiv Detail & Related papers (2024-06-05T09:24:10Z) - Unified learning-based lossy and lossless JPEG recompression [15.922937139019547]
We propose a unified lossy and lossless JPEG recompression framework, which consists of a learned quantization table and Markovian hierarchical variational autoencoders.
Experiments show that our method can achieve arbitrarily low distortion when the bitrate is close to the upper bound.
arXiv Detail & Related papers (2023-12-05T12:07:27Z) - Are Visual Recognition Models Robust to Image Compression? [23.280147529096908]
We analyze the impact of image compression on visual recognition tasks.
We consider a wide range of compression levels, from 0.1 to 2 bits per pixel (bpp).
We find that for all three tasks, the recognition ability is significantly impacted when using strong compression.
arXiv Detail & Related papers (2023-04-10T11:30:11Z) - Learned Lossless Compression for JPEG via Frequency-Domain Prediction [50.20577108662153]
We propose a novel framework for learned lossless compression of JPEG images.
To enable learning in the frequency domain, DCT coefficients are partitioned into groups to utilize implicit local redundancy.
An autoencoder-like architecture is designed based on the weight-shared blocks to realize entropy modeling of grouped DCT coefficients.
arXiv Detail & Related papers (2023-03-05T13:15:28Z) - Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image
Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem in the approach of VAEs.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound (a generic sketch of this lossy-plus-residual recipe appears after this list).
arXiv Detail & Related papers (2022-09-11T12:11:56Z) - PILC: Practical Image Lossless Compression with an End-to-end GPU
Oriented Neural Framework [88.18310777246735]
We propose an end-to-end image compression framework that achieves 200 MB/s for both compression and decompression with a single NVIDIA Tesla V100 GPU.
Experiments show that our framework compresses better than PNG by a margin of 30% in multiple datasets.
arXiv Detail & Related papers (2022-06-10T03:00:10Z) - Learning Scalable $\ell_\infty$-constrained Near-lossless Image
Compression via Joint Lossy Image and Residual Compression [118.89112502350177]
We propose a novel framework for learning $\ell_\infty$-constrained near-lossless image compression.
We derive the probability model of the quantized residual by quantizing the learned probability model of the original residual.
arXiv Detail & Related papers (2021-03-31T11:53:36Z) - Towards Robust Data Hiding Against (JPEG) Compression: A
Pseudo-Differentiable Deep Learning Approach [78.05383266222285]
It is still an open challenge to achieve data hiding that is robust against these compressions.
Deep learning has shown large success in data hiding, while non-differentiability of JPEG makes it challenging to train a deep pipeline for improving robustness against lossy compression.
In this work, we propose a simple yet effective approach to address all the above limitations at once.
arXiv Detail & Related papers (2020-12-30T12:30:09Z) - Learning Better Lossless Compression Using Lossy Compression [100.50156325096611]
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.
We model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction.
Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder.
arXiv Detail & Related papers (2020-03-23T11:21:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.