FLLIC: Functionally Lossless Image Compression
- URL: http://arxiv.org/abs/2401.13616v2
- Date: Sun, 26 May 2024 07:28:50 GMT
- Title: FLLIC: Functionally Lossless Image Compression
- Authors: Xi Zhang, Xiaolin Wu
- Abstract summary: We propose a new paradigm of joint denoising and compression called functionally lossless image compression (FLLIC).
FLLIC achieves state-of-the-art performance in joint denoising and compression of noisy images and does so at a lower computational cost.
- Score: 16.892815659154053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, DNN models for lossless image coding have surpassed their traditional counterparts in compression performance, reducing the bit rate by about ten percent for natural color images. But even with these advances, mathematically lossless image compression (MLLIC) ratios for natural images still fall short of the bandwidth and cost-effectiveness requirements of most practical imaging and vision systems at present and beyond. To break the bottleneck of MLLIC in compression performance, we question the necessity of MLLIC, as almost all digital sensors inherently introduce acquisition noises, making mathematically lossless compression counterproductive. Therefore, in contrast to MLLIC, we propose a new paradigm of joint denoising and compression called functionally lossless image compression (FLLIC), which performs lossless compression of optimally denoised images (the optimality may be task-specific). Although not literally lossless with respect to the noisy input, FLLIC aims to achieve the best possible reconstruction of the latent noise-free original image. Extensive experiments show that FLLIC achieves state-of-the-art performance in joint denoising and compression of noisy images and does so at a lower computational cost.
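The FLLIC idea above, denoise first and then losslessly compress the denoised image, can be illustrated with a deliberately simple sketch. The 3x3 box-filter denoiser and the zlib codec below are hypothetical stand-ins for the paper's learned components; the point is only that acquisition noise inflates the lossless bit rate, so removing it before coding shrinks the stream.

```python
import zlib
import numpy as np

def denoise(img: np.ndarray) -> np.ndarray:
    """Toy 3x3 box-filter denoiser (stand-in for a learned, task-optimal one)."""
    padded = np.pad(img.astype(np.float32), 1, mode="edge")
    acc = np.zeros(img.shape, dtype=np.float32)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.clip(np.rint(acc / 9.0), 0, 255).astype(np.uint8)

def lossless_bits(img: np.ndarray) -> int:
    """Size of a lossless encoding (zlib stands in for a learned codec)."""
    return 8 * len(zlib.compress(img.tobytes(), level=9))

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128, dtype=np.float32)  # latent noise-free image
noisy = np.clip(clean + rng.normal(0, 8, clean.shape), 0, 255).astype(np.uint8)

bits_mllic = lossless_bits(noisy)            # mathematically lossless: codes the noise too
bits_fllic = lossless_bits(denoise(noisy))   # functionally lossless: codes the denoised image
```

Even this crude denoiser cuts the lossless bit count substantially, because the sensor noise is the least compressible part of the signal.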
Related papers
- Large Language Models for Lossless Image Compression: Next-Pixel Prediction in Language Space is All You Need [53.584140947828004]
Large language models (LLMs), with their unprecedented intelligence, can serve as general-purpose lossless compressors for various data modalities.
We propose P$^2$-LLM, a next-pixel prediction-based LLM, which integrates a range of carefully designed insights and methodologies.
Experiments on benchmark datasets demonstrate that P$2$-LLM can beat SOTA classical and learned codecs.
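Next-pixel prediction ties lossless coding to sequence modeling: an arithmetic coder spends about $-\log_2 p(x_t \mid x_{<t})$ bits per pixel, so a better predictor directly means a smaller file. The sketch below uses a Laplace-smoothed frequency counter as a toy stand-in for the LLM's next-pixel distribution (the predictor and sequences are illustrative, not from the paper).

```python
import math
import random
from collections import Counter

def ideal_code_length_bits(pixels):
    """Ideal arithmetic-coding length of a sequence under an adaptive
    next-symbol predictor: the sum of -log2 p(x_t | x_<t).

    A Laplace-smoothed frequency model over the 256 byte values stands in
    for a learned next-pixel distribution.
    """
    counts = Counter()
    seen = 0
    bits = 0.0
    for x in pixels:
        p = (counts[x] + 1) / (seen + 256)  # Laplace smoothing
        bits += -math.log2(p)
        counts[x] += 1
        seen += 1
    return bits

flat = [7] * 1000                                  # highly predictable "image row"
rng = random.Random(0)
noise = [rng.randrange(256) for _ in range(1000)]  # incompressible noise
```

The predictable sequence costs far fewer bits than the random one, which is exactly the gap a stronger (LLM-based) predictor exploits.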
arXiv Detail & Related papers (2024-11-19T12:15:40Z)
- Streaming Neural Images [56.41827271721955]
Implicit Neural Representations (INRs) are a novel paradigm for signal representation that have attracted considerable interest for image compression.
In this work, we explore the critical yet overlooked limiting factors of INRs, such as computational cost, unstable performance, and robustness.
arXiv Detail & Related papers (2024-09-25T17:51:20Z)
- Joint End-to-End Image Compression and Denoising: Leveraging Contrastive Learning and Multi-Scale Self-ONNs [18.71504105967766]
Noisy images are a challenge to image compression algorithms due to the inherent difficulty of compressing noise.
We propose a novel method for joint image compression and denoising that integrates a multi-scale denoiser composed of Self-Organizing Operational Neural Networks (Self-ONNs).
arXiv Detail & Related papers (2024-02-08T11:33:16Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Make Lossy Compression Meaningful for Low-Light Images [26.124632089007523]
We propose a novel joint solution to simultaneously achieve a high compression rate and good enhancement performance for low-light images.
We design an end-to-end trainable architecture, which includes the main enhancement branch and the signal-to-noise ratio (SNR) aware branch.
arXiv Detail & Related papers (2023-05-24T11:14:40Z)
- Improving Multi-generation Robustness of Learned Image Compression [16.86614420872084]
We show that learned image compression (LIC) can match the first-generation compression performance of BPG even after 50 rounds of re-encoding, without any change to the network structure.
arXiv Detail & Related papers (2022-10-31T03:26:11Z)
- Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem within a VAE framework.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
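The near-lossless quantization described above can be sketched with the standard uniform construction: with quantization step $2\tau+1$, every integer residual lands within $\tau$ of its bin centre, so the per-pixel error never exceeds the bound. This is a generic sketch of the construction, not code from the paper.

```python
import numpy as np

def quantize_residual(r: np.ndarray, tau: int) -> np.ndarray:
    """Uniform near-lossless quantizer for integer residuals.

    With step 2*tau + 1, residual r maps to the bin centre
    q = step * floor((r + tau) / step), which guarantees |r - q| <= tau.
    """
    step = 2 * tau + 1
    return step * ((r + tau) // step)  # numpy integer // floors, so negatives work too

rng = np.random.default_rng(0)
r = rng.integers(-100, 101, size=10_000)  # hypothetical prediction residuals
tau = 2                                   # maximum tolerated per-pixel error
q = quantize_residual(r, tau)
```

Besides bounding the error, the coarser alphabet (roughly one symbol per $2\tau+1$ residual values) is what lowers the entropy of the residual stream.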
arXiv Detail & Related papers (2022-09-11T12:11:56Z)
- Optimizing Image Compression via Joint Learning with Denoising [49.83680496296047]
High levels of noise are common in today's captured images due to the relatively small sensors used in smartphone cameras.
We propose a novel two-branch, weight-sharing architecture with plug-in feature denoisers to allow a simple and effective realization of the goal with little computational cost.
arXiv Detail & Related papers (2022-07-22T04:23:01Z)
- Learning Scalable $\ell_\infty$-constrained Near-lossless Image Compression via Joint Lossy Image and Residual Compression [118.89112502350177]
We propose a novel framework for learning $\ell_\infty$-constrained near-lossless image compression.
We derive the probability model of the quantized residual by quantizing the learned probability model of the original residual.
arXiv Detail & Related papers (2021-03-31T11:53:36Z)
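The probability-model derivation mentioned in the last entry, obtaining the distribution of the quantized residual from the learned distribution of the original residual, amounts to summing probability mass within each quantization bin. A toy sketch follows; the discrete PMF and the bin rule (step $2\tau+1$) are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def quantized_pmf(p_r: np.ndarray, offset: int, tau: int):
    """Derive P(q) by summing the learned residual probabilities in each bin.

    p_r[i] is the learned probability of residual value (i - offset);
    residual r falls into bin floor((r + tau) / (2*tau + 1)).
    """
    step = 2 * tau + 1
    values = np.arange(p_r.size) - offset  # residual values covered by p_r
    bins = (values + tau) // step          # quantization bin of each value
    p_q = np.bincount(bins - bins.min(), weights=p_r)
    return bins.min(), p_q                 # (index of the first bin, bin PMF)

# toy "learned" residual distribution over r in [-4, 4]
p_r = np.array([0.02, 0.05, 0.10, 0.18, 0.30, 0.18, 0.10, 0.05, 0.02])
first_bin, p_q = quantized_pmf(p_r, offset=4, tau=1)
```

Because each residual value belongs to exactly one bin, the derived PMF sums to one and can drive the same entropy coder as the original model.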
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.