Optimizing Image Compression via Joint Learning with Denoising
- URL: http://arxiv.org/abs/2207.10869v1
- Date: Fri, 22 Jul 2022 04:23:01 GMT
- Title: Optimizing Image Compression via Joint Learning with Denoising
- Authors: Ka Leong Cheng and Yueqi Xie and Qifeng Chen
- Abstract summary: High levels of noise usually exist in today's captured images due to the relatively small sensors in smartphone cameras.
We propose a novel two-branch, weight-sharing architecture with plug-in feature denoisers to allow a simple and effective realization of the goal with little computational cost.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High levels of noise usually exist in today's captured images due to the
relatively small sensors equipped in the smartphone cameras, where the noise
brings extra challenges to lossy image compression algorithms. Without the
capacity to tell the difference between image details and noise, general image
compression methods allocate additional bits to explicitly store the undesired
image noise during compression and restore the unpleasant noisy image during
decompression. Based on these observations, we optimize the image compression
algorithm to be noise-aware as joint denoising and compression to resolve the
bits misallocation problem. The key is to transform the original noisy images
to noise-free bits by eliminating the undesired noise during compression, where
the bits are later decompressed as clean images. Specifically, we propose a
novel two-branch, weight-sharing architecture with plug-in feature denoisers to
allow a simple and effective realization of the goal with little computational
cost. Experimental results show that our method gains a significant improvement
over the existing baseline methods on both the synthetic and real-world
datasets. Our source code is available at
https://github.com/felixcheng97/DenoiseCompression.
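The two-branch, weight-sharing idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the linear encoder stages, the soft-shrinkage feature denoiser, and the latent-alignment objective are all stand-in assumptions; the point is only that both branches reuse the same encoder weights, so the noisy branch adds a small plug-in module rather than a second encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder weights: both branches use the SAME parameters, so
# denoising adds only the small plug-in module, not a second encoder.
W1 = rng.standard_normal((16, 8)) * 0.1
W2 = rng.standard_normal((8, 4)) * 0.1

def denoiser(feat, strength=0.5):
    # Hypothetical plug-in feature denoiser: soft-shrinkage on
    # intermediate features (the paper learns these modules).
    return np.sign(feat) * np.maximum(
        np.abs(feat) - strength * np.abs(feat).mean(), 0.0)

def encode(x, use_denoiser):
    h = np.tanh(x @ W1)
    if use_denoiser:          # only the noisy branch inserts the denoiser
        h = denoiser(h)
    return h @ W2             # latent to be quantized / entropy-coded

clean = rng.standard_normal(16)
noisy = clean + 0.1 * rng.standard_normal(16)

z_clean = encode(clean, use_denoiser=False)  # clean guidance branch
z_noisy = encode(noisy, use_denoiser=True)   # branch deployed at test time

# A joint training loss would pull the noisy-branch latent toward the
# clean-branch latent, alongside the usual rate-distortion terms.
latent_gap = float(np.linalg.norm(z_clean - z_noisy))
print(latent_gap)
```

At inference only the noisy branch runs, which is why the extra computational cost stays small.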
Related papers
- MISC: Ultra-low Bitrate Image Semantic Compression Driven by Large Multimodal Model [78.4051835615796]
This paper proposes a method called Multimodal Image Semantic Compression.
It consists of an LMM encoder that extracts the semantic information of the image, a map encoder that locates the regions corresponding to the semantics, an image encoder that generates an extremely compressed bitstream, and a decoder that reconstructs the image from the above information.
It can achieve optimal consistency and perception results while reducing bitrate by roughly 50%, showing strong potential for next-generation storage and communication applications.
arXiv Detail & Related papers (2024-02-26T17:11:11Z) - Joint End-to-End Image Compression and Denoising: Leveraging Contrastive
Learning and Multi-Scale Self-ONNs [18.71504105967766]
Noisy images are a challenge to image compression algorithms due to the inherent difficulty of compressing noise.
We propose a novel method for joint image compression and denoising that integrates a multi-scale denoiser composed of Self-Organizing Operational Neural Networks (Self-ONNs).
arXiv Detail & Related papers (2024-02-08T11:33:16Z) - FLLIC: Functionally Lossless Image Compression [16.892815659154053]
We propose a new paradigm of joint denoising and compression called functionally lossless image compression (FLLIC).
FLLIC achieves state-of-the-art performance in joint denoising and compression of noisy images and does so at a lower computational cost.
arXiv Detail & Related papers (2024-01-24T17:44:33Z) - On the Importance of Denoising when Learning to Compress Images [34.99683302788977]
We propose to explicitly learn the image denoising task when training a compression codec.
We leverage the Natural Image Noise dataset, which offers a wide variety of scenes captured with various ISO numbers.
We show that a single model trained on a mixture of images with variable noise levels yields best-in-class results on both noisy and clean images.
arXiv Detail & Related papers (2023-07-12T15:26:04Z) - You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
However, existing LIC methods fail to explicitly explore the image structure and texture components that are crucial for compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies masked image modeling (MIM) and LIC end-to-end for extremely low-bitrate compression.
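The patch-sampling step described above can be sketched in a few lines of NumPy. Using per-patch variance as the texture proxy is an assumption made here for illustration; DA-Mask learns its structure- and texture-aware selection rather than thresholding variance.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((32, 32))  # toy grayscale image

def texture_scores(img, patch=8):
    # Per-patch variance as a crude structure/texture proxy
    # (an assumption; DA-Mask learns this selection).
    h, w = img.shape
    scores = {}
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            scores[(y, x)] = img[y:y + patch, x:x + patch].var()
    return scores

def sample_visible(img, keep_ratio=0.25, patch=8):
    scores = texture_scores(img, patch)
    k = max(1, int(len(scores) * keep_ratio))
    # Keep the k most textured patches visible; mask the rest.
    return sorted(scores, key=scores.get, reverse=True)[:k]

visible = sample_visible(img)
print(len(visible))  # 4 of 16 patches kept at keep_ratio=0.25
```

Only the visible patches are then passed to the compression model, which is what enables the extremely low bitrates.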
arXiv Detail & Related papers (2023-06-27T15:36:22Z) - A Unified Image Preprocessing Framework For Image Compression [5.813935823171752]
We propose a unified image compression preprocessing framework, called Kuchen, to improve the performance of existing codecs.
The framework consists of a hybrid data labeling system along with a learning-based backbone to simulate personalized preprocessing.
Results demonstrate that modern codecs optimized by our unified preprocessing framework consistently improve the efficiency of state-of-the-art compression.
arXiv Detail & Related papers (2022-08-15T10:41:00Z) - Analysis of the Effect of Low-Overhead Lossy Image Compression on the
Performance of Visual Crowd Counting for Smart City Applications [78.55896581882595]
Lossy image compression techniques can reduce the quality of the images, leading to accuracy degradation.
In this paper, we analyze the effect of applying low-overhead lossy image compression methods on the accuracy of visual crowd counting.
arXiv Detail & Related papers (2022-07-20T19:20:03Z) - Joint Image Compression and Denoising via Latent-Space Scalability [36.5211475555805]
We present a learnt image compression framework where image denoising and compression are performed jointly.
The proposed framework is compared against established compression and denoising benchmarks, and the experiments reveal considerable savings of up to 80%.
arXiv Detail & Related papers (2022-05-04T03:29:50Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
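The quantization and entropy-coding stages of such an INR pipeline can be sketched minimally in NumPy. This is a simplified illustration, not the paper's pipeline: the "trained" MLP weights are random stand-ins, the quantization-aware retraining step is omitted, and empirical entropy is used as an idealized stand-in for a real entropy coder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for trained INR (MLP) weights that represent an image.
weights = rng.standard_normal(1000) * 0.5

def quantize(w, step=0.05):
    # Uniform scalar quantization to integer bins; the paper also
    # retrains with quantization in the loop (omitted here).
    return np.round(w / step).astype(np.int64)

def entropy_bits(symbols):
    # Empirical entropy in bits/symbol: an idealized proxy for the
    # rate achieved by an actual entropy coder.
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

q = quantize(weights)
bits_per_weight = entropy_bits(q)
print(bits_per_weight)  # far below the 32 bits/weight of raw floats
```

Decoding amounts to dequantizing the weights and evaluating the MLP at every pixel coordinate, which is what makes the representation a codec.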
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - Synergy Between Semantic Segmentation and Image Denoising via Alternate
Boosting [102.19116213923614]
We propose a boosting network to perform denoising and segmentation alternately.
We observe that not only denoising helps combat the drop of segmentation accuracy due to noise, but also pixel-wise semantic information boosts the capability of denoising.
Experimental results show that the denoised image quality is improved substantially and the segmentation accuracy is improved to close to that of clean images.
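The alternation loop behind this boosting scheme can be sketched with toy stand-ins. The box-blur "denoiser" and mean-threshold "segmenter" below are placeholder assumptions; the paper uses learned networks for both, but the control flow (each task consuming the other's latest output) is the same.

```python
import numpy as np

rng = np.random.default_rng(2)
noisy = np.clip(rng.normal(0.5, 0.2, (8, 8)), 0.0, 1.0)

def denoise(img, seg_prior=None):
    # Stand-in denoiser: 3x3 box blur; a learned denoiser would also
    # condition on the segmentation prior in a learned way.
    pad = np.pad(img, 1, mode="edge")
    out = sum(pad[i:i + 8, j:j + 8] for i in range(3) for j in range(3)) / 9.0
    return out if seg_prior is None else 0.9 * out + 0.1 * seg_prior

def segment(img):
    # Stand-in segmenter: threshold at the mean intensity.
    return (img > img.mean()).astype(float)

img = noisy
for _ in range(3):                  # alternate boosting rounds
    seg = segment(img)              # segmentation on the current estimate
    img = denoise(img, seg_prior=seg)  # denoising guided by segmentation
```

Each round lets the segmentation map sharpen the denoiser's output and the cleaner image improve the next segmentation, mirroring the mutual boosting the paper reports.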
arXiv Detail & Related papers (2021-02-24T06:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.