LC-FDNet: Learned Lossless Image Compression with Frequency
Decomposition Network
- URL: http://arxiv.org/abs/2112.06417v1
- Date: Mon, 13 Dec 2021 04:49:34 GMT
- Title: LC-FDNet: Learned Lossless Image Compression with Frequency
Decomposition Network
- Authors: Hochang Rhee, Yeong Il Jang, Seyun Kim, Nam Ik Cho
- Abstract summary: Recent learning-based image compression methods do not account for the performance drop in the high-frequency region.
We propose a new method that performs encoding in a coarse-to-fine manner, separating the low- and high-frequency regions and processing them differently.
Experiments show that the proposed method achieves state-of-the-art performance for benchmark high-resolution datasets.
- Score: 14.848279912686948
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent learning-based lossless image compression methods encode an image in
units of subimages and achieve performance comparable to conventional
non-learning algorithms. However, these methods give equal consideration to the
low- and high-frequency areas and thus do not address the performance drop in
the high-frequency region. In this paper, we propose a new lossless image
compression method that performs encoding in a coarse-to-fine manner,
separating the low- and high-frequency regions and processing them differently.
We first compress the low-frequency components and then use them as additional
input for encoding the remaining high-frequency region. The low-frequency
components act as a strong prior in this case, which improves estimation in the
high-frequency area. In addition, we design the frequency decomposition process
to be adaptive to the color channel, spatial location, and image characteristics.
As a result, our method derives an image-specific optimal ratio of
low- to high-frequency components. Experiments show that the proposed method
achieves state-of-the-art performance on benchmark high-resolution datasets.
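To make the coarse-to-fine idea concrete, the following is a minimal, hand-crafted sketch of a lossless frequency split: a coarse (low-frequency) part predicts each pixel, and the exactly recoverable residual carries the high-frequency detail. This is an illustration of the general principle only, not the paper's learned, adaptive decomposition; the function names and the fixed 2x2-block scheme are my own.

```python
import numpy as np

def frequency_split(img):
    """Split an image into a coarse (low-frequency) part and a residual
    (high-frequency) part. The residual is exactly recoverable, so the
    decomposition is lossless. Illustrative stand-in for a learned,
    adaptive decomposition; assumes even height and width."""
    img = img.astype(np.int32)
    # Coarse part: 2x2 block averages (integer floor division).
    low = (img[0::2, 0::2] + img[0::2, 1::2] +
           img[1::2, 0::2] + img[1::2, 1::2]) // 4
    # Prediction for every pixel: nearest-neighbor upsampling of the coarse part.
    pred = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    # High-frequency residual: small in smooth regions, so cheap to entropy-code.
    high = img - pred
    return low, high

def frequency_merge(low, high):
    """Invert frequency_split exactly (lossless reconstruction)."""
    pred = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    return pred + high

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8))
low, high = frequency_split(img)
assert np.array_equal(frequency_merge(low, high), img)  # lossless round trip
```

In LC-FDNet the split itself is learned and adapts per channel, location, and image; here the point is only that coding the coarse part first gives the coder a strong predictor for the residual.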
Related papers
- WaveDH: Wavelet Sub-bands Guided ConvNet for Efficient Image Dehazing [20.094839751816806]
We introduce WaveDH, a novel and compact ConvNet designed to close the efficiency gap in image dehazing.
WaveDH leverages wavelet sub-bands for guided up- and downsampling and frequency-aware feature refinement.
It outperforms many state-of-the-art methods on several image dehazing benchmarks at significantly reduced computational cost.
arXiv Detail & Related papers (2024-04-02T02:52:05Z) - Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z) - End-to-End Optimized Image Compression with the Frequency-Oriented
Transform [8.27145506280741]
We propose the end-to-end optimized image compression model facilitated by the frequency-oriented transform.
The model enables scalable coding through the selective transmission of arbitrary frequency components.
Our model outperforms all traditional codecs, including the next-generation standard H.266/VVC, on the MS-SSIM metric.
arXiv Detail & Related papers (2024-01-16T08:16:10Z) - Hierarchical Disentangled Representation for Invertible Image Denoising
and Beyond [14.432771193620702]
Inspired by the observation that noise tends to appear in the high-frequency part of an image, we propose a fully invertible denoising method.
We decompose the noisy image into clean low-frequency and hybrid high-frequency parts with an invertible transformation.
In this way, denoising is made tractable by inversely merging the noiseless low- and high-frequency parts.
arXiv Detail & Related papers (2023-01-31T01:24:34Z) - Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image
Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose the rank-enhanced low-dimensional convolution set (Re-ConvSet) and incorporate it into the widely used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z) - Zero-shot Blind Image Denoising via Implicit Neural Representations [77.79032012459243]
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs).
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z) - Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG
Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z) - Exploring Inter-frequency Guidance of Image for Lightweight Gaussian
Denoising [1.52292571922932]
We propose a novel network architecture, denoted IGNet, that refines the frequency bands from low to high in a progressive manner.
With this design, more inter-frequency priors and information are utilized, so the model size can be reduced while still preserving competitive results.
arXiv Detail & Related papers (2021-12-22T10:35:53Z) - Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z) - Deep Unfolded Recovery of Sub-Nyquist Sampled Ultrasound Image [94.42139459221784]
We propose a reconstruction method from sub-Nyquist samples in the time and spatial domains that is based on unfolding the ISTA algorithm.
Our method allows reducing the number of array elements, sampling rate, and computational time while ensuring high quality imaging performance.
arXiv Detail & Related papers (2021-03-01T19:19:38Z) - Generalized Octave Convolutions for Learned Multi-Frequency Image
Compression [20.504561050200365]
We propose the first learned multi-frequency image compression and entropy coding approach.
It is based on the recently developed octave convolutions to factorize the latents into high and low frequency (resolution) components.
We show that the proposed generalized octave convolution can improve the performance of other auto-encoder-based computer vision tasks.
arXiv Detail & Related papers (2020-02-24T01:35:29Z)
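The octave-convolution idea underlying that approach can be sketched in plain NumPy with 1x1 kernels; this is an illustration of the standard octave factorization, not the authors' generalized version, and all names here are my own.

```python
import numpy as np

def avg_pool2(x):
    """Average-pool (C, H, W) features down to (C, H/2, W/2)."""
    return (x[:, 0::2, 0::2] + x[:, 0::2, 1::2] +
            x[:, 1::2, 0::2] + x[:, 1::2, 1::2]) / 4.0

def upsample2(x):
    """Nearest-neighbor upsample (C, H, W) features to (C, 2H, 2W)."""
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

def octave_conv_1x1(x_h, x_l, w_hh, w_hl, w_lh, w_ll):
    """A single octave convolution with 1x1 kernels.

    Features are kept as two groups: x_h at full resolution and x_l at
    half resolution ("one octave" lower). Four paths mix them:
    H->H, L->H (upsampled), L->L, and H->L (pooled)."""
    y_h = (np.einsum('oc,chw->ohw', w_hh, x_h) +
           np.einsum('oc,chw->ohw', w_lh, upsample2(x_l)))
    y_l = (np.einsum('oc,chw->ohw', w_ll, x_l) +
           np.einsum('oc,chw->ohw', w_hl, avg_pool2(x_h)))
    return y_h, y_l

rng = np.random.default_rng(0)
x_h = rng.standard_normal((4, 8, 8))   # high-frequency (full-resolution) group
x_l = rng.standard_normal((4, 4, 4))   # low-frequency (half-resolution) group
weights = [rng.standard_normal((6, 4)) for _ in range(4)]
y_h, y_l = octave_conv_1x1(x_h, x_l, *weights)
```

Keeping the low-frequency group at half resolution is what cuts the spatial redundancy in the latents that the compression model then entropy-codes.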
This list is automatically generated from the titles and abstracts of the papers on this site.