Multiple Latent Space Mapping for Compressed Dark Image Enhancement
- URL: http://arxiv.org/abs/2403.07622v1
- Date: Tue, 12 Mar 2024 13:05:51 GMT
- Title: Multiple Latent Space Mapping for Compressed Dark Image Enhancement
- Authors: Yi Zeng, Zhengning Wang, Yuxuan Liu, Tianjiao Zeng, Xuhang Liu,
Xinglong Luo, Shuaicheng Liu, Shuyuan Zhu and Bing Zeng
- Abstract summary: Existing dark image enhancement methods take uncompressed dark images as inputs and achieve great performance.
We propose a novel latent mapping network based on variational auto-encoder (VAE).
Comprehensive experiments demonstrate that the proposed method achieves state-of-the-art performance in compressed dark image enhancement.
- Score: 51.112925890246444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dark image enhancement aims at converting dark images to normal-light images.
Existing dark image enhancement methods take uncompressed dark images as inputs
and achieve great performance. However, in practice, dark images are often
compressed before storage or transmission over the Internet. Current methods
get poor performance when processing compressed dark images. Artifacts hidden
in the dark regions are amplified by current methods, which results in
uncomfortable visual effects for observers. Based on this observation, this
study aims at enhancing compressed dark images while avoiding compression
artifacts amplification. Since texture details intertwine with compression
artifacts in compressed dark images, detail enhancement and blocking artifacts
suppression contradict each other in image space. Therefore, we handle the task
in latent space. To this end, we propose a novel latent mapping network based
on variational auto-encoder (VAE). Firstly, different from previous VAE-based
methods that use single-resolution features only, we exploit multiple latent
spaces with multi-resolution features to reduce detail blur and improve image
fidelity. Specifically, we train two multi-level VAEs to project compressed
dark images and normal-light images into their latent spaces respectively.
Secondly, we leverage a latent mapping network to transform features from
compressed dark space to normal-light space. Specifically, since the
degradation models of darkness and compression are different from each other,
the latent mapping process is divided into an enlightening branch and a
deblocking branch. Comprehensive experiments demonstrate that the proposed
method achieves state-of-the-art performance in compressed dark image
enhancement.
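The pipeline described in the abstract — project the image into multiple latent spaces at different resolutions, then map each level through separate branches for the darkness and compression degradations — can be sketched with toy stand-ins. This is a minimal illustrative sketch only, not the paper's method: the average-pooling "encoder", the gain-based enlightening branch, and the box-blur deblocking branch are all hypothetical simplifications of the learned VAE encoders and mapping network.

```python
import numpy as np

def avg_pool(x, k=2):
    """Downsample a 2-D array by factor k via average pooling (toy encoder step)."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def encode_multilevel(img, levels=3):
    """Produce multi-resolution 'latents' — a stand-in for the multi-level VAE encoder."""
    latents = [img]
    for _ in range(levels - 1):
        latents.append(avg_pool(latents[-1]))
    return latents

def enlightening_branch(z, gain=4.0):
    """Hypothetical branch for the darkness degradation: a simple clipped gain."""
    return np.clip(z * gain, 0.0, 1.0)

def deblocking_branch(z):
    """Hypothetical branch for compression artifacts: a 3x3 box blur."""
    p = np.pad(z, 1, mode="edge")
    return sum(p[i:i + z.shape[0], j:j + z.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def map_latents(latents):
    """Apply both branches at every level, mimicking the split mapping process."""
    return [deblocking_branch(enlightening_branch(z)) for z in latents]

dark = np.full((8, 8), 0.1)                 # toy 'compressed dark' image
mapped = map_latents(encode_multilevel(dark))
print([z.shape for z in mapped])            # one mapped latent per resolution level
```

In the actual method the two branches are learned modules operating on VAE latents and the normal-light decoder reconstructs the final image; the sketch only shows the data flow of multi-level encoding followed by per-level, per-degradation mapping.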
Related papers
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
Low-Light Image Enhancement (LLIE) task tends to restore the details and visual information from corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI)
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Perceptual Image Compression with Cooperative Cross-Modal Side Information [53.356714177243745]
We propose a novel deep image compression method with text-guided side information to achieve a better rate-perception-distortion tradeoff.
Specifically, we employ the CLIP text encoder and an effective Semantic-Spatial Aware block to fuse the text and image features.
arXiv Detail & Related papers (2023-11-23T08:31:11Z)
- Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression on images processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
arXiv Detail & Related papers (2022-08-15T08:43:21Z)
- Deep Metric Color Embeddings for Splicing Localization in Severely Degraded Images [10.091921099426294]
We explore an alternative approach to splicing detection, which is potentially better suited for images in-the-wild.
We learn a deep metric space that is on one hand sensitive to illumination color and camera white-point estimation, but on the other hand insensitive to variations in object color.
In our evaluation, we show that the proposed embedding space outperforms the state of the art on images that have been subject to strong compression and downsampling.
arXiv Detail & Related papers (2022-06-21T21:28:40Z)
- Metric Learning for Anti-Compression Facial Forgery Detection [32.33501564446107]
We propose a novel anti-compression facial forgery detection framework.
It learns a compression-insensitive embedding feature space utilizing both original and compressed forgeries.
arXiv Detail & Related papers (2021-03-15T14:11:14Z)
- Lossy Image Compression with Normalizing Flows [19.817005399746467]
State-of-the-art solutions for deep image compression typically employ autoencoders which map the input to a lower dimensional latent space.
In contrast, traditional approaches in image compression allow for a larger range of quality levels.
arXiv Detail & Related papers (2020-08-24T14:46:23Z)
- What's in the Image? Explorable Decoding of Compressed Images [45.22726784749359]
We develop a novel decoder architecture for the ubiquitous JPEG standard, which allows traversing the set of decompressed images.
We exemplify our framework on graphical, medical and forensic use cases, demonstrating its wide range of potential applications.
arXiv Detail & Related papers (2020-06-16T17:15:44Z)
- Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.