RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors
- URL: http://arxiv.org/abs/2304.03994v1
- Date: Sat, 8 Apr 2023 12:12:24 GMT
- Title: RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors
- Authors: Rui-Qi Wu, Zheng-Peng Duan, Chun-Le Guo, Zhi Chai, Chong-Yi Li
- Abstract summary: Existing dehazing approaches struggle to process real-world hazy images owing to the lack of paired real data and robust priors.
We present a new paradigm for real image dehazing from the perspectives of synthesizing more realistic hazy data and introducing more robust priors into the network.
- Score: 14.432465539590481
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing dehazing approaches struggle to process real-world hazy images owing
to the lack of paired real data and robust priors. In this work, we present a
new paradigm for real image dehazing from the perspectives of synthesizing more
realistic hazy data and introducing more robust priors into the network.
Specifically, (1) instead of adopting the de facto physical scattering model,
we rethink the degradation of real hazy images and propose a phenomenological
pipeline considering diverse degradation types. (2) We propose a Real Image
Dehazing network via high-quality Codebook Priors (RIDCP). First, a VQGAN is
pre-trained on a large-scale high-quality dataset to obtain a discrete
codebook encapsulating high-quality priors (HQPs). After the features
degraded by haze are replaced with HQPs, the decoder, equipped with a novel
normalized feature alignment module, can effectively utilize the high-quality
features and produce clean results. However, although our degradation pipeline
drastically mitigates the domain gap between synthetic and real data, the gap
cannot be eliminated entirely, which makes HQP matching in the wild
challenging. Thus, we re-calculate the distance when matching features to the
HQPs via a controllable matching operation, which facilitates finding better
counterparts.
We provide a recommended setting for controlling the matching, derived from an
explainable solution, and users can also flexibly adjust the enhancement
degree according to their preference. Extensive experiments verify the effectiveness of our data
synthesis pipeline and the superior performance of RIDCP in real image
dehazing.
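The controllable matching operation described above can be sketched as a nearest-neighbor codebook lookup whose distances are re-weighted by a user-controlled scalar. The function name, the per-code weight, and the exact re-weighting form below are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def match_codebook(features, codebook, freq_weight=None, alpha=0.0):
    """Hypothetical sketch of controllable codebook matching.

    features    : (N, D) encoder features to be quantized
    codebook    : (K, D) high-quality prior (HQP) entries
    freq_weight : (K,) optional per-code weight, e.g. derived from how often
                  each code is activated on clean data (assumed form)
    alpha       : user-controlled scalar; larger values bias matching toward
                  highly weighted codes (the "enhancement degree")
    """
    # squared Euclidean distance between every feature and every code: (N, K)
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    if freq_weight is not None:
        # re-calculate the distance: subtract a bonus proportional to the
        # per-code weight so preferred codes win the matching more often
        d = d - alpha * freq_weight[None, :]
    idx = d.argmin(axis=1)  # index of the matched HQP per feature
    return codebook[idx], idx
```

With `alpha=0` this reduces to plain vector quantization; increasing `alpha` steers hazy features toward codes favored by the weighting, which is one way to expose an adjustable enhancement degree to the user.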
Related papers
- Realistic Extreme Image Rescaling via Generative Latent Space Learning [51.85790402171696]
We propose a novel framework called Latent Space Based Image Rescaling (LSBIR) for extreme image rescaling tasks.
LSBIR effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model to generate realistic HR images.
In the first stage, a pseudo-invertible encoder-decoder models the bidirectional mapping between the latent features of the HR image and the target-sized LR image.
In the second stage, the reconstructed features from the first stage are refined by a pre-trained diffusion model to generate more faithful and visually pleasing details.
arXiv Detail & Related papers (2024-08-17T09:51:42Z)
- Real-world Image Dehazing with Coherence-based Label Generator and Cooperative Unfolding Network [50.31598963315055]
Real-world Image Dehazing aims to alleviate haze-induced degradation in real-world settings.
We introduce a cooperative unfolding network that jointly models atmospheric scattering and image scenes.
We also propose the first RID-oriented iterative mean-teacher framework, termed the Coherence-based Label Generator.
arXiv Detail & Related papers (2024-06-12T07:44:22Z)
- Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption [57.056311855630916]
We propose a Controllable Generative Image Compression framework, Control-GIC.
It is capable of fine-grained adaption across a broad spectrum while ensuring high-fidelity and general compression.
We develop a conditional decoder that can trace back to historic encoded multi-granularity representations.
arXiv Detail & Related papers (2024-06-02T14:22:09Z)
- VQCNIR: Clearer Night Image Restoration with Vector-Quantized Codebook [16.20461368096512]
Night photography often struggles with challenges like low light and blurring, stemming from dark environments and prolonged exposures.
We believe in the strength of data-driven high-quality priors and strive to offer a reliable and consistent prior, circumventing the restrictions of manual priors.
We propose Clearer Night Image Restoration with Vector-Quantized Codebook (VQCNIR) to achieve remarkable and consistent restoration outcomes on real-world and synthetic benchmarks.
arXiv Detail & Related papers (2023-12-14T02:16:27Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Dual Associated Encoder for Face Restoration [68.49568459672076]
We propose a novel dual-branch framework named DAEFR to restore facial details from low-quality (LQ) images.
Our method introduces an auxiliary LQ branch that extracts crucial information from the LQ inputs.
We evaluate the effectiveness of DAEFR on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-08-14T17:58:33Z)
- ReContrast: Domain-Specific Anomaly Detection via Contrastive Reconstruction [29.370142078092375]
Most advanced unsupervised anomaly detection (UAD) methods rely on modeling feature representations of frozen encoder networks pre-trained on large-scale datasets.
We propose a novel epistemic UAD method, namely ReContrast, which optimizes the entire network to reduce biases towards the pre-trained image domain.
We conduct experiments on two popular industrial defect detection benchmarks and three medical image UAD tasks, which show the superiority of our method over current state-of-the-art methods.
arXiv Detail & Related papers (2023-06-05T05:21:15Z)
- High-Perceptual Quality JPEG Decoding via Posterior Sampling [13.238373528922194]
We propose a different paradigm for JPEG artifact correction.
We aim to obtain sharp, detailed, and visually pleasing reconstructed images, while remaining consistent with the compressed input.
Our solution offers a diverse set of plausible and fast reconstructions for a given input with perfect consistency.
arXiv Detail & Related papers (2022-11-21T19:47:59Z)
- Towards Robust Blind Face Restoration with Codebook Lookup Transformer [94.48731935629066]
Blind face restoration is a highly ill-posed problem that often requires auxiliary guidance.
We show that a learned discrete codebook prior in a small proxy space casts blind face restoration as a code prediction task.
We propose a Transformer-based prediction network, named CodeFormer, to model global composition and context of the low-quality faces.
arXiv Detail & Related papers (2022-06-22T17:58:01Z)
- SelFSR: Self-Conditioned Face Super-Resolution in the Wild via Flow Field Degradation Network [12.976199676093442]
We propose a novel domain-adaptive degradation network for face super-resolution in the wild.
Our model achieves state-of-the-art performance on both CelebA and a real-world face dataset.
arXiv Detail & Related papers (2021-12-20T17:04:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.