Unpaired Deep Image Dehazing Using Contrastive Disentanglement Learning
- URL: http://arxiv.org/abs/2203.07677v1
- Date: Tue, 15 Mar 2022 06:45:03 GMT
- Title: Unpaired Deep Image Dehazing Using Contrastive Disentanglement Learning
- Authors: Xiang Chen, Zhentao Fan, Zhuoran Zheng, Yufeng Li, Yufeng Huang,
Longgang Dai, Caihua Kong, Pengpeng Li
- Abstract summary: We present an effective image dehazing network learned from an unpaired set of clear and hazy images.
We show that our method performs favorably against existing state-of-the-art unpaired dehazing approaches.
- Score: 36.24651058888557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an effective image dehazing network learned from an
unpaired set of clear and hazy images. This paper provides a new perspective
to treat image dehazing as a two-class separated factor disentanglement task,
i.e., the task-relevant factor of clear image reconstruction and the
task-irrelevant factor of haze-relevant distribution. To achieve the
disentanglement of these two-class factors in deep feature space, contrastive
learning is introduced into a CycleGAN framework to learn disentangled
representations by guiding the generated images to be associated with latent
factors. With such formulation, the proposed contrastive disentangled dehazing
method (CDD-GAN) first develops negative generators to cooperate with the
encoder network to update alternately, so as to produce a queue of challenging
negative adversaries. Then these negative adversaries are trained end-to-end
together with the backbone representation network to enhance the discriminative
information and promote factor disentanglement performance by maximizing the
adversarial contrastive loss. During training, we further show that hard
negative examples can suppress the task-irrelevant factors and unpaired clear
examples can enhance the task-relevant factors, better facilitating
haze removal and image restoration. Extensive experiments on both
synthetic and real-world datasets demonstrate that our method performs
favorably against existing state-of-the-art unpaired dehazing approaches.
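The adversarial contrastive objective described in the abstract (pulling encoder features toward the clear-image factor while pushing them away from a queue of hard negative adversaries) can be sketched as an InfoNCE-style loss. The function and parameter names below are illustrative assumptions for a minimal sketch, not the paper's actual implementation:

```python
# Minimal sketch of an InfoNCE-style contrastive loss with a queue of hard
# negatives, assuming feature vectors that we L2-normalize ourselves.
# All names here are illustrative, not taken from CDD-GAN's code.
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """Pull the anchor toward the positive, push it away from negatives.

    anchor, positive: (d,) feature vectors.
    negatives: (n, d) queue of hard negative features (e.g. the outputs
    of adversarially updated negative generators, as in the paper).
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    a, p, n = l2norm(anchor), l2norm(positive), l2norm(negatives)
    pos_logit = np.dot(a, p) / temperature   # similarity to the positive
    neg_logits = n @ a / temperature         # similarities to each negative
    logits = np.concatenate([[pos_logit], neg_logits])
    # cross-entropy with the positive at index 0
    return -pos_logit + np.log(np.exp(logits).sum())
```

Harder negatives (those closer to the anchor) raise this loss, which is what gives the adversarially generated negative queue its training signal.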
Related papers
- DRACO-DehazeNet: An Efficient Image Dehazing Network Combining Detail Recovery and a Novel Contrastive Learning Paradigm [3.649619954898362]
Detail Recovery And Contrastive DehazeNet is a detailed image recovery network that tailors enhancements to specific dehazed scene contexts.
A major innovation is its ability to train effectively with limited data, achieved through a novel quadruplet loss-based contrastive dehazing paradigm.
arXiv Detail & Related papers (2024-10-18T16:48:31Z)
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates the unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, contributing to enhancing images with natural colors.
We also propose a novel perceptual loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textural details.
arXiv Detail & Related papers (2024-09-25T04:05:32Z)
- Robust image representations with counterfactual contrastive learning [17.273155534515393]
We introduce counterfactual contrastive learning, a novel framework leveraging recent advances in causal image synthesis.
Our method, evaluated across five datasets, outperforms standard contrastive learning in terms of robustness to acquisition shift.
Further experiments show that the proposed framework extends beyond acquisition shifts, with models trained with counterfactual contrastive learning substantially improving subgroup performance across biological sex.
arXiv Detail & Related papers (2024-09-16T15:11:00Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with a gain of 15.3% mIoU over advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light images in the inverse process, with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification [12.109442912963969]
We propose to leverage saliency-based explanation methods to create content-preserving masked augmentations for contrastive learning.
Our novel explanation-driven supervised contrastive learning (ExCon) methodology critically serves the dual goals of encouraging nearby image embeddings to have similar content and explanation.
We demonstrate that ExCon outperforms vanilla supervised contrastive learning in terms of classification, explanation quality, adversarial robustness as well as calibration of probabilistic predictions of the model in the context of distributional shift.
arXiv Detail & Related papers (2021-11-28T23:15:26Z)
- Contrastive Learning based Hybrid Networks for Long-Tailed Image Classification [31.647639786095993]
We propose a novel hybrid network structure composed of a supervised contrastive loss to learn image representations and a cross-entropy loss to learn classifiers.
Experiments on three long-tailed classification datasets demonstrate the advantage of the proposed contrastive learning based hybrid networks in long-tailed classification.
arXiv Detail & Related papers (2021-03-26T05:22:36Z)
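The hybrid objective in the last entry above (a supervised contrastive loss to shape representations plus a cross-entropy loss to train the classifier) can be sketched as a weighted sum of the two terms. The weight `alpha` and all function names below are illustrative assumptions, not taken from that paper:

```python
# Sketch of a hybrid loss: supervised contrastive (same-label samples are
# positives) plus softmax cross-entropy, combined with an assumed weight alpha.
import numpy as np

def cross_entropy(logits, label):
    # numerically stable softmax cross-entropy for one example
    z = logits - logits.max()
    return -z[label] + np.log(np.exp(z).sum())

def sup_con_loss(features, labels, i, temperature=0.1):
    # supervised contrastive loss for anchor i over a batch of features
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f[i] / temperature
    others = np.arange(len(labels)) != i          # exclude the anchor itself
    positives = others & (labels == labels[i])    # same-label samples
    if not positives.any():
        return 0.0
    log_prob = sims - np.log(np.exp(sims[others]).sum())
    return -log_prob[positives].mean()

def hybrid_loss(features, logits, labels, i, alpha=0.5):
    # weighted sum: representation term + classifier term
    return (alpha * sup_con_loss(features, labels, i)
            + (1 - alpha) * cross_entropy(logits[i], labels[i]))
```

Splitting the objective this way lets the contrastive term organize the embedding space by class while the cross-entropy term fits the classifier head, which is the division of labor the hybrid network above relies on.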
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.