UCL-Dehaze: Towards Real-world Image Dehazing via Unsupervised
Contrastive Learning
- URL: http://arxiv.org/abs/2205.01871v1
- Date: Wed, 4 May 2022 03:25:13 GMT
- Title: UCL-Dehaze: Towards Real-world Image Dehazing via Unsupervised
Contrastive Learning
- Authors: Yongzhen Wang, Xuefeng Yan, Fu Lee Wang, Haoran Xie, Wenhan Yang,
Mingqiang Wei, Jing Qin
- Abstract summary: This paper explores contrastive learning combined with adversarial training to leverage unpaired real-world hazy and clean images.
We propose an effective unsupervised contrastive learning paradigm for image dehazing, dubbed UCL-Dehaze.
We conduct comprehensive experiments to evaluate our UCL-Dehaze and demonstrate its superiority over state-of-the-art methods.
- Score: 57.40713083410888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the wisdom of training an image dehazing model on synthetic hazy data can alleviate the difficulty of collecting real-world hazy/clean image pairs, it brings the well-known domain-shift problem. From a different perspective, this paper explores contrastive learning with adversarial training to leverage unpaired real-world hazy and clean images, so that the need to bridge the gap between synthetic and real-world haze is avoided. We propose an effective unsupervised contrastive learning paradigm for image dehazing, dubbed UCL-Dehaze. Unpaired real-world clean and hazy images are easily captured and serve as the positive and negative samples, respectively, when training our UCL-Dehaze network. To train the network more effectively, we formulate a new self-contrastive perceptual loss function, which encourages the restored images to approach the positive samples and keep away from the negative samples in the embedding space. In addition to the overall network architecture of UCL-Dehaze, adversarial training is utilized to align the distributions of the positive samples and the dehazed images. Compared with recent image dehazing works, UCL-Dehaze does not require paired data during training and utilizes unpaired positive/negative data to better enhance the dehazing performance. We conduct comprehensive experiments to evaluate UCL-Dehaze and demonstrate its superiority over state-of-the-art methods, even though only 1,800 unpaired real-world images are used to train our network. Source code is available at https://github.com/yz-wang/UCL-Dehaze.
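A minimal sketch of the kind of self-contrastive perceptual loss described in the abstract is given below. This is a hedged illustration, not the authors' released implementation: the frozen VGG-19 feature extractor, the chosen layer taps, the L1 distances, and the ratio-style pull/push formulation are all assumptions made for clarity.

```python
# Hedged sketch of a self-contrastive perceptual loss in the spirit of UCL-Dehaze.
# Layer taps, distances, and weighting are assumptions, not the paper's exact loss.
import torch
import torch.nn as nn
from torchvision import models


class SelfContrastivePerceptualLoss(nn.Module):
    """Pulls a restored image toward an (unpaired) clean positive sample and
    pushes it away from the hazy negative input in a frozen VGG-19 feature space."""

    def __init__(self, layer_ids=(3, 8, 17, 26), eps=1e-7):
        super().__init__()
        # Frozen ImageNet-pretrained VGG-19 as the embedding network (assumed choice).
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layer_ids = set(layer_ids)  # assumed feature taps (relu1_2 ... relu4_4)
        self.l1 = nn.L1Loss()
        self.eps = eps

    def _features(self, x):
        # Inputs are assumed to be ImageNet-normalized RGB tensors.
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    def forward(self, restored, clean_pos, hazy_neg):
        loss = 0.0
        for fr, fp, fn in zip(self._features(restored),
                              self._features(clean_pos),
                              self._features(hazy_neg)):
            d_pos = self.l1(fr, fp)  # distance to the positive (clean) sample
            d_neg = self.l1(fr, fn)  # distance to the negative (hazy) sample
            loss = loss + d_pos / (d_neg + self.eps)  # pull close, push away
        return loss
```

Here `restored` would be the network's dehazed output, `clean_pos` an unpaired real clean image, and `hazy_neg` the hazy input; in the setting described above, such a term would be used alongside an adversarial loss that aligns the distributions of clean and dehazed images.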
Related papers
- WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning [17.129068060454255]
Single image dehazing is essential for applications such as autonomous driving and surveillance.
We propose an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform.
Our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods.
arXiv Detail & Related papers (2024-10-07T05:36:11Z)
- HazeCLIP: Towards Language Guided Real-World Image Dehazing [62.4454483961341]
Existing methods have achieved remarkable performance in single image dehazing, particularly on synthetic datasets.
This paper introduces HazeCLIP, a language-guided adaptation framework designed to enhance the real-world performance of pre-trained dehazing networks.
arXiv Detail & Related papers (2024-07-18T17:18:25Z)
- Transformer-based Clipped Contrastive Quantization Learning for Unsupervised Image Retrieval [15.982022297570108]
Unsupervised image retrieval aims to learn the important visual characteristics without any given labels in order to retrieve images similar to a given query image.
In this paper, we propose a TransClippedCLR model that encodes the global context of an image using a Transformer and the local context through patch-based processing.
Results using the proposed clipped contrastive learning are greatly improved on all datasets compared to the same backbone network with vanilla contrastive learning.
arXiv Detail & Related papers (2024-01-27T09:39:11Z)
- Towards Generic Image Manipulation Detection with Weakly-Supervised Self-Consistency Learning [49.43362803584032]
We propose weakly-supervised image manipulation detection.
Such a setting can leverage more training images and has the potential to adapt quickly to new manipulation techniques.
Two consistency properties are learned: multi-source consistency (MSC) and inter-patch consistency (IPC).
arXiv Detail & Related papers (2023-09-03T19:19:56Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real Data [58.50411487497146]
We propose a novel image dehazing framework collaborating with unlabeled real data.
First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps.
Then a disentangled-consistency mean-teacher network (DMT-Net) is employed to collaborate with unlabeled real data for boosting single image dehazing.
arXiv Detail & Related papers (2021-08-06T04:00:28Z)
- Contrastive Learning for Compact Single Image Dehazing [41.83007400559068]
We propose a novel contrastive regularization (CR) built upon contrastive learning to exploit the information of both hazy images and clear images as negative and positive samples, respectively.
CR ensures that the restored image is pulled closer to the clear image and pushed far away from the hazy image in the representation space.
Considering the trade-off between performance and memory storage, we develop a compact dehazing network based on an autoencoder-like framework.
arXiv Detail & Related papers (2021-04-19T14:56:21Z)
- Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training [3.5788754401889014]
We propose a dehazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN) for single image de-hazing.
Our experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM.
arXiv Detail & Related papers (2020-08-15T02:43:00Z)
- You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network [63.2086502120071]
We study how to make deep learning achieve image dehazing without training on ground-truth clean images (unsupervised) and without training on an image collection (untrained).
An unsupervised neural network avoids the labor-intensive collection of hazy-clean image pairs, and an untrained model is a "real" single image dehazing approach.
Motivated by the layer disentanglement idea, we propose a novel method, called You Only Look Yourself (YOLY), which could be one of the first unsupervised and untrained neural networks for image dehazing.
arXiv Detail & Related papers (2020-06-30T14:05:47Z)