Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training
- URL: http://arxiv.org/abs/2008.06632v1
- Date: Sat, 15 Aug 2020 02:43:00 GMT
- Title: Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training
- Authors: Zahra Anvari, Vassilis Athitsos
- Abstract summary: We propose a dehazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN) for single image de-hazing.
Our experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM.
- Score: 3.5788754401889014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single image de-hazing is a challenging problem, and it is far from solved.
Most current solutions require paired image datasets that include both hazy
images and their corresponding haze-free ground-truth images. However, in
reality, lighting conditions and other factors can produce a range of haze-free
images that can serve as ground truth for a hazy image, and a single ground
truth image cannot capture that range. This limits the scalability and
practicality of paired image datasets in real-world applications. In this
paper, we focus on unpaired single image de-hazing and do not rely on
ground-truth images or a physical scattering model. We reduce the image de-hazing
problem to an image-to-image translation problem and propose a dehazing
Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN).
The generator network of Dehaze-GLCGAN combines an encoder-decoder architecture
with residual blocks to better recover the haze-free scene. We also employ a
global-local discriminator structure to deal with spatially varying haze.
Through an ablation study, we demonstrate the effectiveness of different factors
in the performance of the proposed network. Our extensive experiments over
three benchmark datasets show that our network outperforms previous work in
terms of PSNR and SSIM while being trained on a smaller amount of data compared
to other methods.
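The abstract describes the architecture only at a high level: an encoder-decoder generator with residual blocks and a global-local discriminator pair, trained in a cycle-consistent GAN setup without paired ground truth. The PyTorch sketch below is a minimal illustration of those ideas; the layer counts, channel widths, the 64x64 crop size, and all class and function names (DehazeGenerator, PatchDiscriminator, global_local_scores, cycle_consistency_loss) are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the architectural ideas named in the abstract:
# an encoder-decoder generator with residual blocks, plus a global and a
# local discriminator. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection helps preserve scene detail


class DehazeGenerator(nn.Module):
    """Encoder-decoder generator with residual blocks (illustrative)."""

    def __init__(self, in_ch=3, base=64, n_res=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.res_blocks = nn.Sequential(*[ResidualBlock(base * 4) for _ in range(n_res)])
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 4, base * 2, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, in_ch, 7, padding=3), nn.Tanh(),
        )

    def forward(self, hazy):
        return self.decoder(self.res_blocks(self.encoder(hazy)))


class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic, instantiated once globally and once locally."""

    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def global_local_scores(disc_global, disc_local, image, crop=64):
    """Score the full image and a random crop, so spatially varying haze
    is judged both globally and locally (crop size is an assumption)."""
    _, _, h, w = image.shape
    top = torch.randint(0, h - crop + 1, (1,)).item()
    left = torch.randint(0, w - crop + 1, (1,)).item()
    patch = image[:, :, top:top + crop, left:left + crop]
    return disc_global(image), disc_local(patch)


def cycle_consistency_loss(real_hazy, reconstructed_hazy):
    """L1 cycle-consistency term as in CycleGAN-style training; the paper's
    exact loss terms and weights are not given in the abstract."""
    return nn.functional.l1_loss(reconstructed_hazy, real_hazy)


if __name__ == "__main__":
    g = DehazeGenerator()
    d_global, d_local = PatchDiscriminator(), PatchDiscriminator()
    hazy = torch.randn(1, 3, 256, 256)
    dehazed = g(hazy)
    s_global, s_local = global_local_scores(d_global, d_local, dehazed)
    print(dehazed.shape, s_global.shape, s_local.shape)
```

In a full cycle-consistent setup there would also be a second generator mapping clean images back to hazy ones, so that the reconstruction term above can be computed; that half of the cycle is omitted here for brevity.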
Related papers
- WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning [17.129068060454255]
Single image dehazing is essential for applications such as autonomous driving and surveillance.
We propose an enhanced semi-supervised dehazing network that integrates Contrastive Loss and Discrete Wavelet Transform.
Our proposed algorithm achieves superior performance and improved robustness compared to state-of-the-art single image dehazing methods.
arXiv Detail & Related papers (2024-10-07T05:36:11Z) - Non-aligned supervision for Real Image Dehazing [25.078264991940806]
We propose an innovative dehazing framework that operates under non-aligned supervision.
In particular, we explore a non-alignment scenario in which a clear reference image, unaligned with the input hazy image, is used to supervise the dehazing network.
Our scenario makes it easier to collect hazy/clear image pairs in real-world environments, even under conditions of misalignment and shifted views.
arXiv Detail & Related papers (2023-03-08T23:23:44Z) - Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z) - Single Image Dehazing with An Independent Detail-Recovery Network [117.86146907611054]
We propose a single image dehazing method with an independent Detail Recovery Network (DRN).
The DRN aims to recover the dehazed image details through local and global branches, respectively.
Our method outperforms the state-of-the-art dehazing methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-09-22T02:49:43Z) - From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real
Data [58.50411487497146]
We propose a novel image dehazing framework collaborating with unlabeled real data.
First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps.
Then a disentangled-consistency mean-teacher network (DMT-Net) is employed to leverage unlabeled real data to boost single image dehazing.
arXiv Detail & Related papers (2021-08-06T04:00:28Z) - Non-Homogeneous Haze Removal via Artificial Scene Prior and
Bidimensional Graph Reasoning [52.07698484363237]
We propose a Non-Homogeneous Haze Removal Network (NHRN) via artificial scene prior and bidimensional graph reasoning.
Our method achieves superior performance over many state-of-the-art algorithms for both the single image dehazing and hazy image understanding tasks.
arXiv Detail & Related papers (2021-04-05T13:04:44Z) - A GAN-Based Input-Size Flexibility Model for Single Image Dehazing [16.83211957781034]
This paper concentrates on the challenging task of single image dehazing.
We design a novel model to directly generate the haze-free image.
For this reason, and to handle various image sizes, we propose a novel input-size flexibility conditional generative adversarial network (cGAN) for single image dehazing.
arXiv Detail & Related papers (2021-02-19T08:27:17Z) - NH-HAZE: An Image Dehazing Benchmark with Non-Homogeneous Hazy and
Haze-Free Images [95.00583228823446]
NH-HAZE is a non-homogeneous realistic dataset with pairs of real hazy and corresponding haze-free images.
This work presents an objective assessment of several state-of-the-art single image dehazing methods that were evaluated using the NH-HAZE dataset.
arXiv Detail & Related papers (2020-05-07T15:50:37Z) - FD-GAN: Generative Adversarial Networks with Fusion-discriminator for
Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)