FD-GAN: Generative Adversarial Networks with Fusion-discriminator for
Single Image Dehazing
- URL: http://arxiv.org/abs/2001.06968v2
- Date: Wed, 24 Mar 2021 08:27:44 GMT
- Title: FD-GAN: Generative Adversarial Networks with Fusion-discriminator for
Single Image Dehazing
- Authors: Yu Dong, Yihao Liu, He Zhang, Shifeng Chen, Yu Qiao
- Abstract summary: We propose a fully end-to-end Generative Adversarial Network with a Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
- Score: 48.65974971543703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, convolutional neural networks (CNNs) have achieved great
improvements in single image dehazing and attracted much attention in research.
However, most existing learning-based dehazing methods are not fully end-to-end; they
still follow the traditional dehazing procedure: first estimate the medium
transmission and the atmospheric light, then recover the haze-free image based
on the atmospheric scattering model. However, in practice, due to lack of
priors and constraints, it is hard to precisely estimate these intermediate
parameters. Inaccurate estimation further degrades the performance of dehazing,
resulting in artifacts, color distortion and insufficient haze removal. To
address this, we propose a fully end-to-end Generative Adversarial Network
with a Fusion-discriminator (FD-GAN) for image dehazing. With the proposed
Fusion-discriminator, which takes frequency information as an additional prior,
our model can generate more natural and realistic dehazed images with less
color distortion and fewer artifacts. Moreover, we synthesize a large-scale
training dataset including various indoor and outdoor hazy images to boost the
performance, and we reveal that for learning-based dehazing methods the
performance is strongly influenced by the training data. Experiments show
that our method reaches state-of-the-art performance on both public synthetic
datasets and real-world images with more visually pleasing dehazed results.
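
For context, the traditional pipeline that FD-GAN sidesteps inverts the atmospheric scattering model I(x) = J(x) t(x) + A (1 - t(x)), where I is the observed hazy image, J the haze-free scene radiance, t the medium transmission and A the atmospheric light; recovering J(x) = (I(x) - A) / t(x) + A makes any error in the estimated t or A propagate directly into the output, which is the failure mode the abstract describes.

Below is a minimal sketch (not the authors' released code) of the fusion-discriminator idea: the discriminator is conditioned on frequency information by concatenating the image with a low-frequency (Gaussian-blurred) component and a high-frequency (residual) component. The kernel size, sigma, channel widths and PatchGAN-style layout are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.5):
    # Normalized 2-D Gaussian kernel (size and sigma are assumed values).
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def frequency_components(img, kernel):
    # Split a (B, 3, H, W) batch into low-frequency (blurred) and
    # high-frequency (residual) parts.
    k = kernel.to(img.device).repeat(3, 1, 1, 1)   # one kernel per RGB channel
    low = F.conv2d(img, k, padding=kernel.shape[-1] // 2, groups=3)
    high = img - low
    return low, high

class FusionDiscriminator(nn.Module):
    # PatchGAN-style discriminator over [image, low-freq, high-freq] (9 channels).
    def __init__(self, base=64):
        super().__init__()
        self.register_buffer("blur", gaussian_kernel())
        layers, in_ch = [], 9
        for out_ch in (base, base * 2, base * 4):
            layers += [nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            in_ch = out_ch
        layers.append(nn.Conv2d(in_ch, 1, 4, padding=1))   # real/fake score map
        self.net = nn.Sequential(*layers)

    def forward(self, img):
        low, high = frequency_components(img, self.blur)
        return self.net(torch.cat([img, low, high], dim=1))

# Usage: score a batch of (de)hazed images during adversarial training.
disc = FusionDiscriminator()
scores = disc(torch.rand(2, 3, 128, 128))   # -> (2, 1, 15, 15) score map
```

In adversarial training, such a discriminator would score both the generator's dehazed output and the ground-truth clear image, so the generator is pushed to match the smooth color and illumination statistics (low frequencies) as well as the edges and textures (high frequencies) of real haze-free images.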
Related papers
- LMHaze: Intensity-aware Image Dehazing with a Large-scale Multi-intensity Real Haze Dataset [14.141433473509826]
We present LMHaze, a large-scale, high-quality real-world dataset.
LMHaze comprises paired hazy and haze-free images captured in diverse indoor and outdoor environments.
To better handle images with different haze intensities, we propose a mixture-of-experts model based on Mamba.
arXiv Detail & Related papers (2024-10-21T15:20:02Z) - One Step Diffusion-based Super-Resolution with Time-Aware Distillation [60.262651082672235]
Diffusion-based image super-resolution (SR) methods have shown promise in reconstructing high-resolution images with fine details from low-resolution counterparts.
Recent techniques have been devised to enhance the sampling efficiency of diffusion-based SR models via knowledge distillation.
We propose a time-aware diffusion distillation method, named TAD-SR, to accomplish effective and efficient image super-resolution.
arXiv Detail & Related papers (2024-08-14T11:47:22Z) - RSHazeDiff: A Unified Fourier-aware Diffusion Model for Remote Sensing Image Dehazing [32.16602874389847]
Haze severely degrades the visual quality of remote sensing images.
We propose a novel unified Fourier-aware diffusion model for remote sensing image dehazing, termed RSHazeDiff.
Experiments on both synthetic and real-world benchmarks validate the favorable performance of RSHazeDiff over state-of-the-art methods.
arXiv Detail & Related papers (2024-05-15T04:22:27Z) - DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z) - Frequency Compensated Diffusion Model for Real-scene Dehazing [6.105813272271171]
We consider a dehazing framework based on conditional diffusion models for improved generalization to real haze.
The proposed dehazing diffusion model significantly outperforms state-of-the-art methods on real-world images.
arXiv Detail & Related papers (2023-08-21T06:50:44Z) - ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework can work with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z) - Denoising Diffusion Models for Plug-and-Play Image Restoration [135.6359475784627]
This paper proposes DiffPIR, which integrates the traditional plug-and-play method into the diffusion sampling framework.
Compared to plug-and-play IR methods that rely on discriminative Gaussian denoisers, DiffPIR is expected to inherit the generative ability of diffusion models.
arXiv Detail & Related papers (2023-05-15T20:24:38Z) - From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real
Data [58.50411487497146]
We propose a novel image dehazing framework collaborating with unlabeled real data.
First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps.
Then a disentangled-consistency mean-teacher network (DMT-Net) leverages unlabeled real data to boost single image dehazing.
arXiv Detail & Related papers (2021-08-06T04:00:28Z) - Advanced Multiple Linear Regression Based Dark Channel Prior Applied on
Dehazing Image and Generating Synthetic Haze [0.6875312133832078]
The authors propose a multiple linear regression haze-removal model based on the widely adopted Dark Channel Prior dehazing algorithm.
To increase object detection accuracy in the hazy environment, the authors present an algorithm to build a synthetic hazy COCO training dataset.
arXiv Detail & Related papers (2021-03-12T03:32:08Z) - Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training [3.5788754401889014]
We propose a dehazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN) for single image de-hazing.
Our experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM.
arXiv Detail & Related papers (2020-08-15T02:43:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.