A GAN-Based Input-Size Flexibility Model for Single Image Dehazing
- URL: http://arxiv.org/abs/2102.09796v1
- Date: Fri, 19 Feb 2021 08:27:17 GMT
- Title: A GAN-Based Input-Size Flexibility Model for Single Image Dehazing
- Authors: Shichao Kan, Yue Zhang, Fanghui Zhang and Yigang Cen
- Abstract summary: This paper concentrates on the challenging task of single image dehazing.
We design a novel model to directly generate the haze-free image.
Considering this reason and various image sizes, we propose a novel input-size flexibility conditional generative adversarial network (cGAN) for single image dehazing.
- Score: 16.83211957781034
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Image-to-image translation based on generative adversarial network (GAN) has
achieved state-of-the-art performance in various image restoration
applications. Single image dehazing is a typical example, which aims to recover
the haze-free image from a hazy one. This paper concentrates on the challenging
task of single image dehazing. Based on the atmospheric scattering model, we
design a novel model to directly generate the haze-free image. The main
challenge of image dehazing is that the atmospheric scattering model has two
parameters, i.e., the transmission map and the atmospheric light. When we
estimate them separately, the errors accumulate and compromise dehazing quality.
Motivated by this and by the variety of image sizes, we propose a novel
input-size flexibility conditional generative adversarial network (cGAN) for
single image dehazing, which is input-size flexible at both the training and
test stages of image-to-image translation within the cGAN framework. We propose
a simple and effective U-type residual network (UR-Net) to construct the
generator, and adopt spatial pyramid pooling (SPP) to design the discriminator.
Moreover, the model is trained with a multi-term loss function, in which the
consistency loss is newly designed in this paper. We finally build a multi-scale cGAN fusion
model to realize state-of-the-art single image dehazing performance. The
proposed models receive a hazy image as input and directly output a haze-free
one. Experimental results demonstrate the effectiveness and efficiency of the
proposed models.
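The abstract's central argument rests on the atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the haze-free scene, t the transmission map, and A the atmospheric light. The sketch below is a minimal numpy illustration (not the paper's network) of why the classic two-step pipeline is fragile: inverting the model with estimated t and A propagates errors from both parameters into the recovered image, which is the motivation for generating the haze-free image directly. All function names here are hypothetical.

```python
import numpy as np

# Atmospheric scattering model (ASM):
#   I(x) = J(x) * t(x) + A * (1 - t(x))
# I: observed hazy image, J: haze-free scene radiance,
# t: transmission map, A: global atmospheric light.

def synthesize_haze(J, t, A):
    """Apply the ASM to a clean image J (H x W x 3, values in [0, 1])."""
    t = t[..., None]  # broadcast the H x W transmission map over channels
    return J * t + A * (1.0 - t)

def dehaze_with_estimates(I, t_hat, A_hat, eps=0.1):
    """Invert the ASM using *estimated* t and A (the two-step pipeline).
    Errors in t_hat and A_hat both propagate into the recovered J."""
    t_hat = np.clip(t_hat, eps, 1.0)[..., None]
    return (I - A_hat * (1.0 - t_hat)) / t_hat

rng = np.random.default_rng(0)
J = rng.random((4, 4, 3))        # toy clean image
t = np.full((4, 4), 0.6)         # uniform transmission for simplicity
A = 0.9                          # atmospheric light

I = synthesize_haze(J, t, A)

# Perfect parameters recover J exactly ...
J_exact = dehaze_with_estimates(I, t, A)
print(np.allclose(J_exact, J))   # True

# ... but modest errors in both t and A accumulate in the output.
J_noisy = dehaze_with_estimates(I, t * 0.9, A * 1.1)
err = np.abs(J_noisy - J).mean()
print(err > 0.05)                # True: noticeable reconstruction error
```

A direct image-to-image generator, as proposed in the paper, sidesteps this inversion entirely: the network maps I to J without ever exposing intermediate estimates of t and A.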
Related papers
- E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation [69.72194342962615]
We introduce and address a novel research direction: can the process of distilling GANs from diffusion models be made significantly more efficient?
First, we construct a base GAN model with generalized features, adaptable to different concepts through fine-tuning, eliminating the need for training from scratch.
Second, we identify crucial layers within the base GAN model and employ Low-Rank Adaptation (LoRA) with a simple yet effective rank search process, rather than fine-tuning the entire base model.
Third, we investigate the minimal amount of data necessary for fine-tuning, further reducing the overall training time.
arXiv Detail & Related papers (2024-01-11T18:59:14Z) - Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional
Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z) - Breaking Through the Haze: An Advanced Non-Homogeneous Dehazing Method
based on Fast Fourier Convolution and ConvNeXt [14.917290578644424]
Haze usually leads to deteriorated images with low contrast, color shift and structural distortion.
We propose a novel two-branch network that leverages the 2D discrete wavelet transform (DWT), fast Fourier convolution (FFC) residual blocks and a pretrained ConvNeXt model.
Our model is able to effectively explore global contextual information and produce images with better perceptual quality.
arXiv Detail & Related papers (2023-05-08T02:59:02Z) - SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z) - Robust Single Image Dehazing Based on Consistent and Contrast-Assisted
Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework, where a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - Towards a Unified Approach to Single Image Deraining and Dehazing [16.383099109400156]
We develop a new physical model for the rain effect and show that the well-known atmosphere scattering model (ASM) for the haze effect naturally emerges as its homogeneous continuous limit.
We also propose a Densely Scale-Connected Attentive Network (DSCAN) that is suitable for both deraining and dehazing tasks.
arXiv Detail & Related papers (2021-03-26T01:35:43Z) - Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training [3.5788754401889014]
We propose a dehazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN) for single image de-hazing.
Our experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM.
arXiv Detail & Related papers (2020-08-15T02:43:00Z) - FD-GAN: Generative Adversarial Networks with Fusion-discriminator for
Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.