Single image dehazing via combining the prior knowledge and CNNs
- URL: http://arxiv.org/abs/2111.05701v1
- Date: Wed, 10 Nov 2021 14:18:25 GMT
- Title: Single image dehazing via combining the prior knowledge and CNNs
- Authors: Yuwen Li, Chaobing Zheng, Shiqian Wu, Wangming Xu
- Abstract summary: An end-to-end system is proposed in this paper to reduce defects by combining prior knowledge with a deep learning method.
Experiments show that the proposed method achieves superior performance over existing methods.
- Score: 6.566615606042994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing single image haze removal algorithms based on prior knowledge and
assumptions are subject to many limitations in practical applications and can
suffer from noise and halo amplification. An end-to-end system is proposed in
this paper to reduce these defects by combining prior knowledge with a deep
learning method. The hazy image is first decomposed into a base layer and
detail layers through a weighted guided image filter (WGIF), and the airlight
is estimated from the base layer. The base layer is then passed to an
efficient deep convolutional network to estimate the transmission map. To
restore objects close to the camera completely without amplifying noise in the
sky or in heavily hazy regions, an adaptive strategy is proposed based on the
value of the transmission map: if the transmission at a pixel is small, the
base layer of the hazy image is used to recover the haze-free image via the
atmospheric scattering model; otherwise, the hazy image itself is used.
Experiments show that the proposed method achieves superior performance over
existing methods.
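The adaptive recovery described above can be sketched as follows. This is an illustrative reading of the abstract rather than the authors' implementation: the transmission threshold `t_thresh`, the lower bound `t_min`, and the array conventions (images as HxWx3 floats in [0, 1], airlight as a length-3 vector) are assumptions, and the WGIF decomposition, airlight estimation, and transmission-map CNN are assumed to have been run beforehand.

```python
import numpy as np

def adaptive_dehaze(hazy, base_layer, transmission, airlight,
                    t_thresh=0.5, t_min=0.1):
    """Illustrative sketch of the adaptive recovery step (values assumed).

    Where the estimated transmission is small (sky or heavy haze), the
    smoothed WGIF base layer is used as the input to the scattering-model
    inversion so that noise in the detail layers is not amplified; elsewhere
    the full hazy image is used so that nearby objects keep their detail.
    """
    mask = (transmission < t_thresh)[..., None]        # low-transmission pixels
    source = np.where(mask, base_layer, hazy)          # per-pixel input choice
    t = np.clip(transmission, t_min, 1.0)[..., None]   # avoid division by ~0
    recovered = (source - airlight) / t + airlight     # invert I = J*t + A*(1-t)
    return np.clip(recovered, 0.0, 1.0)
```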
Related papers
- Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN [60.257791714663725]
We propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater scenes.
The proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
arXiv Detail & Related papers (2022-12-23T03:00:28Z) - See Blue Sky: Deep Image Dehaze Using Paired and Unpaired Training Images [73.23687409870656]
We propose a cycle generative adversarial network to construct a novel end-to-end image dehazing model.
We adopt outdoor image datasets to train our model, including a real-world unpaired image dataset and a paired image dataset.
Based on the cycle structure, our model adds four kinds of loss functions to constrain the result: adversarial loss, cycle consistency loss, photorealism loss, and paired L1 loss.
arXiv Detail & Related papers (2022-10-14T07:45:33Z) - Dual-Scale Single Image Dehazing Via Neural Augmentation [29.019279446792623]
A novel single image dehazing algorithm is introduced by combining model-based and data-driven approaches.
Results indicate that the proposed algorithm can remove haze well from real-world and synthetic hazy images.
arXiv Detail & Related papers (2022-09-13T11:56:03Z) - Model-Based Single Image Deep Dehazing [20.39952114471173]
A novel single image dehazing algorithm is introduced by fusing model-based and data-driven approaches.
Experimental results indicate that the proposed algorithm can remove haze well from real-world and synthetic hazy images.
arXiv Detail & Related papers (2021-11-22T01:57:51Z) - Multi-Scale Single Image Dehazing Using Laplacian and Gaussian Pyramids [17.99612951030546]
Ambiguity between object radiance and haze, and noise amplification in sky regions, are two inherent problems of model-driven single image dehazing.
A novel haze line averaging is proposed to reduce the morphological artifacts caused by the DDAP.
A multi-scale dehazing algorithm is then proposed to address the latter problem by adopting Laplacian and Gaussian pyramids.
arXiv Detail & Related papers (2021-11-10T14:17:58Z) - Unsupervised Neural Rendering for Image Hazing [31.108654945661705]
Image hazing aims to render a hazy image from a given clean one, which has a variety of practical applications such as gaming, filming, photographic filtering, and image dehazing.
We propose a neural rendering method for image hazing, dubbed HazeGEN. To be specific, HazeGEN is a knowledge-driven neural network which estimates the transmission map by leveraging a new prior.
To adaptively learn the airlight, we build a neural module based on another new prior, i.e., the rendered hazy image and the exemplar are similar in the airlight distribution (a generic sketch of the underlying hazing model is given after this list).
arXiv Detail & Related papers (2021-07-14T13:15:14Z) - Learning to See Through Obstructions with Layered Decomposition [117.77024641706451]
We present a learning-based approach for removing unwanted obstructions from moving images.
Our method leverages motion differences between the background and obstructing elements to recover both layers.
We show that the proposed approach, learned from synthetically generated data, performs well on real images.
arXiv Detail & Related papers (2020-08-11T17:59:31Z) - Learning to Restore a Single Face Image Degraded by Atmospheric Turbulence using CNNs [93.72048616001064]
Images captured under such conditions suffer from a combination of geometric deformation and space-varying blur.
We present a deep learning-based solution to the problem of restoring a turbulence-degraded face image.
arXiv Detail & Related papers (2020-07-16T15:25:08Z) - You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network [63.2086502120071]
We study how to make deep learning achieve image dehazing without training on ground-truth clean images (unsupervised) or on an image collection (untrained).
An unsupervised neural network avoids the labor-intensive collection of hazy-clean image pairs, and an untrained model is a "real" single image dehazing approach.
Motivated by the layer disentanglement idea, we propose a novel method, called you only look yourself (YOLY), which could be one of the first unsupervised and untrained neural networks for image dehazing.
arXiv Detail & Related papers (2020-06-30T14:05:47Z) - Learning to See Through Obstructions [117.77024641706451]
We present a learning-based approach for removing unwanted obstructions from a short sequence of images captured by a moving camera.
Our method leverages the motion differences between the background and the obstructing elements to recover both layers.
We show that training on synthetically generated data transfers well to real images.
arXiv Detail & Related papers (2020-04-02T17:59:12Z)
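For reference, the hazy-image rendering referred to in the HazeGEN entry above rests on the standard atmospheric scattering model. The sketch below is a generic instance of that model, not HazeGEN's learned renderer; in HazeGEN both the transmission map and the airlight are estimated by prior-guided neural modules, whereas here they are simply assumed to be given.

```python
import numpy as np

def render_haze(clean, transmission, airlight):
    """Synthesize a hazy image via I = J * t + A * (1 - t).

    `clean` is an HxWx3 image in [0, 1], `transmission` is HxW, and
    `airlight` is a length-3 vector; all three are assumed inputs here.
    """
    t = transmission[..., None]
    hazy = clean * t + airlight * (1.0 - t)
    return np.clip(hazy, 0.0, 1.0)
```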
This list is automatically generated from the titles and abstracts of the papers in this site.