You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing
Neural Network
- URL: http://arxiv.org/abs/2006.16829v1
- Date: Tue, 30 Jun 2020 14:05:47 GMT
- Title: You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing
Neural Network
- Authors: Boyun Li, Yuanbiao Gou, Shuhang Gu, Jerry Zitao Liu, Joey Tianyi Zhou,
Xi Peng
- Abstract summary: We study how to make deep learning achieve image dehazing without training on the ground-truth clean image (unsupervised) and an image collection (untrained).
An unsupervised neural network avoids the labor-intensive collection of hazy-clean image pairs, and an untrained model is a "real" single image dehazing approach.
Motivated by the layer disentanglement idea, we propose a novel method, called You Only Look Yourself (YOLY), which could be one of the first unsupervised and untrained neural networks for image dehazing.
- Score: 63.2086502120071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study two challenging and less-touched problems in single
image dehazing, namely, how to make deep learning achieve image dehazing
without training on the ground-truth clean image (unsupervised) and an image collection (untrained). An unsupervised neural network avoids the labor-intensive collection of hazy-clean image pairs, and an untrained model is a "real" single image dehazing approach that removes haze based only on the observed hazy image itself, with no extra images used. Motivated by the layer disentanglement idea, we propose a novel method, called You Only Look Yourself (YOLY), which could be one of the first unsupervised and untrained neural networks for image dehazing. In brief, YOLY employs three joint subnetworks to separate the observed hazy image into several latent layers, i.e., a scene radiance layer, a transmission map layer, and an atmospheric light layer. These three layers are then recomposed into the hazy image in a self-supervised manner. Thanks to its unsupervised and untrained characteristics, YOLY bypasses the conventional training paradigm of deep models on hazy-clean pairs or a large-scale dataset, thus avoiding labor-intensive data collection and the domain shift issue. Besides, our method also provides an effective learning-based haze transfer solution thanks to its layer disentanglement mechanism. Extensive experiments show the promising performance of our method in image dehazing compared with 14 methods on four databases.
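A minimal sketch of the layer-disentanglement and self-supervised recomposition idea described above, written in PyTorch. The subnetwork architectures and the plain reconstruction loss are illustrative assumptions rather than the authors' implementation (YOLY additionally regularizes each layer); only the recomposition I = J * t + A * (1 - t) follows the standard atmospheric scattering model.

```python
import torch
import torch.nn as nn

def tiny_branch(out_channels):
    # Placeholder convolutional branch; the actual YOLY subnetworks differ.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_channels, 3, padding=1), nn.Sigmoid(),
    )

class LayerDisentangleNet(nn.Module):
    """Three joint subnetworks: scene radiance J, transmission t, airlight A."""
    def __init__(self):
        super().__init__()
        self.j_net = tiny_branch(3)  # scene radiance layer
        self.t_net = tiny_branch(1)  # transmission map layer
        self.a_net = tiny_branch(3)  # atmospheric light layer

    def forward(self, hazy):
        J, t, A = self.j_net(hazy), self.t_net(hazy), self.a_net(hazy)
        recomposed = J * t + A * (1.0 - t)  # atmospheric scattering model
        return J, t, A, recomposed

# Self-supervised fitting on a single hazy image: the only target is the
# observed image itself, so no clean ground truth or image collection is used.
hazy = torch.rand(1, 3, 64, 64)  # stand-in for a real normalized hazy photo
model = LayerDisentangleNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    J, t, A, recomposed = model(hazy)
    loss = nn.functional.mse_loss(recomposed, hazy)  # reconstruction term only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# After fitting, J is the estimated haze-free layer for this particular image;
# swapping in t and A fitted on another image gives a crude haze-transfer variant.
```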
Related papers
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, there has been inadequate exploration dedicated to unsupervised learning on diffusion-generated images.
We introduce customized solutions by fully exploiting the aforementioned free attention masks.
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- SCANet: Self-Paced Semi-Curricular Attention Network for Non-Homogeneous Image Dehazing [56.900964135228435]
Existing homogeneous dehazing methods struggle to handle the non-uniform distribution of haze in a robust manner.
We propose a novel self-paced semi-curricular attention network, called SCANet, for non-homogeneous image dehazing.
Our approach consists of an attention generator network and a scene reconstruction network.
arXiv Detail & Related papers (2023-04-17T17:05:29Z)
- Non-aligned supervision for Real Image Dehazing [25.078264991940806]
We propose an innovative dehazing framework that operates under non-aligned supervision.
In particular, we explore a non-alignment scenario in which a clear reference image, unaligned with the input hazy image, is used to supervise the dehazing network.
Our scenario makes it easier to collect hazy/clear image pairs in real-world environments, even under conditions of misalignment and shift views.
arXiv Detail & Related papers (2023-03-08T23:23:44Z)
- Single image dehazing via combining the prior knowledge and CNNs [6.566615606042994]
An end-to-end system is proposed in this paper to reduce defects by combining prior knowledge and deep learning methods.
Experiments show that the proposed method achieves superior performance over existing methods.
arXiv Detail & Related papers (2021-11-10T14:18:25Z)
- From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real Data [58.50411487497146]
We propose a novel image dehazing framework collaborating with unlabeled real data.
First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps.
Then a disentangled-consistency mean-teacher network (DMT-Net) is employed to leverage unlabeled real data to boost single image dehazing.
arXiv Detail & Related papers (2021-08-06T04:00:28Z)
- Unsupervised Neural Rendering for Image Hazing [31.108654945661705]
Image hazing aims to render a hazy image from a given clean one, which could be applied to a variety of practical applications such as gaming, filming, photographic filtering, and image dehazing.
We propose a neural rendering method for image hazing, dubbed HazeGEN. To be specific, HazeGEN is a knowledge-driven neural network which estimates the transmission map by leveraging a new prior.
To adaptively learn the airlight, we build a neural module based on another new prior, i.e., the rendered hazy image and the exemplar are similar in the airlight distribution.
arXiv Detail & Related papers (2021-07-14T13:15:14Z)
- Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training [3.5788754401889014]
We propose a dehazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN) for single image de-hazing.
Our experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM.
arXiv Detail & Related papers (2020-08-15T02:43:00Z)
- Learning to See Through Obstructions with Layered Decomposition [117.77024641706451]
We present a learning-based approach for removing unwanted obstructions from moving images.
Our method leverages motion differences between the background and obstructing elements to recover both layers.
We show that the proposed approach, learned from synthetically generated data, performs well on real images.
arXiv Detail & Related papers (2020-08-11T17:59:31Z)
- Learning to See Through Obstructions [117.77024641706451]
We present a learning-based approach for removing unwanted obstructions from a short sequence of images captured by a moving camera.
Our method leverages the motion differences between the background and the obstructing elements to recover both layers.
We show that training on synthetically generated data transfers well to real images.
arXiv Detail & Related papers (2020-04-02T17:59:12Z)