Domain Adaptation for Image Dehazing
- URL: http://arxiv.org/abs/2005.04668v1
- Date: Sun, 10 May 2020 13:54:56 GMT
- Title: Domain Adaptation for Image Dehazing
- Authors: Yuanjie Shao, Lerenhan Li, Wenqi Ren, Changxin Gao and Nong Sang
- Abstract summary: Most existing methods train a dehazing model on synthetic hazy images; such models generalize poorly to real hazy images due to domain shift.
We propose a domain adaptation paradigm, which consists of an image translation module and two image dehazing modules.
Experimental results on both synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms.
- Score: 72.15994735131835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image dehazing using learning-based methods has achieved state-of-the-art
performance in recent years. However, most existing methods train a dehazing
model on synthetic hazy images, and such models generalize poorly to real hazy
images due to domain shift. To address this issue, we propose a domain
adaptation paradigm that consists of an image translation module and two image
dehazing modules. Specifically, we first apply a bidirectional translation
network to bridge the gap between the synthetic and real domains by translating
images from one domain to the other. We then use the images before and after
translation to train the two proposed image dehazing networks with a
consistency constraint. In this phase, we incorporate real hazy images into the
dehazing training by exploiting properties of clear images (e.g., the dark
channel prior and image gradient smoothing) to further improve domain
adaptivity. By training the image translation and dehazing networks in an
end-to-end manner, both image translation and dehazing are improved.
Experimental results on both synthetic and real-world images demonstrate that
our model performs favorably against state-of-the-art dehazing algorithms.
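To make the role of the clear-image priors concrete, the sketch below shows how a dark channel prior loss and a gradient smoothing loss could be computed on dehazed predictions, assuming a PyTorch setup. The function names, patch size, and loss weights are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code): dark channel prior and gradient
# smoothing losses that could regularize dehazed outputs on real hazy images.
import torch
import torch.nn.functional as F

def dark_channel(img: torch.Tensor, patch_size: int = 15) -> torch.Tensor:
    """Dark channel of an NCHW image in [0, 1]: per-pixel channel minimum,
    followed by a local minimum over a patch_size x patch_size window."""
    min_over_channels, _ = img.min(dim=1, keepdim=True)  # N x 1 x H x W
    pad = patch_size // 2
    # A local minimum is a max-pool applied to the negated image.
    return -F.max_pool2d(-min_over_channels, kernel_size=patch_size,
                         stride=1, padding=pad)

def dark_channel_loss(dehazed: torch.Tensor) -> torch.Tensor:
    """Encourage the dark channel of the dehazed output to be near zero,
    as expected for haze-free natural images."""
    return dark_channel(dehazed).abs().mean()

def gradient_smoothing_loss(dehazed: torch.Tensor) -> torch.Tensor:
    """Total-variation-style penalty on horizontal and vertical gradients."""
    dx = (dehazed[:, :, :, 1:] - dehazed[:, :, :, :-1]).abs().mean()
    dy = (dehazed[:, :, 1:, :] - dehazed[:, :, :-1, :]).abs().mean()
    return dx + dy

def real_domain_prior_loss(dehazed_real: torch.Tensor,
                           w_dc: float = 1e-2,
                           w_tv: float = 1e-3) -> torch.Tensor:
    """Combined prior loss for the real-domain branch (weights are assumed)."""
    return (w_dc * dark_channel_loss(dehazed_real)
            + w_tv * gradient_smoothing_loss(dehazed_real))
```

Per the abstract, such prior terms would apply to the real-domain dehazing branch, alongside the consistency constraint between the two dehazing networks' outputs on corresponding images before and after translation.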
Related papers
- Enhanced Unsupervised Image-to-Image Translation Using Contrastive Learning and Histogram of Oriented Gradients [0.0]
This paper proposes an enhanced unsupervised image-to-image translation method based on the Contrastive Unpaired Translation (CUT) model.
This novel approach ensures the preservation of the semantic structure of images, even without semantic labels.
The method was tested on translating synthetic game environments from GTA5 dataset to realistic urban scenes in cityscapes dataset.
arXiv Detail & Related papers (2024-09-24T12:44:27Z) - Mutual Learning for Domain Adaptation: Self-distillation Image Dehazing
Network with Sample-cycle [7.452382358080454]
We propose a mutual learning dehazing framework for domain adaptation.
Specifically, we first devise two siamese networks: a teacher network in the synthetic domain and a student network in the real domain.
We show that the framework outperforms state-of-the-art dehazing techniques in terms of subjective and objective evaluation.
arXiv Detail & Related papers (2022-03-17T16:32:14Z) - Multi-domain Unsupervised Image-to-Image Translation with Appearance
Adaptive Convolution [62.4972011636884]
We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework.
We exploit the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance.
We show that the proposed method produces visually diverse and plausible results in multiple domains compared to the state-of-the-art methods.
arXiv Detail & Related papers (2022-02-06T14:12:34Z) - Semantic Consistency in Image-to-Image Translation for Unsupervised
Domain Adaptation [22.269565708490465]
Unsupervised Domain Adaptation (UDA) aims to adapt models trained on a source domain to a new target domain where no labelled data is available.
We propose a semantically consistent image-to-image translation method in combination with a consistency regularisation method for UDA.
arXiv Detail & Related papers (2021-11-05T14:22:20Z) - Self-Supervised Learning of Domain Invariant Features for Depth
Estimation [35.74969527929284]
We tackle the problem of unsupervised synthetic-to-realistic domain adaptation for single image depth estimation.
An essential building block of single image depth estimation is an encoder-decoder task network that takes RGB images as input and produces depth maps as output.
We propose a novel training strategy to force the task network to learn domain invariant representations in a self-supervised manner.
arXiv Detail & Related papers (2021-06-04T16:45:48Z) - Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2
Network [73.5062435623908]
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and target domain.
arXiv Detail & Related papers (2020-10-12T13:51:40Z) - Image-to-image Mapping with Many Domains by Sparse Attribute Transfer [71.28847881318013]
Unsupervised image-to-image translation consists of learning a pair of mappings between two domains without known pairwise correspondences between points.
The current convention is to approach this task with cycle-consistent GANs.
We propose an alternate approach that directly restricts the generator to performing a simple sparse transformation in a latent layer.
arXiv Detail & Related papers (2020-06-23T19:52:23Z) - Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-03-27T21:45:41Z) - CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)