From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real Data
- URL: http://arxiv.org/abs/2108.02934v1
- Date: Fri, 6 Aug 2021 04:00:28 GMT
- Title: From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real Data
- Authors: Ye Liu, Lei Zhu, Shunda Pei, Huazhu Fu, Jing Qin, Qing Zhang, Liang Wan, Wei Feng
- Abstract summary: We propose a novel image dehazing framework collaborating with unlabeled real data.
First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps.
Then a disentangled-consistency mean-teacher network (DMT-Net) is employed to collaborate with unlabeled real data to boost single image dehazing.
- Score: 58.50411487497146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single image dehazing is a challenging task, as the domain shift between synthetic training data and real-world test images usually degrades existing methods. To address this issue, we propose a novel image dehazing framework that collaborates with unlabeled real data. First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps, i.e., the latent haze-free image, the transmission map, and the global atmospheric light estimate, respecting the physical model of the haze process. Our DID-Net predicts the three component maps by progressively integrating features across scales, and refines each map by passing it through an independent refinement network. Then a disentangled-consistency mean-teacher network (DMT-Net) is employed to collaborate with unlabeled real data to boost single image dehazing. Specifically, we encourage the coarse predictions and refinements of each disentangled component to be consistent between the student and teacher networks by applying a consistency loss on unlabeled real data. We compare against 13 state-of-the-art dehazing methods on a newly collected dataset (Haze4K) and two widely used dehazing datasets (i.e., SOTS and HazeRD), as well as on real-world hazy images. Experimental results demonstrate that our method achieves clear quantitative and qualitative improvements over existing methods.
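As context for the physical model referenced above: the standard atmospheric scattering model writes a hazy image I as I(x) = J(x) * t(x) + A * (1 - t(x)), where J is the latent haze-free image, t the transmission map, and A the global atmospheric light. The sketch below is a minimal, hypothetical PyTorch illustration of how disentangled predictions of (J, t, A) can be recomposed under this model and regularized with a mean-teacher consistency loss on unlabeled real images; all names (DIDNet, recompose, ema_update, lambda_c) are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch (PyTorch) of disentangled dehazing with a mean-teacher
# consistency loss. Hypothetical names; not the authors' released code.
import torch
import torch.nn.functional as F


def recompose(J, t, A):
    """Re-render a hazy image from the three disentangled components
    via the atmospheric scattering model: I = J * t + A * (1 - t)."""
    # J: (B, 3, H, W) latent haze-free image
    # t: (B, 1, H, W) transmission map in [0, 1]
    # A: (B, 3, 1, 1) global atmospheric light
    return J * t + A * (1.0 - t)


@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    """Mean-teacher update: teacher weights track an exponential
    moving average (EMA) of the student weights."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)


def consistency_loss(student_maps, teacher_maps):
    """Encourage each disentangled component (coarse and refined) to
    agree between student and teacher on unlabeled real images."""
    return sum(F.mse_loss(s, t.detach())
               for s, t in zip(student_maps, teacher_maps))


# Usage sketch for one semi-supervised training step (DIDNet, optimizer,
# and the data batches are assumed to exist):
#   J, t, A = student(synthetic_hazy)                 # supervised branch
#   sup = F.l1_loss(J, gt_clear) \
#       + F.l1_loss(recompose(J, t, A), synthetic_hazy)
#   unsup = consistency_loss(student(real_hazy), teacher(real_hazy))
#   (sup + lambda_c * unsup).backward()
#   optimizer.step(); optimizer.zero_grad()
#   ema_update(teacher, student)
```

The design intent, per the abstract, is that the supervised branch learns from synthetic pairs while the consistency term applies only to unlabeled real images, so the teacher (an EMA of the student) provides stable targets across the synthetic-to-real domain gap.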
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z) - Hardness-Aware Scene Synthesis for Semi-Supervised 3D Object Detection [59.33188668341604]
3D object detection serves as the fundamental task of autonomous driving perception.
It is costly to obtain high-quality annotations for point cloud data.
We propose a hardness-aware scene synthesis (HASS) method to generate adaptive synthetic scenes.
arXiv Detail & Related papers (2024-05-27T17:59:23Z) - Learning Zero-Shot Material States Segmentation, by Implanting Natural Image Patterns in Synthetic Data [0.555174246084229]
This work aims to bridge the gap by infusing patterns automatically extracted from real-world images into synthetic data.
We present the first comprehensive benchmark for zero-shot material state segmentation.
We also share 300,000 extracted textures and SVBRDF/PBR materials to facilitate future synthetic-data generation.
arXiv Detail & Related papers (2024-03-05T20:21:49Z) - Achieving Domain Robustness in Stereo Matching Networks by Removing Shortcut Learning [14.497880004212979]
We show that feature learning in the synthetic domain is heavily influenced by two "shortcuts" present in the synthetic data.
We show that by removing these shortcuts, we can achieve domain robustness in state-of-the-art stereo matching frameworks.
arXiv Detail & Related papers (2021-06-15T23:22:54Z) - Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training [3.5788754401889014]
We propose a Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN) for single image de-hazing.
Our experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM.
arXiv Detail & Related papers (2020-08-15T02:43:00Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z) - Syn2Real Transfer Learning for Image Deraining using Gaussian Processes [92.15895515035795]
CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality.
Due to the difficulty of obtaining fully labeled real-world deraining datasets, existing methods are trained only on synthetically generated data.
We propose a Gaussian Process-based semi-supervised learning framework that enables the network to learn deraining from a synthetic dataset.
arXiv Detail & Related papers (2020-06-10T00:33:18Z) - FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Network with a Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.