Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN
- URL: http://arxiv.org/abs/2212.12116v1
- Date: Fri, 23 Dec 2022 03:00:28 GMT
- Title: Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN
- Authors: Yaozong Mo, Chaofeng Li, Wenqi Ren, Shaopeng Shang, Wenwu Wang, and
Xiao-jun Wu
- Abstract summary: We propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater scenes.
The proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
- Score: 60.257791714663725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based methods have achieved significant performance for image
defogging. However, existing methods are mainly developed for land scenes and
perform poorly when dealing with overwater foggy images, since overwater scenes
typically contain large expanses of sky and water. In this work, we propose a
Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater
scenes. To promote the recovery of objects on the water, two loss functions
are exploited for the network, where a prior map is designed by inverting the
dark channel and applying min-max normalization to suppress the sky and
emphasize objects. However, due to the unpaired training set, the network may
learn an under-constrained domain mapping from foggy to fog-free image, leading
to artifacts and loss of details. Thus, we propose an intuitive Upscaling
Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC)
to mitigate this issue. Extensive experiments on qualitative and quantitative
comparisons demonstrate that the proposed method outperforms the
state-of-the-art supervised, semi-supervised, and unsupervised defogging
approaches.
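
As a rough illustration of the prior map described in the abstract, the sketch below inverts a dark channel and applies min-max normalization. The patch size, the local-minimum implementation, and the epsilon in the normalization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def prior_map(image, patch_size=15):
    """Minimal sketch of a prior map: invert the dark channel, then min-max normalize.

    `image` is an H x W x 3 array in [0, 1]; `patch_size` is an assumed
    neighbourhood size, not a value stated in the paper.
    """
    # Dark channel: per-pixel channel minimum followed by a local minimum filter.
    dark = minimum_filter(image.min(axis=2), size=patch_size)
    # Invert so that bright sky/water regions (high dark-channel values) are suppressed
    # and darker objects on the water are emphasized.
    inverted = 1.0 - dark
    # Min-max normalization to [0, 1]; small epsilon avoids division by zero.
    return (inverted - inverted.min()) / (inverted.max() - inverted.min() + 1e-8)
```

In such a map, large sky and water regions are pushed toward zero while objects on the water receive higher weights, which matches the abstract's stated goal of suppressing the sky and emphasizing objects.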
Related papers
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images exhibit evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- Physics-Aware Semi-Supervised Underwater Image Enhancement [7.634972737905042]
We leverage both the physics-based underwater Image Formation Model (IFM) and deep learning techniques for Underwater Image Enhancement (UIE).
We propose a novel Physics-Aware Dual-Stream Underwater Image Enhancement Network, i.e., PA-UIENet, which comprises a Transmission Estimation Stream (T-Stream) and an Ambient Light Estimation Stream (A-Stream).
Our method performs better than, or at least comparably to, eight baselines across five testing sets in the degradation estimation and UIE tasks.
arXiv Detail & Related papers (2023-07-21T10:10:18Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with
Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- FG-Depth: Flow-Guided Unsupervised Monocular Depth Estimation [17.572459787107427]
We propose a flow distillation loss to replace the typical photometric loss and a prior flow based mask to remove invalid pixels.
Our approach achieves state-of-the-art results on both KITTI and NYU-Depth-v2 datasets.
arXiv Detail & Related papers (2023-01-20T04:02:13Z)
- Single image dehazing via combining the prior knowledge and CNNs [6.566615606042994]
An end-to-end system is proposed in this paper to reduce defects by combining prior knowledge with a deep learning method.
Experiments show that the proposed method achieves superior performance over existing methods.
arXiv Detail & Related papers (2021-11-10T14:18:25Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world
Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
- Wavelength-based Attributed Deep Neural Network for Underwater Image
Restoration [9.378355457555319]
This paper shows that attributing the right receptive field size (context) based on the traversing range of the color channel may lead to a substantial performance gain.
As a second novelty, we have incorporated an attentive skip mechanism to adaptively refine the learned multi-contextual features.
The proposed framework, called Deep WaveNet, is optimized using the traditional pixel-wise and feature-based cost functions.
arXiv Detail & Related papers (2021-06-15T06:47:51Z)
- Progressive Depth Learning for Single Image Dehazing [56.71963910162241]
Existing dehazing methods often ignore the depth cues and fail in distant areas where heavier haze disturbs the visibility.
We propose a deep end-to-end model that iteratively estimates image depths and transmission maps.
Our approach benefits from explicitly modeling the inner relationship between image depth and the transmission map, which is especially effective for distant hazy areas (the underlying depth-transmission relation is sketched below).
arXiv Detail & Related papers (2021-02-21T05:24:18Z)
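
Several of the dehazing and enhancement papers above (notably the physics-aware UIE entry and the progressive depth learning entry) build on the standard scattering-based image formation model I(x) = J(x) t(x) + A (1 - t(x)), with transmission related to depth by t(x) = exp(-beta d(x)). The minimal sketch below shows how estimated transmission and ambient light recover the scene; the function names, the t_min clamp, and the single global ambient light are illustrative assumptions, not details from any of the listed papers.

```python
import numpy as np

def transmission_from_depth(depth, beta=1.0):
    """Beer-Lambert relation between scene depth and transmission: t = exp(-beta * d)."""
    return np.exp(-beta * depth)

def recover_scene(image, transmission, ambient_light, t_min=0.1):
    """Invert I = J * t + A * (1 - t) to recover the haze-free scene J."""
    # image: H x W x 3 in [0, 1]; transmission: H x W; ambient_light: length-3 vector.
    t = np.clip(transmission, t_min, 1.0)[..., None]  # clamp to avoid division by near-zero
    return (image - ambient_light) / t + ambient_light
```

Estimating the transmission and ambient light themselves is the subject of the papers listed above and is not reproduced here.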