PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with
Dual-Discriminators
- URL: http://arxiv.org/abs/2306.08918v1
- Date: Thu, 15 Jun 2023 07:41:12 GMT
- Authors: Runmin Cong, Wenyu Yang, Wei Zhang, Chongyi Li, Chun-Le Guo, Qingming
Huang, and Sam Kwong
- Abstract summary: Obtaining clear and visually pleasing underwater images has become a widespread need.
The task of underwater image enhancement (UIE) has emerged to meet it.
In this paper, we propose a physical model-guided GAN for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
- Score: 120.06891448820447
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Due to the light absorption and scattering induced by the water medium,
underwater images usually suffer from degradation problems such as low contrast,
color distortion, and blurred details, which aggravate the difficulty of
downstream underwater understanding tasks. Obtaining clear and visually pleasing
images is therefore a widespread need, and the task of underwater image
enhancement (UIE) has emerged to meet it. Among existing UIE methods, Generative
Adversarial Network (GAN) based methods perform well in visual aesthetics, while
physical model-based methods have better scene adaptability. Inheriting the
advantages of both types of models, we propose a physical model-guided GAN for
UIE in this paper, referred to as PUGAN. The entire network is built under the
GAN architecture. On the one hand, we design a Parameters Estimation subnetwork
(Par-subnet) to learn the parameters for physical model inversion, and use the
generated color-enhanced image as auxiliary information for the Two-Stream
Interaction Enhancement subnetwork (TSIE-subnet). Meanwhile, we design a
Degradation Quantization (DQ) module in the TSIE-subnet to quantize scene
degradation, thereby reinforcing the enhancement of key regions. On the other
hand, we design dual discriminators for a style-content adversarial constraint,
promoting the authenticity and visual aesthetics of the results. Extensive
experiments on three benchmark datasets demonstrate that PUGAN outperforms
state-of-the-art methods in both qualitative and quantitative metrics.
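The abstract does not spell out which formation model the Par-subnet inverts, but physical model-based UIE methods typically build on the simplified underwater image formation model I(x) = J(x) * t(x) + B * (1 - t(x)), where J is the scene radiance, t the per-pixel transmission, and B the veiling (background) light. As an illustrative sketch only (the function and parameter names here are hypothetical, not PUGAN's actual API), inverting this model given estimated t and B looks like:

```python
import numpy as np

def invert_formation_model(image, transmission, background_light, t_min=0.1):
    """Invert the simplified formation model I = J * t + B * (1 - t),
    solving for scene radiance J given estimates of the per-pixel
    transmission map t and the veiling light B."""
    t = np.maximum(transmission, t_min)  # clamp t to avoid division blow-up
    # Broadcast the HxW transmission map over the 3 color channels.
    restored = (image - background_light * (1.0 - t[..., None])) / t[..., None]
    return np.clip(restored, 0.0, 1.0)

# Toy example: a uniformly degraded 4x4 image with known parameters.
B = np.array([0.1, 0.4, 0.5])           # bluish-green veiling light
t = np.full((4, 4), 0.6)                # per-pixel transmission map
J = np.full((4, 4, 3), 0.8)             # true scene radiance
I = J * t[..., None] + B * (1.0 - t[..., None])  # synthesize degraded image
J_hat = invert_formation_model(I, t, B)
print(np.allclose(J_hat, J))            # True
```

In a learned pipeline such as PUGAN's Par-subnet, t and B are regressed by the network rather than given, and the inverted image serves only as auxiliary color guidance rather than the final output.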
Related papers
- UIE-UnFold: Deep Unfolding Network with Color Priors and Vision Transformer for Underwater Image Enhancement [27.535028176427623]
Underwater image enhancement (UIE) plays a crucial role in various marine applications.
Current learning-based approaches frequently lack explicit prior knowledge about the physical processes involved in underwater image formation.
This paper proposes a novel deep unfolding network (DUN) for UIE that integrates color priors and inter-stage feature incorporation.
arXiv Detail & Related papers (2024-08-20T08:48:33Z)
- A Physical Model-Guided Framework for Underwater Image Enhancement and Depth Estimation [19.204227769408725]
Existing underwater image enhancement approaches fail to accurately estimate imaging model parameters such as depth and veiling light.
We propose a model-guided framework for jointly training a Deep Degradation Model with any advanced UIE model.
Our framework achieves remarkable enhancement results across diverse underwater scenes.
arXiv Detail & Related papers (2024-07-05T03:10:13Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- RAUNE-Net: A Residual and Attention-Driven Underwater Image Enhancement Method [2.6645441842326756]
Underwater image enhancement (UIE) poses challenges due to distinctive properties of the underwater environment.
In this paper, we propose a more reliable and reasonable UIE network called RAUNE-Net.
Our method obtains promising objective performance and consistent visual results across various real-world underwater images.
arXiv Detail & Related papers (2023-11-01T03:00:07Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement [58.075720488942125]
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
arXiv Detail & Related papers (2022-11-19T07:50:34Z)
- Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding [88.46682991985907]
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting multiple color spaces embedding.
arXiv Detail & Related papers (2021-04-27T07:35:30Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Perceptual underwater image enhancement with deep learning and physical priors [35.37760003463292]
We propose two perceptual enhancement models, each of which uses a deep enhancement model with a detection perceptor.
Due to the lack of training data, a hybrid underwater image synthesis model, which fuses physical priors and data-driven cues, is proposed to synthesize training data.
Experimental results show the superiority of our proposed method over several state-of-the-art methods on both real-world and synthetic underwater datasets.
arXiv Detail & Related papers (2020-08-21T22:11:34Z)
- Domain Adaptive Adversarial Learning Based on Physics Model Feedback for Underwater Image Enhancement [10.143025577499039]
We propose a new robust adversarial learning framework via physics model based feedback control and domain adaptation mechanism for enhancing underwater images.
A new method for simulating underwater-like training dataset from RGB-D data by underwater image formation model is proposed.
Final enhanced results on synthetic and real underwater images demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2020-02-20T07:50:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.