SyreaNet: A Physically Guided Underwater Image Enhancement Framework
Integrating Synthetic and Real Images
- URL: http://arxiv.org/abs/2302.08269v2
- Date: Thu, 25 May 2023 23:21:33 GMT
- Title: SyreaNet: A Physically Guided Underwater Image Enhancement Framework
Integrating Synthetic and Real Images
- Authors: Junjie Wen, Jinqiang Cui, Zhenjun Zhao, Ruixin Yan, Zhi Gao, Lihua
Dou, Ben M. Chen
- Abstract summary: Underwater image enhancement (UIE) is vital for high-level vision-related underwater tasks.
We propose a framework SyreaNet for UIE that integrates both synthetic and real data.
- Score: 17.471353846746474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater image enhancement (UIE) is vital for high-level vision-related
underwater tasks. Although learning-based UIE methods have made remarkable
progress in recent years, they still struggle to handle varied underwater
conditions consistently, for two main reasons: 1) the simplified atmospheric
image formation model commonly used in UIE can introduce severe errors; 2) a
network trained solely on synthetic images may generalize poorly to real
underwater images. In this work, we propose, for the first time, a framework
SyreaNet for UIE that integrates both synthetic and real data under the
guidance of the revised underwater image formation model and novel domain
adaptation (DA) strategies.
First, an underwater image synthesis module based on the revised model is
proposed. Then, a physically guided disentangled network is designed to predict
clear images by combining both synthetic and real underwater images. The
intra- and inter-domain gaps are bridged by fully exchanging domain knowledge.
Extensive experiments demonstrate the superiority of our framework
over other state-of-the-art (SOTA) learning-based UIE methods qualitatively and
quantitatively. The code and dataset are publicly available at
https://github.com/RockWenJJ/SyreaNet.git.
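The revised underwater image formation model guiding the framework is not reproduced in this summary, but the widely cited revision (in the spirit of Akkaynak and Treibitz) attenuates the direct signal and the backscatter with separate, wavelength-dependent coefficients. The sketch below is only an illustration of how such a synthesis step could look; the function name and all coefficient values are assumptions, not values from the paper.

```python
import numpy as np

def synthesize_underwater(clear, depth, beta_d, beta_b, b_inf):
    """Render a synthetic underwater image from a clear image and a depth map.

    Revised formation model (direct and backscatter terms use separate,
    wavelength-dependent attenuation coefficients):
        I_c = J_c * exp(-beta_d_c * z) + B_inf_c * (1 - exp(-beta_b_c * z))

    clear : (H, W, 3) float array in [0, 1], the in-air image J
    depth : (H, W) float array, scene range z in metres
    beta_d, beta_b, b_inf : per-channel (3,) coefficients (illustrative)
    """
    z = depth[..., None]                                # broadcast over channels
    direct = clear * np.exp(-beta_d * z)                # attenuated signal
    backscatter = b_inf * (1.0 - np.exp(-beta_b * z))   # veiling light
    return np.clip(direct + backscatter, 0.0, 1.0)

# Illustrative coefficients: red light attenuates fastest underwater.
beta_d = np.array([0.45, 0.12, 0.10])   # direct-signal attenuation (1/m)
beta_b = np.array([0.30, 0.15, 0.12])   # backscatter coefficient (1/m)
b_inf = np.array([0.05, 0.35, 0.45])    # bluish-green veiling light

rng = np.random.default_rng(0)
clear = rng.random((4, 4, 3))
depth = np.full((4, 4), 5.0)            # flat scene 5 m away
underwater = synthesize_underwater(clear, depth, beta_d, beta_b, b_inf)
```

At 5 m range the red channel is dominated by backscatter while blue-green survives, reproducing the characteristic underwater color cast that the enhancement network must invert.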
Related papers
- Physics-Inspired Synthesized Underwater Image Dataset [9.959844922120528]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
arXiv: 2024-04-05
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
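The abstract does not spell out DGNet's exact update rule; as an illustration only, refreshing pseudo-labels from the network's own predictions is often implemented as an exponential moving average, so that the training target drifts toward the latest output instead of staying fixed. The function name and momentum value below are assumptions, not the paper's method.

```python
import numpy as np

def update_pseudo_labels(pseudo, prediction, momentum=0.9):
    """Blend the current prediction into the stored pseudo-label (EMA).

    A generic stand-in for 'using predicted images to dynamically update
    pseudo-labels': each step keeps `momentum` of the old target and mixes
    in the rest from the network's newest output.
    """
    return momentum * pseudo + (1.0 - momentum) * prediction

pseudo = np.zeros((2, 2, 3))   # initial pseudo-label (e.g. a coarse enhancement)
pred = np.ones((2, 2, 3))      # network output, held fixed for illustration
for _ in range(3):
    pseudo = update_pseudo_labels(pseudo, pred)
# after 3 steps every entry equals 1 - 0.9**3 = 0.271
```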
arXiv: 2023-12-12
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear, visually pleasing images is a widespread concern, and the task of underwater image enhancement (UIE) has emerged to meet it.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv: 2023-06-15
- MetaUE: Model-based Meta-learning for Underwater Image Enhancement [25.174894007563374]
This paper proposes a model-based deep learning method for restoring clean images under various underwater scenarios.
The meta-learning strategy is used to obtain a pre-trained model on the synthetic underwater dataset.
The model is then fine-tuned on real underwater datasets to obtain a reliable underwater image enhancement model, called MetaUE.
arXiv: 2023-03-12
- Domain Adaptation for Underwater Image Enhancement via Content and Style Separation [7.077978580799124]
Underwater images suffer from color cast, low contrast and haze due to light absorption, refraction and scattering.
Recent learning-based methods demonstrate astonishing performance on underwater image enhancement.
We propose a domain adaptation framework for underwater image enhancement via content and style separation.
arXiv: 2022-02-17
- Domain Adaptation for Underwater Image Enhancement [51.71570701102219]
We propose a novel Two-phase Underwater Domain Adaptation network (TUDA) to minimize the inter-domain and intra-domain gap.
In the first phase, a new dual-alignment network is designed, including a translation part for enhancing realism of input images, followed by an enhancement part.
In the second phase, we perform an easy-hard classification of real data according to the assessed quality of enhanced images, where a rank-based underwater quality assessment method is embedded.
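The rank-based quality assessment metric itself is not given in this summary; assuming some scalar quality score is available per enhanced image, TUDA-style easy-hard partitioning of real data could be sketched as follows (the ratio and names are illustrative):

```python
def easy_hard_split(scores, ratio=0.5):
    """Split real samples into 'easy' and 'hard' sets by assessed quality.

    scores : dict mapping sample id -> quality score of its enhanced image
             (higher = better; the ranking metric itself is assumed here).
    The top `ratio` fraction are treated as easy (reliable enhancements),
    the remainder as hard cases needing further adaptation.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    cut = max(1, int(len(ranked) * ratio))
    return ranked[:cut], ranked[cut:]

scores = {"a": 0.9, "b": 0.4, "c": 0.7, "d": 0.2}
easy, hard = easy_hard_split(scores)
# easy -> ["a", "c"], hard -> ["b", "d"]
```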
arXiv: 2021-08-22
- Single Underwater Image Enhancement Using an Analysis-Synthesis Network [21.866940227491146]
Most deep models for underwater image enhancement resort to training on synthetic datasets based on underwater image formation models.
A new underwater synthetic dataset is first established, in which a revised ambient light synthesis equation is embedded.
A unified framework, named ANA-SYN, effectively enhances underwater images through the collaboration of priors and data information.
arXiv: 2021-08-20
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
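Maximizing mutual information between raw and restored images via contrastive learning is commonly realized with an InfoNCE-style objective: a raw-image patch should agree with the restored patch at the same location more than with patches elsewhere. The sketch below assumes precomputed patch features and is not the paper's exact loss.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.07):
    """InfoNCE loss on L2-normalized feature vectors.

    anchor    : (D,) feature of a raw-image patch
    positive  : (D,) feature of the restored patch at the same location
    negatives : (N, D) features of restored patches elsewhere
    A low loss means the anchor matches its positive far better than the
    negatives, a standard lower-bound proxy for mutual information.
    """
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = norm(anchor), norm(positive), norm(negatives)
    logits = np.concatenate([[a @ p], n @ a]) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # positive sits at index 0

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)    # nearly identical feature
negatives = rng.normal(size=(4, 8))              # unrelated patches
loss = info_nce(anchor, positive, negatives)     # small: pair is well matched
```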
arXiv: 2021-06-20
- Domain Adaptation for Image Dehazing [72.15994735131835]
Most existing methods train a dehazing model on synthetic hazy images, so the model generalizes poorly to real hazy images due to domain shift.
We propose a domain adaptation paradigm, which consists of an image translation module and two image dehazing modules.
Experimental results on both synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms.
arXiv: 2020-05-10
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv: 2020-04-01
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.