Single Underwater Image Enhancement Using an Analysis-Synthesis Network
- URL: http://arxiv.org/abs/2108.09023v1
- Date: Fri, 20 Aug 2021 06:29:12 GMT
- Title: Single Underwater Image Enhancement Using an Analysis-Synthesis Network
- Authors: Zhengyong Wang, Liquan Shen, Mei Yu, Yufei Lin and Qiuyu Zhu
- Abstract summary: Most deep models for underwater image enhancement resort to training on synthetic datasets based on underwater image formation models.
A new underwater synthetic dataset is first established, in which a revised ambient light synthesis equation is embedded.
- A unified framework, named ANA-SYN, effectively enhances underwater images through the collaboration of priors and data information.
- Score: 21.866940227491146
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most deep models for underwater image enhancement resort to training on
synthetic datasets based on underwater image formation models. Although
promising performances have been achieved, they are still limited by two
problems: (1) existing underwater image synthesis models have an intrinsic
limitation, in which the homogeneous ambient light is usually randomly
generated and many important dependencies are ignored, and thus the synthesized
training data cannot adequately express characteristics of real underwater
environments; (2) most deep models disregard many favorable underwater
priors and rely heavily on training data, which severely limits their range of
application. To address these limitations, a new underwater synthetic
dataset is first established, in which a revised ambient light synthesis
equation is embedded. The revised equation explicitly defines the complex
mathematical relationship among intensity values of the ambient light in RGB
channels and many dependencies such as surface-object depth, water type, etc.,
which helps to better simulate real underwater scene appearances. Second, a
unified framework named ANA-SYN is proposed, which effectively enhances
underwater images through the collaboration of priors (underwater domain knowledge)
and data information (underwater distortion distribution). The proposed
framework comprises an analysis network and a synthesis network, one for prior
exploration and the other for prior integration. To exploit more accurate
priors, the significance of each prior for the input image is explored in the
analysis network and an adaptive weighting module is designed to dynamically
recalibrate them. Meanwhile, a novel prior guidance module is introduced in the
synthesis network, which effectively aggregates the prior and data features and
thus provides better hybrid information for more reasonable image enhancement.
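For context, synthesis-based pipelines of this kind generally build on the simplified underwater image formation model below; the abstract does not reproduce the paper's revised ambient light equation, which instead ties the ambient light term to dependencies such as surface-object depth and water type rather than drawing it at random.

```latex
% Simplified underwater image formation model commonly used to synthesize
% training pairs (background only; not the paper's revised equation).
% I_c(x): observed image, J_c(x): clear scene radiance, B_c: homogeneous
% ambient light, \beta_c: attenuation coefficient, d(x): scene depth,
% c \in \{R, G, B\}.
\[
  I_c(x) = J_c(x)\, t_c(x) + B_c \bigl(1 - t_c(x)\bigr),
  \qquad t_c(x) = e^{-\beta_c\, d(x)}
\]
```

The adaptive weighting module described for the analysis network can be pictured roughly as follows; this is a minimal, hypothetical sketch of the idea (per-prior significance weights that recalibrate a stack of prior maps), with illustrative names, not the authors' implementation.

```python
# Hypothetical sketch: score the relevance of each prior map for the input
# image and rescale the priors before they are fused with data features.
import torch
import torch.nn as nn

class AdaptivePriorWeighting(nn.Module):
    """Predicts one weight per prior map and recalibrates the prior stack."""
    def __init__(self, num_priors: int, hidden: int = 16):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # global context per prior map
            nn.Conv2d(num_priors, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, num_priors, 1),
            nn.Sigmoid(),                     # weights in (0, 1)
        )

    def forward(self, priors: torch.Tensor) -> torch.Tensor:
        # priors: (B, num_priors, H, W), e.g. stacked dark-channel,
        # saturation and depth-like maps computed from the input image
        w = self.score(priors)                # (B, num_priors, 1, 1)
        return priors * w                     # recalibrated priors

if __name__ == "__main__":
    priors = torch.rand(2, 3, 64, 64)         # toy stack of 3 prior maps
    out = AdaptivePriorWeighting(num_priors=3)(priors)
    print(out.shape)                          # torch.Size([2, 3, 64, 64])
```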
Related papers
- UIE-UnFold: Deep Unfolding Network with Color Priors and Vision Transformer for Underwater Image Enhancement [27.535028176427623]
Underwater image enhancement (UIE) plays a crucial role in various marine applications.
Current learning-based approaches frequently lack explicit prior knowledge about the physical processes involved in underwater image formation.
This paper proposes a novel deep unfolding network (DUN) for UIE that integrates color priors and inter-stage feature incorporation.
arXiv Detail & Related papers (2024-08-20T08:48:33Z)
- Physics-Inspired Synthesized Underwater Image Dataset [9.959844922120528]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
arXiv Detail & Related papers (2024-04-05T10:23:10Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- SyreaNet: A Physically Guided Underwater Image Enhancement Framework Integrating Synthetic and Real Images [17.471353846746474]
Underwater image enhancement (UIE) is vital for high-level vision-related underwater tasks.
We propose a framework, SyreaNet, for UIE that integrates both synthetic and real data.
arXiv Detail & Related papers (2023-02-16T12:57:52Z)
- Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement [58.075720488942125]
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
arXiv Detail & Related papers (2022-11-19T07:50:34Z)
- DeepRM: Deep Recurrent Matching for 6D Pose Refinement [77.34726150561087]
DeepRM is a novel recurrent network architecture for 6D pose refinement.
The architecture incorporates LSTM units to propagate information through each refinement step.
DeepRM achieves state-of-the-art performance on two widely accepted challenging datasets.
arXiv Detail & Related papers (2022-05-28T16:18:08Z)
- Domain Adaptation for Underwater Image Enhancement [51.71570701102219]
We propose a novel Two-phase Underwater Domain Adaptation network (TUDA) to minimize the inter-domain and intra-domain gap.
In the first phase, a new dual-alignment network is designed, including a translation part for enhancing realism of input images, followed by an enhancement part.
In the second phase, we perform an easy-hard classification of real data according to the assessed quality of enhanced images, where a rank-based underwater quality assessment method is embedded.
arXiv Detail & Related papers (2021-08-22T06:38:19Z)
- Shallow-UWnet: Compressed Model for Underwater Image Enhancement [0.0]
We propose a shallow neural network architecture, Shallow-UWnet, which maintains performance with fewer parameters than state-of-the-art models.
We also benchmark our model on a combination of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-01-06T14:49:29Z)
- Perceptual underwater image enhancement with deep learning and physical priors [35.37760003463292]
We propose two perceptual enhancement models, each of which uses a deep enhancement model with a detection perceptor.
Due to the lack of training data, a hybrid underwater image synthesis model, which fuses physical priors and data-driven cues, is proposed to synthesize training data.
Experimental results show the superiority of our proposed method over several state-of-the-art methods on both real-world and synthetic underwater datasets.
arXiv Detail & Related papers (2020-08-21T22:11:34Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences arising from its use.