UIEDP: Underwater Image Enhancement with Diffusion Prior
- URL: http://arxiv.org/abs/2312.06240v1
- Date: Mon, 11 Dec 2023 09:24:52 GMT
- Title: UIEDP: Underwater Image Enhancement with Diffusion Prior
- Authors: Dazhao Du, Enhan Li, Lingyu Si, Fanjiang Xu, Jianwei Niu, Fuchun Sun
- Abstract summary: Underwater image enhancement (UIE) aims to generate clear images from low-quality underwater images.
We propose UIEDP, a novel framework treating UIE as a posterior distribution sampling process of clear images conditioned on degraded underwater inputs.
- Score: 20.349103580702028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater image enhancement (UIE) aims to generate clear images from
low-quality underwater images. Due to the unavailability of clear reference
images, researchers often synthesize them to construct paired datasets for
training deep models. However, these synthesized images may sometimes lack
quality, adversely affecting training outcomes. To address this issue, we
propose UIE with Diffusion Prior (UIEDP), a novel framework treating UIE as a
posterior distribution sampling process of clear images conditioned on degraded
underwater inputs. Specifically, UIEDP combines a pre-trained diffusion model
capturing natural image priors with any existing UIE algorithm, leveraging the
latter to guide conditional generation. The diffusion prior mitigates the
drawbacks of inferior synthetic images, resulting in higher-quality image
generation. Extensive experiments have demonstrated that our UIEDP yields
significant improvements across various metrics, especially no-reference image
quality assessment. The generated enhanced images also exhibit a more natural
appearance.
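To make the guided-sampling idea concrete, here is a minimal, self-contained sketch of classifier-guidance-style conditional sampling: the reverse-diffusion mean predicted by a prior is nudged toward the output of an existing UIE method. The function names, the linear beta schedule, the toy denoiser, and the L2 guidance term are illustrative assumptions, not the authors' implementation (UIEDP uses a pre-trained natural-image diffusion model and a real UIE algorithm's output as guidance).

```python
import numpy as np

def guided_step(x_t, t, eps_model, guide_img, betas, scale, rng):
    """One DDPM-style reverse step whose mean is nudged toward guide_img
    (the output of an existing UIE method). Illustrative sketch only."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = np.prod(1.0 - betas[: t + 1])

    eps = eps_model(x_t, t)  # the prior's noise prediction
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_t)

    # Guidance: gradient of -0.5 * ||x_t - guide_img||^2 w.r.t. x_t
    mean = mean + scale * (guide_img - x_t)

    noise = rng.standard_normal(x_t.shape) if t > 0 else np.zeros_like(x_t)
    return mean + np.sqrt(beta_t) * noise

def sample(shape, eps_model, guide_img, n_steps=50, scale=0.3, seed=0):
    """Run the full reverse chain from Gaussian noise to a guided sample."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, n_steps)
    x = rng.standard_normal(shape)
    for t in reversed(range(n_steps)):
        x = guided_step(x, t, eps_model, guide_img, betas, scale, rng)
    return x
```

The guidance scale trades fidelity to the UIE algorithm's output against the naturalness imposed by the diffusion prior.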
Related papers
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z) - DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
We propose a novel IQA method called diffusion priors-based IQA (DP-IQA).
We use pre-trained stable diffusion as the backbone, extract multi-level features from the denoising U-Net, and decode them to estimate the image quality score.
We distill the knowledge in the above model into a CNN-based student model, significantly reducing the parameter count to enhance applicability.
arXiv Detail & Related papers (2024-05-30T12:32:35Z) - Underwater Image Enhancement by Diffusion Model with Customized CLIP-Classifier [5.352081564604589]
Underwater Image Enhancement (UIE) aims to improve the visual quality of low-quality underwater inputs.
We propose CLIP-UIE, a novel framework that leverages the potential of Contrastive Language-Image Pretraining (CLIP) for the UIE task.
Specifically, we propose employing color transfer to yield synthetic images by degrading in-air natural images into corresponding underwater images, guided by the real underwater domain.
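A common way to realize such color transfer is Reinhard-style channel-statistics matching, sketched below as a generic baseline: the in-air image's per-channel mean and standard deviation are shifted to those of an underwater reference. This is an assumption for illustration; CLIP-UIE's actual procedure is guided by the real underwater domain and may differ.

```python
import numpy as np

def color_transfer(source, target):
    """Reinhard-style statistics matching (generic baseline, not the
    paper's exact method): remap each channel of `source` so its mean
    and std match those of `target` (an underwater reference image).
    Both images are float arrays in [0, 1] with shape (H, W, C)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sigma = tgt[..., c].mean(), tgt[..., c].std() + 1e-8
        # Normalize the source channel, then rescale to the target stats.
        out[..., c] = (src[..., c] - s_mu) / s_sigma * t_sigma + t_mu
    return np.clip(out, 0.0, 1.0)
```

Applying this to in-air natural images yields synthetic "underwater" counterparts, giving paired data without real ground-truth references.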
arXiv Detail & Related papers (2024-05-25T12:56:15Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional
Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z) - WaterFlow: Heuristic Normalizing Flow for Underwater Image Enhancement
and Beyond [52.27796682972484]
Existing underwater image enhancement methods mainly focus on image quality improvement, ignoring their effect on downstream tasks in practice.
We propose a normalizing flow for detection-driven underwater image enhancement, dubbed WaterFlow.
Considering the differentiability and interpretability, we incorporate the prior into the data-driven mapping procedure.
arXiv Detail & Related papers (2023-08-02T04:17:35Z) - Physics-Aware Semi-Supervised Underwater Image Enhancement [7.634972737905042]
We leverage both the physics-based underwater Image Formation Model (IFM) and deep learning techniques for Underwater Image Enhancement (UIE).
We propose a novel Physics-Aware Dual-Stream Underwater Image Enhancement Network, i.e., PA-UIENet, which comprises a Transmission Estimation Stream (T-Stream) and an Ambient Light Estimation Stream (A-Stream).
Our method performs better than, or at least comparably to, eight baselines across five testing sets in the degradation estimation and UIE tasks.
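The simplified IFM underlying this kind of method is standard: the observed image is I = J·t + A·(1 − t), where J is the clear scene radiance, t the per-pixel transmission, and A the ambient light. The sketch below is a generic forward/inverse pair under that model; the estimation of t and A (what the T-Stream and A-Stream predict) is what PA-UIENet's networks actually do, and is not shown here.

```python
import numpy as np

def apply_ifm(J, t, A):
    """Forward simplified underwater Image Formation Model:
    I = J * t + A * (1 - t).
    J: (H, W, 3) clear scene, t: (H, W, 1) transmission in (0, 1],
    A: (3,) ambient light. Broadcasting handles the shapes."""
    return J * t + A * (1.0 - t)

def invert_ifm(I, t, A, t_min=0.1):
    """Recover the clear scene J given estimates of t and A (as a
    dual-stream network would predict). Generic IFM inversion sketch,
    not the authors' network; t is clamped to avoid division blow-up
    in low-transmission regions."""
    t = np.maximum(t, t_min)
    return (I - A * (1.0 - t)) / t
```

Given accurate t and A, the inversion recovers J exactly; in practice, errors in the estimated t and A dominate the enhancement quality.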
arXiv Detail & Related papers (2023-07-21T10:10:18Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with
Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - Adaptive Uncertainty Distribution in Deep Learning for Unsupervised
Underwater Image Enhancement [1.9249287163937976]
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data.
We propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model.
We show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics.
arXiv Detail & Related papers (2022-12-18T01:07:20Z) - Multiscale Structure Guided Diffusion for Image Deblurring [24.09642909404091]
Diffusion Probabilistic Models (DPMs) have been employed for image deblurring.
We introduce a simple yet effective multiscale structure guidance as an implicit bias.
We demonstrate more robust deblurring results with fewer artifacts on unseen data.
arXiv Detail & Related papers (2022-12-04T10:40:35Z) - UIF: An Objective Quality Assessment for Underwater Image Enhancement [17.145844358253164]
We propose an Underwater Image Fidelity (UIF) metric for objective evaluation of enhanced underwater images.
By exploiting the statistical features of these images, we extract naturalness-related, sharpness-related, and structure-related features.
Experimental results confirm that the proposed UIF outperforms a variety of underwater and general-purpose image quality metrics.
arXiv Detail & Related papers (2022-05-19T08:43:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.