Adaptive Uncertainty Distribution in Deep Learning for Unsupervised
Underwater Image Enhancement
- URL: http://arxiv.org/abs/2212.08983v1
- Date: Sun, 18 Dec 2022 01:07:20 GMT
- Title: Adaptive Uncertainty Distribution in Deep Learning for Unsupervised
Underwater Image Enhancement
- Authors: Alzayat Saleh, Marcus Sheaves, Dean Jerry, and Mostafa Rahimi Azghadi
- Abstract summary: One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data.
We propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model.
We show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics.
- Score: 1.9249287163937976
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: One of the main challenges in deep learning-based underwater image
enhancement is the limited availability of high-quality training data.
Underwater images are difficult to capture and are often of poor quality due to
the distortion and loss of colour and contrast in water. This makes it
difficult to train supervised deep learning models on large and diverse
datasets, which can limit the model's performance. In this paper, we explore an
alternative approach to supervised underwater image enhancement. Specifically,
we propose a novel unsupervised underwater image enhancement framework that
employs a conditional variational autoencoder (cVAE) to train a deep learning
model with probabilistic adaptive instance normalization (PAdaIN) and
statistically guided multi-colour space stretch that produces realistic
underwater images. The resulting framework is composed of a U-Net as a feature
extractor and a PAdaIN to encode the uncertainty, which we call UDnet. To
improve the visual quality of the images generated by UDnet, we use a
statistically guided multi-colour space stretch module that ensures visual
consistency with the input image and provides an alternative to training using
a ground truth image. The proposed model requires no manual human annotation,
can learn from a limited amount of data, and achieves state-of-the-art
results on underwater images. We evaluated our proposed framework on eight
publicly-available datasets. The results show that our proposed framework
yields competitive performance compared to other state-of-the-art approaches in
quantitative as well as qualitative metrics. Code available at
https://github.com/alzayats/UDnet.
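The "statistically guided multi-colour space stretch" described above can be illustrated with a simple percentile-based contrast stretch applied per channel. This is a minimal generic sketch of statistical stretching, not the authors' exact module; the function name and percentile parameters are hypothetical:

```python
import numpy as np

def statistical_stretch(img, lower_pct=1.0, upper_pct=99.0):
    """Percentile-based contrast stretch applied independently to each
    channel, rescaling intensities to the [0, 1] range."""
    out = np.empty(img.shape, dtype=np.float64)
    for c in range(img.shape[-1]):
        ch = img[..., c].astype(np.float64)
        lo, hi = np.percentile(ch, [lower_pct, upper_pct])
        if hi <= lo:
            # Flat channel: nothing to stretch, pass it through unchanged.
            out[..., c] = ch
        else:
            # Map [lo, hi] to [0, 1], clipping the percentile tails.
            out[..., c] = np.clip((ch - lo) / (hi - lo), 0.0, 1.0)
    return out

# Example: a synthetic low-contrast patch with a strong blue cast,
# mimicking typical underwater colour distortion.
rng = np.random.default_rng(0)
img = rng.uniform([0.1, 0.2, 0.5], [0.3, 0.4, 0.9], size=(64, 64, 3))
stretched = statistical_stretch(img)
```

In the paper's framework the stretch is reportedly applied across multiple colour spaces (hence "multi-colour space") and guided by image statistics so the result stays visually consistent with the input; the sketch above shows only the single-space RGB case.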
Related papers
- UIE-UnFold: Deep Unfolding Network with Color Priors and Vision Transformer for Underwater Image Enhancement [27.535028176427623]
Underwater image enhancement (UIE) plays a crucial role in various marine applications.
Current learning-based approaches frequently lack explicit prior knowledge about the physical processes involved in underwater image formation.
This paper proposes a novel deep unfolding network (DUN) for UIE that integrates color priors and inter-stage feature incorporation.
arXiv Detail & Related papers (2024-08-20T08:48:33Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear, visually pleasing images has become a widespread concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - MetaUE: Model-based Meta-learning for Underwater Image Enhancement [25.174894007563374]
This paper proposes a model-based deep learning method for restoring clean images under various underwater scenarios.
The meta-learning strategy is used to obtain a pre-trained model on the synthetic underwater dataset.
The model is then fine-tuned on real underwater datasets to obtain a reliable underwater image enhancement model, called MetaUE.
arXiv Detail & Related papers (2023-03-12T02:38:50Z) - Improving GAN Training via Feature Space Shrinkage [69.98365478398593]
We propose AdaptiveMix, which shrinks regions of training data in the image representation space of the discriminator.
Considering it is intractable to directly bound feature space, we propose to construct hard samples and narrow down the feature distance between hard and easy samples.
The evaluation results demonstrate that our AdaptiveMix can facilitate the training of GANs and effectively improve the image quality of generated samples.
arXiv Detail & Related papers (2023-03-02T20:22:24Z) - Domain Adaptation for Underwater Image Enhancement via Content and Style
Separation [7.077978580799124]
Underwater images suffer from color cast, low contrast, and haze due to light absorption, refraction, and scattering.
Recent learning-based methods demonstrate astonishing performance on underwater image enhancement.
We propose a domain adaptation framework for underwater image enhancement via content and style separation.
arXiv Detail & Related papers (2022-02-17T09:30:29Z) - SGUIE-Net: Semantic Attention Guided Underwater Image Enhancement with
Multi-Scale Perception [18.87163028415309]
We propose a novel underwater image enhancement network, called SGUIE-Net.
We introduce semantic information as high-level guidance across different images that share common semantic regions.
This strategy helps to achieve robust and visually pleasant enhancements to different semantic objects.
arXiv Detail & Related papers (2022-01-08T14:03:24Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Underwater Image Restoration via Contrastive Learning and a Real-world
Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z) - Towards Unsupervised Deep Image Enhancement with Generative Adversarial
Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z) - Domain Adaptive Adversarial Learning Based on Physics Model Feedback for
Underwater Image Enhancement [10.143025577499039]
We propose a new robust adversarial learning framework via physics model based feedback control and domain adaptation mechanism for enhancing underwater images.
A new method is proposed for simulating an underwater-like training dataset from RGB-D data using an underwater image formation model.
Final enhanced results on synthetic and real underwater images demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2020-02-20T07:50:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.