Underwater enhancement based on a self-learning strategy and attention
mechanism for high-intensity regions
- URL: http://arxiv.org/abs/2208.03319v1
- Date: Thu, 4 Aug 2022 19:55:40 GMT
- Title: Underwater enhancement based on a self-learning strategy and attention
mechanism for high-intensity regions
- Authors: Claudio D. Mello Jr., Bryan U. Moreira, Paulo J. O. Evald, Paulo L.
Drews Jr., Silvia S. Botelho
- Abstract summary: Images acquired during underwater activities suffer from environmental properties of the water, such as turbidity and light attenuation.
Recent works on underwater image enhancement based on deep learning tackle the lack of paired datasets by generating synthetic ground truth.
We present a self-supervised learning methodology for underwater image enhancement based on deep learning that requires no paired datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images acquired during underwater activities suffer from environmental
properties of the water, such as turbidity and light attenuation. These
phenomena cause color distortion, blurring, and contrast reduction. In
addition, irregular ambient light distribution causes color channel imbalance
and regions with high-intensity pixels. Recent works on underwater image
enhancement based on deep learning tackle the lack of paired datasets by
generating synthetic ground truth. In this paper, we present a
self-supervised learning methodology for underwater image enhancement based on
deep learning that requires no paired datasets. The proposed method estimates
the degradation present in an underwater image. An autoencoder reconstructs the
image, and the reconstructed output is then re-degraded using the estimated
degradation information. During training, the loss function compares the input
against this re-degraded output rather than the reconstruction itself (see the
sketch below). This procedure deliberately misleads the neural network, which
learns to compensate for the additional degradation; as a result, the
reconstructed image is an enhanced version of the input image. The algorithm
also includes an attention module that reduces high-intensity areas produced in
enhanced images by color channel imbalance and outlier regions. The proposed
methodology requires no ground truth, and only real underwater images were used
to train the neural network. The results indicate the effectiveness of the
method in terms of color preservation, color cast reduction, and contrast
improvement.
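To make the loss-substitution idea concrete, below is a minimal PyTorch sketch of one training step. The autoencoder architecture, the hazing-style formation model I = J·t + B·(1 − t), and the statistics used to estimate t and B are illustrative assumptions, not the authors' implementation; only the structure of the step (estimate degradation, reconstruct, re-degrade the output, compute the loss against the input) follows the abstract.

```python
# Sketch of the loss-substitution ("misleading") training step described above.
# All module names, shapes, and the degradation model are illustrative assumptions.
import torch
import torch.nn as nn


class TinyAutoencoder(nn.Module):
    """Placeholder autoencoder; the paper's architecture is not reproduced here."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def estimate_degradation(x: torch.Tensor):
    """Assumed stand-in for degradation estimation: per-image, per-channel
    transmission and background-light proxies from simple input statistics,
    plugged into a hazing-style model I = J*t + B*(1 - t)."""
    b, c, _, _ = x.shape
    background = x.view(b, c, -1).mean(dim=2).view(b, c, 1, 1)   # ambient-light proxy
    transmission = x.view(b, c, -1).std(dim=2).view(b, c, 1, 1)  # contrast proxy
    return transmission.clamp(0.05, 0.95), background


def apply_degradation(j: torch.Tensor, t: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Re-degrade the reconstructed image with the estimated parameters."""
    return j * t + b * (1.0 - t)


def training_step(model: TinyAutoencoder, x: torch.Tensor,
                  optimizer: torch.optim.Optimizer, loss_fn=nn.L1Loss()) -> float:
    t, bg = estimate_degradation(x)            # degradation estimated from the raw input
    recon = model(x)                           # enhanced candidate
    degraded_recon = apply_degradation(recon, t, bg)
    # Key trick: the loss compares the *re-degraded* output with the input, so the
    # network must over-compensate and `recon` ends up enhanced.
    loss = loss_fn(degraded_recon, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = TinyAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    batch = torch.rand(2, 3, 64, 64)           # stand-in for real underwater images
    print(training_step(model, batch, opt))
```

Because the loss never sees the raw reconstruction, minimizing it pushes the network to over-correct, so the un-degraded output comes out enhanced relative to the input.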
Related papers
- Underwater Image Enhancement via Dehazing and Color Restoration [17.263563715287045]
Existing underwater image enhancement methods treat the haze and color cast as a unified degradation process.
We propose a Vision Transformer (ViT)-based network (referred to as WaterFormer) to improve the underwater image quality.
arXiv Detail & Related papers (2024-09-15T15:58:20Z)
- Physics Informed and Data Driven Simulation of Underwater Images via Residual Learning [5.095097384893417]
In general, underwater images suffer from color distortion and low contrast, because light is attenuated and backscattered as it propagates through water.
An existing simple degradation model (similar to atmospheric image "hazing" effects) is not sufficient to properly represent underwater image degradation.
We propose a deep learning-based architecture to automatically simulate the underwater effects.
arXiv Detail & Related papers (2024-02-07T21:53:28Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images have evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95dB in PSNR on LOL1000 dataset and 3.18% in mAP on ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Underwater image enhancement with Image Colorfulness Measure [7.292965806774365]
We propose a novel enhancement model, a trainable end-to-end neural network.
For better detail, contrast, and colorfulness, the enhancement network is jointly optimized by pixel-level and characteristic-level training criteria.
arXiv Detail & Related papers (2020-04-18T12:44:57Z)
- Domain Adaptive Adversarial Learning Based on Physics Model Feedback for Underwater Image Enhancement [10.143025577499039]
We propose a new robust adversarial learning framework via physics model based feedback control and domain adaptation mechanism for enhancing underwater images.
A new method is proposed for simulating an underwater-like training dataset from RGB-D data using an underwater image formation model.
Final enhanced results on synthetic and real underwater images demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2020-02-20T07:50:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.