Underwater Image Enhancement via Dehazing and Color Restoration
- URL: http://arxiv.org/abs/2409.09779v1
- Date: Sun, 15 Sep 2024 15:58:20 GMT
- Title: Underwater Image Enhancement via Dehazing and Color Restoration
- Authors: Chengqin Wu, Shuai Yu, Qingson Hu, Jingxiang Xu, Lijun Zhang,
- Abstract summary: Existing underwater image enhancement methods treat the haze and color cast as a unified degradation process.
We propose a Vision Transformer (ViT)-based network (referred to as WaterFormer) to improve the underwater image quality.
- Score: 17.263563715287045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of marine engineering projects such as marine resource extraction and oceanic surveys, underwater visual imaging and analysis has become a critical technology. Unfortunately, due to the inevitable non-linear attenuation of light in underwater environments, underwater images and videos often suffer from low contrast, blurriness, and color degradation, which significantly complicate the subsequent research. Existing underwater image enhancement methods often treat the haze and color cast as a unified degradation process and disregard their independence and interdependence, which limits the performance improvement. Here, we propose a Vision Transformer (ViT)-based network (referred to as WaterFormer) to improve the underwater image quality. WaterFormer contains three major components: a dehazing block (DehazeFormer Block) to capture the self-correlated haze features and extract deep-level features, a Color Restoration Block (CRB) to capture self-correlated color cast features, and a Channel Fusion Block (CFB) to capture fusion features within the network. To ensure authenticity, a soft reconstruction layer based on the underwater imaging physics model is included. To improve the quality of the enhanced images, we introduce the Chromatic Consistency Loss and Sobel Color Loss to train the network. Comprehensive experimental results demonstrate that WaterFormer outperforms other state-of-the-art methods in enhancing underwater images.
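The soft reconstruction layer and the Sobel Color Loss are the most concrete details in the abstract. Below is a minimal sketch of one plausible reading of them, assuming the standard underwater imaging model I = J·t + B·(1 − t), with transmission t and veiling light B predicted by the network, and an L1 penalty on per-channel Sobel gradients. The names SoftReconstruction and sobel_color_loss are illustrative and not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

class SoftReconstruction(torch.nn.Module):
    """Invert the underwater imaging model I = J*t + B*(1 - t).

    The network is assumed to predict per-pixel transmission t and
    background (veiling) light B; the clear scene J is recovered as
    J = (I - B*(1 - t)) / t, clamped for numerical stability.
    """
    def __init__(self, eps: float = 1e-3):
        super().__init__()
        self.eps = eps

    def forward(self, hazy, transmission, background):
        t = transmission.clamp(min=self.eps)
        restored = (hazy - background * (1.0 - t)) / t
        return restored.clamp(0.0, 1.0)


def sobel_color_loss(pred, target):
    """L1 distance between Sobel gradient magnitudes of each color channel."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=pred.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = pred.shape[1]
    kx = kx.repeat(c, 1, 1, 1)      # one depthwise kernel per channel
    ky = ky.repeat(c, 1, 1, 1)

    def grad(x):
        gx = F.conv2d(x, kx, padding=1, groups=c)
        gy = F.conv2d(x, ky, padding=1, groups=c)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

    return F.l1_loss(grad(pred), grad(target))
```

In training, a term like this would typically be weighted and added to the chromatic-consistency and reconstruction losses.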
Related papers
- Underwater Image Enhancement with Cascaded Contrastive Learning [51.897854142606725]
Underwater image enhancement (UIE) is a highly challenging task due to the complexity of underwater environment and the diversity of underwater image degradation.
Most existing deep learning-based UIE methods adopt a single-stage network, which cannot effectively address the diverse degradations simultaneously.
We propose a two-stage deep learning framework that takes advantage of cascaded contrastive learning to guide the network training of each stage (a rough sketch of such a contrastive term follows this entry).
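As a rough illustration of how a contrastive term can guide enhancement training, the sketch below pulls the enhanced output toward the clean reference (positive) and pushes it away from the raw degraded input (negative) in a frozen VGG-16 feature space. This follows the common contrastive-regularization pattern; the paper's exact cascaded, two-stage formulation may differ, and ContrastiveRegularizer is an illustrative name.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class ContrastiveRegularizer(torch.nn.Module):
    """Pull the enhanced image toward the reference and away from the raw
    input, measured in a frozen VGG-16 feature space."""
    def __init__(self, layer: int = 16):
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:layer].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, enhanced, reference, raw, eps: float = 1e-6):
        f_anchor = self.features(enhanced)
        f_pos = self.features(reference)   # clean reference as positive
        f_neg = self.features(raw)         # degraded input as negative
        d_pos = F.l1_loss(f_anchor, f_pos)
        d_neg = F.l1_loss(f_anchor, f_neg)
        return d_pos / (d_neg + eps)       # small when near positive, far from negative
```

In a cascaded setup, a term like this would typically be added with a small weight to each stage's reconstruction loss.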
arXiv Detail & Related papers (2024-11-16T03:16:44Z) - FDCE-Net: Underwater Image Enhancement with Embedding Frequency and Dual Color Encoder [49.79611204954311]
Underwater images often suffer from various issues such as low brightness, color shift, blurred details, and noise due to light absorption and scattering caused by water and suspended particles.
Previous underwater image enhancement (UIE) methods have primarily focused on spatial domain enhancement, neglecting the frequency domain information inherent in the images.
arXiv Detail & Related papers (2024-04-27T15:16:34Z) - Physics-Inspired Synthesized Underwater Image Dataset [9.959844922120528]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
arXiv Detail & Related papers (2024-04-05T10:23:10Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images exhibit evident feature-distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with
Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing underwater images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - DeepSeeColor: Realtime Adaptive Color Correction for Autonomous
Underwater Vehicles via Deep Learning Methods [0.0]
DeepSeeColor is a novel algorithm that combines a state-of-the-art underwater image formation model with the efficiency of deep learning frameworks.
We show that DeepSeeColor offers comparable performance to the popular "Sea-Thru" algorithm while rapidly processing images at up to 60 Hz.
arXiv Detail & Related papers (2023-03-07T16:38:50Z) - Underwater enhancement based on a self-learning strategy and attention
mechanism for high-intensity regions [0.0]
Images acquired during underwater activities suffer from environmental properties of the water, such as turbidity and light attenuation.
Recent works on underwater image enhancement based on deep learning tackle the lack of paired datasets by generating synthetic ground truth.
We present a self-supervised learning methodology for underwater image enhancement based on deep learning that requires no paired datasets.
arXiv Detail & Related papers (2022-08-04T19:55:40Z) - Underwater Image Restoration via Contrastive Learning and a Real-world
Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z) - Underwater Image Enhancement via Medium Transmission-Guided Multi-Color
Space Embedding [88.46682991985907]
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting multi-color-space embedding.
arXiv Detail & Related papers (2021-04-27T07:35:30Z) - Underwater image enhancement with Image Colorfulness Measure [7.292965806774365]
We propose a novel trainable end-to-end neural enhancement model.
For better details, contrast, and colorfulness, the enhancement network is jointly optimized with pixel-level and characteristic-level training criteria.
arXiv Detail & Related papers (2020-04-18T12:44:57Z) - Domain Adaptive Adversarial Learning Based on Physics Model Feedback for
Underwater Image Enhancement [10.143025577499039]
We propose a new robust adversarial learning framework for enhancing underwater images via physics-model-based feedback control and a domain adaptation mechanism.
A new method is proposed for simulating an underwater-like training dataset from RGB-D data using the underwater image formation model (a sketch of this synthesis step follows this entry).
Final enhanced results on synthetic and real underwater images demonstrate the superiority of the proposed method.
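The RGB-D-based synthesis step is the concrete technique in this entry. Below is a minimal sketch of that idea, assuming the common channel-wise formation model I_c = J_c·exp(-beta_c·d) + B_c·(1 - exp(-beta_c·d)); the attenuation coefficients and background light used here are hand-picked for illustration and are not the paper's values.

```python
import numpy as np

def synthesize_underwater(rgb: np.ndarray, depth: np.ndarray,
                          beta=(0.40, 0.12, 0.08),
                          background=(0.05, 0.35, 0.45)) -> np.ndarray:
    """Turn a clean RGB-D pair into an underwater-looking image.

    Uses the channel-wise formation model
        I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d)),
    where red light (largest beta) attenuates fastest and the background
    light B is blue-green. rgb is HxWx3 in [0, 1], depth is HxW in meters.
    The coefficients are illustrative only.
    """
    beta = np.asarray(beta, dtype=np.float32)           # per-channel attenuation
    background = np.asarray(background, dtype=np.float32)
    transmission = np.exp(-depth[..., None] * beta)     # HxWx3 transmission map
    underwater = rgb * transmission + background * (1.0 - transmission)
    return np.clip(underwater, 0.0, 1.0)

# Example: a flat gray wall receding from 1 m to 10 m turns blue-green.
rgb = np.full((64, 64, 3), 0.5, dtype=np.float32)
depth = np.linspace(1.0, 10.0, 64, dtype=np.float32)[None, :].repeat(64, axis=0)
out = synthesize_underwater(rgb, depth)
```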
arXiv Detail & Related papers (2020-02-20T07:50:00Z)