Underwater Image Enhancement via Medium Transmission-Guided Multi-Color
Space Embedding
- URL: http://arxiv.org/abs/2104.13015v1
- Date: Tue, 27 Apr 2021 07:35:30 GMT
- Title: Underwater Image Enhancement via Medium Transmission-Guided Multi-Color
Space Embedding
- Authors: Chongyi Li and Saeed Anwar and Junhui Hou and Runmin Cong and Chunle
Guo and Wenqi Ren
- Abstract summary: We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting multi-color space embedding.
- Score: 88.46682991985907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater images suffer from color casts and low contrast due to wavelength-
and distance-dependent attenuation and scattering. To solve these two
degradation issues, we present an underwater image enhancement network via
medium transmission-guided multi-color space embedding, called Ucolor.
Concretely, we first propose a multi-color space encoder network, which
enriches the diversity of feature representations by incorporating the
characteristics of different color spaces into a unified structure. Coupled
with an attention mechanism, the most discriminative features extracted from
multiple color spaces are adaptively integrated and highlighted. Inspired by
underwater imaging physical models, we design a medium transmission-guided
decoder network (the medium transmission indicates the percentage of the scene
radiance reaching the camera) to enhance the network's response to
quality-degraded regions. As a result, our network can effectively improve the visual quality of
underwater images by exploiting multi-color space embedding and the
advantages of both physical model-based and learning-based methods. Extensive
experiments demonstrate that our Ucolor achieves superior performance against
state-of-the-art methods in terms of both visual quality and quantitative
metrics.
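As a rough illustration of the two ideas in the abstract (not the authors' code), the sketch below embeds an image in several color spaces and then weights the features by the reverse medium transmission map, following the common underwater image formation model I(x) = J(x)t(x) + B(1 − t(x)), where t is the medium transmission. All function names are hypothetical, and a crude opponent-color encoding stands in for CIELab:

```python
# Toy "Ucolor-style" pipeline (illustration only, not the authors' implementation):
# (1) embed each pixel in several color spaces, (2) weight features by the
# reverse medium transmission, so heavily degraded regions get more emphasis.
import colorsys
import numpy as np

def multi_color_embedding(rgb):
    """Stack RGB, HSV, and a simple opponent-color encoding per pixel."""
    h, w, _ = rgb.shape
    hsv = np.empty_like(rgb)
    for i in range(h):
        for j in range(w):
            hsv[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])
    # Crude opponent-color axes as a stand-in for CIELab (illustration only).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    opp = np.stack([(r + g + b) / 3.0, r - g, (r + g) / 2.0 - b], axis=-1)
    return np.concatenate([rgb, hsv, opp], axis=-1)  # shape (h, w, 9)

def transmission_guided(features, t):
    """Emphasize low-transmission (heavily degraded) regions: weight = 1 - t."""
    return features * (1.0 - t)[..., None]

rgb = np.random.default_rng(0).random((4, 4, 3))  # toy RGB image in [0, 1]
t = np.full((4, 4), 0.3)                          # toy medium transmission map
feats = multi_color_embedding(rgb)
guided = transmission_guided(feats, t)
```

In Ucolor itself these steps are learned convolutional encoders and an attention-based decoder; the point here is only the data flow: richer per-pixel color features, emphasized where the transmission is low.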
Related papers
- FDCE-Net: Underwater Image Enhancement with Embedding Frequency and Dual Color Encoder [49.79611204954311]
Underwater images often suffer from various issues such as low brightness, color shift, blurred details, and noise due to light absorption and scattering caused by water and suspended particles.
Previous underwater image enhancement (UIE) methods have primarily focused on spatial domain enhancement, neglecting the frequency domain information inherent in the images.
arXiv Detail & Related papers (2024-04-27T15:16:34Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Transmission and Color-guided Network for Underwater Image Enhancement [8.894719412298397]
We propose an Adaptive Transmission and Dynamic Color guided network (named ATDCnet) for underwater image enhancement.
To exploit the knowledge of physics, we design an Adaptive Transmission-directed Module (ATM) to better guide the network.
To deal with the color deviation problem, we design a Dynamic Color-guided Module (DCM) to post-process the enhanced image color.
arXiv Detail & Related papers (2023-08-09T11:43:54Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasant images has become a common concern, and the task of underwater image enhancement (UIE) has emerged accordingly.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- Dif-Fusion: Towards High Color Fidelity in Infrared and Visible Image Fusion with Diffusion Models [54.952979335638204]
We propose a novel method with diffusion models, termed as Dif-Fusion, to generate the distribution of the multi-channel input data.
Our method is more effective than other state-of-the-art image fusion methods, especially in color fidelity.
arXiv Detail & Related papers (2023-01-19T13:37:19Z)
- A Wavelet-based Dual-stream Network for Underwater Image Enhancement [11.178274779143209]
We present a wavelet-based dual-stream network that addresses color cast and blurry details in underwater images.
We handle these artifacts separately by decomposing an input image into multiple frequency bands using discrete wavelet transform.
We validate the proposed method on both real-world and synthetic underwater datasets and show the effectiveness of our model in color correction and blur removal with low computational complexity.
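The frequency-band decomposition mentioned above can be sketched with a single-level 2-D Haar discrete wavelet transform (a minimal stand-in, not the paper's implementation): the low-frequency band carries the color/illumination content, while the high-frequency bands carry the details affected by blur.

```python
# Single-level 2-D Haar DWT (illustration of frequency-band decomposition;
# not the paper's code). Image height and width must be even.
import numpy as np

def haar_dwt2(img):
    """Split an image into one low-frequency and three high-frequency bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse color/luminance
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
```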
arXiv Detail & Related papers (2022-02-17T16:57:25Z)
- Wavelength-based Attributed Deep Neural Network for Underwater Image Restoration [9.378355457555319]
This paper shows that attributing the right receptive field size (context) based on the traversing range of the color channel may lead to a substantial performance gain.
As a second novelty, we have incorporated an attentive skip mechanism to adaptively refine the learned multi-contextual features.
The proposed framework, called Deep WaveNet, is optimized using the traditional pixel-wise and feature-based cost functions.
arXiv Detail & Related papers (2021-06-15T06:47:51Z)
- Single Image Deraining via Scale-space Invariant Attention Neural Network [58.5284246878277]
We tackle the notion of scale, which deals with visual changes in the appearance of rain streaks with respect to the camera.
We propose to represent the multi-scale correlation in the convolutional feature domain, which is more compact and robust than the pixel domain.
In this way, we summarize the most activated presence of feature maps as the salient features.
arXiv Detail & Related papers (2020-06-09T04:59:26Z)
- MLFcGAN: Multi-level Feature Fusion based Conditional GAN for Underwater Image Color Correction [35.16835830904171]
We propose a deep multi-scale feature fusion net based on the conditional generative adversarial network (GAN) for underwater image color correction.
In our network, multi-scale features are extracted first, followed by augmenting local features on each scale with global features.
This design was verified to facilitate more effective and faster network learning, resulting in better performance in both color correction and detail preservation.
arXiv Detail & Related papers (2020-02-13T04:15:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.