UIEC^2-Net: CNN-based Underwater Image Enhancement Using Two Color Space
- URL: http://arxiv.org/abs/2103.07138v1
- Date: Fri, 12 Mar 2021 08:23:21 GMT
- Title: UIEC^2-Net: CNN-based Underwater Image Enhancement Using Two Color Space
- Authors: Yudong Wang, Jichang Guo, Huan Gao, Huihui Yue
- Abstract summary: We propose an end-to-end trainable network consisting of three blocks: an RGB pixel-level block, an HSV global-adjust block for globally adjusting underwater image luminance, color, and saturation, and an attention map block that combines the advantages of the RGB and HSV block outputs by assigning a weight to each pixel.
Experimental results on synthetic and real-world underwater images show the good performance of our proposed method in both subjective comparisons and objective metrics.
- Score: 9.318613337883097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Underwater image enhancement has attracted much attention due to the rise
of marine resource development in recent years. Benefiting from the powerful
representation capabilities of Convolutional Neural Networks (CNNs), multiple
CNN-based underwater image enhancement algorithms have been proposed in the
last few years. However, almost all of these algorithms operate in the RGB
color space, which is insensitive to image properties such as luminance and
saturation. To address this problem, we propose the Underwater Image
Enhancement Convolutional Neural Network using two Color Spaces (UIEC^2-Net),
which efficiently and effectively integrates both the RGB and HSV color spaces
in a single CNN. To the best of our knowledge, this method is the first to use
the HSV color space for deep-learning-based underwater image enhancement.
UIEC^2-Net is an end-to-end trainable network consisting of three blocks: an
RGB pixel-level block that performs fundamental operations such as denoising
and color-cast removal, an HSV global-adjust block that globally adjusts
underwater image luminance, color, and saturation by adopting a novel neural
curve layer, and an attention map block that combines the advantages of the
RGB and HSV block outputs by assigning a weight to each pixel. Experimental
results on synthetic and real-world underwater images show the good performance
of our proposed method in both subjective comparisons and objective metrics.
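The two HSV-side ideas in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the authors' code: `apply_global_curve` stands in for the neural curve layer (a learned monotone piecewise-linear curve remapping a global channel such as V), and `attention_fuse` stands in for the attention map block's per-pixel weighted blend of the RGB-branch and HSV-branch outputs. All function and variable names are assumptions.

```python
# Hedged sketch of (1) a global curve adjustment and (2) per-pixel attention
# fusion, as described in the UIEC^2-Net abstract. Not the authors' code.
import numpy as np

def apply_global_curve(channel, knots):
    """Remap values in [0, 1] through a piecewise-linear curve.

    `knots` are K+1 output values at evenly spaced inputs; in the paper a
    curve like this would be predicted by the HSV global-adjust block.
    """
    knots = np.asarray(knots, dtype=float)
    k = len(knots) - 1
    x = np.clip(channel, 0.0, 1.0) * k      # position along the curve
    lo = np.floor(x).astype(int)            # left knot index per value
    hi = np.minimum(lo + 1, k)              # right knot index (clamped)
    frac = x - lo                           # interpolation weight
    return knots[lo] * (1.0 - frac) + knots[hi] * frac

def attention_fuse(rgb_out, hsv_out, attention):
    """Blend two branch outputs with a per-pixel weight map in [0, 1]."""
    return attention * rgb_out + (1.0 - attention) * hsv_out

# Toy usage: random arrays stand in for the branch outputs.
h, w = 4, 4
v = np.random.rand(h, w)                        # e.g. the V channel
brightened = apply_global_curve(v, [0.0, 0.4, 0.7, 0.9, 1.0])
rgb_branch = np.random.rand(h, w, 3)
hsv_branch = np.random.rand(h, w, 3)
weight = np.random.rand(h, w, 1)                # predicted attention map
fused = attention_fuse(rgb_branch, hsv_branch, weight)
assert fused.shape == (h, w, 3)
```

In the actual network the curve knots and the attention map are predicted by CNN sub-networks and trained end-to-end; the sketch only shows how the predicted quantities would be applied.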
Related papers
- Multispectral Texture Synthesis using RGB Convolutional Neural Networks [2.3213238782019316]
  State-of-the-art RGB texture synthesis algorithms rely on style distances that are computed through statistics of deep features.
  We propose two solutions to extend these methods to multispectral imaging.
  arXiv Detail & Related papers (2024-10-21T13:49:54Z)
- FDCE-Net: Underwater Image Enhancement with Embedding Frequency and Dual Color Encoder [49.79611204954311]
  Underwater images often suffer from issues such as low brightness, color shift, blurred details, and noise caused by light absorption and scattering from water and suspended particles.
  Previous underwater image enhancement (UIE) methods have primarily focused on spatial-domain enhancement, neglecting the frequency-domain information inherent in the images.
  arXiv Detail & Related papers (2024-04-27T15:16:34Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
  The Low-Light Image Enhancement (LLIE) task aims to restore details and visual information from corrupted low-light images.
  We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
  It not only decouples brightness and color from RGB channels to mitigate instability during enhancement but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
  arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Toward Sufficient Spatial-Frequency Interaction for Gradient-aware Underwater Image Enhancement [5.553172974022233]
  We develop a novel underwater image enhancement (UIE) framework based on spatial-frequency interaction and gradient maps.
  Experimental results on two real-world underwater image datasets show that our approach can successfully enhance underwater images.
  arXiv Detail & Related papers (2023-09-08T02:58:17Z)
- USLN: A statistically guided lightweight network for underwater image enhancement via dual-statistic white balance and multi-color space stretch [7.169484500968534]
  We propose a statistically guided lightweight underwater image enhancement network (USLN).
  USLN learns to compensate the color distortion for each specific pixel.
  Experiments show that, with the guidance of statistics, USLN significantly reduces the required network capacity.
  arXiv Detail & Related papers (2022-09-06T05:05:44Z)
- Wavelength-based Attributed Deep Neural Network for Underwater Image Restoration [9.378355457555319]
  This paper shows that attributing the right receptive field size (context) based on the traversing range of the color channel may lead to a substantial performance gain.
  As a second novelty, we incorporate an attentive skip mechanism to adaptively refine the learned multi-contextual features.
  The proposed framework, called Deep WaveNet, is optimized using traditional pixel-wise and feature-based cost functions.
  arXiv Detail & Related papers (2021-06-15T06:47:51Z)
- Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding [88.46682991985907]
  We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
  Our network can effectively improve the visual quality of underwater images by exploiting multi-color-space embedding.
  arXiv Detail & Related papers (2021-04-27T07:35:30Z)
- Asymmetric CNN for image super-resolution [102.96131810686231]
  Deep convolutional neural networks (CNNs) have been widely applied to low-level vision over the past five years.
  We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB), and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
  Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with unknown noise.
  arXiv Detail & Related papers (2021-03-25T07:10:46Z)
- Low Light Image Enhancement via Global and Local Context Modeling [164.85287246243956]
  We introduce a context-aware deep network for low-light image enhancement.
  First, it features a global context module that models spatial correlations to find complementary cues over the full spatial domain.
  Second, it introduces a dense residual block that captures local context with a relatively large receptive field.
  arXiv Detail & Related papers (2021-01-04T09:40:54Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
  We propose a color quantization network, ColorCNN, which learns to structure images from the classification loss in an end-to-end manner.
  With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
  For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
  arXiv Detail & Related papers (2020-03-17T17:56:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.