USLN: A statistically guided lightweight network for underwater image
enhancement via dual-statistic white balance and multi-color space stretch
- URL: http://arxiv.org/abs/2209.02221v1
- Date: Tue, 6 Sep 2022 05:05:44 GMT
- Title: USLN: A statistically guided lightweight network for underwater image
enhancement via dual-statistic white balance and multi-color space stretch
- Authors: Ziyuan Xiao, Yina Han, Susanto Rahardja, and Yuanliang Ma
- Abstract summary: We propose a statistically guided lightweight underwater image enhancement network (USLN).
USLN learns to compensate for the color distortion at each specific pixel.
Experiments show that, with the guidance of statistics, USLN significantly reduces the required network capacity.
- Score: 7.169484500968534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater images are inevitably affected by color distortion and reduced
contrast. Traditional statistics-based methods such as white balance and
histogram stretching attempt to correct the imbalance of color channels and the
narrow distribution of intensities a priori, and thus offer limited performance.
Recently, deep-learning-based methods have achieved encouraging results.
However, their complicated architectures and high computational costs may
hinder deployment on resource-constrained platforms. Inspired by the above
works, we propose a statistically guided lightweight underwater image
enhancement network (USLN). Concretely, we first develop a dual-statistic white
balance module that learns to use both the average and the maximum of an image
to compensate for the color distortion at each specific pixel. This is followed
by a multi-color space stretch module to adjust the histogram distribution in
RGB, HSI, and Lab color spaces adaptively. Extensive experiments show that,
with the guidance of statistics, USLN significantly reduces the required
network capacity (by over 98%) and achieves state-of-the-art performance. The code
and relevant resources are available at https://github.com/deepxzy/USLN.
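As a rough illustration of the two statistical priors the abstract describes, the sketch below applies a fixed blend of gray-world (channel mean) and white-patch (channel max) white balance, followed by a simple per-channel percentile stretch. This is not the authors' USLN code (the actual implementation at the linked repository learns the per-pixel combination and stretches adaptively in RGB, HSI, and Lab); all function names here are illustrative.

```python
import numpy as np

def dual_statistic_white_balance(img, alpha=0.5):
    """Blend gray-world (channel mean) and white-patch (channel max)
    gain corrections; USLN instead learns this combination per pixel."""
    img = img.astype(np.float64)
    mean_gain = img.mean() / (img.mean(axis=(0, 1)) + 1e-8)  # gray-world gains
    max_gain = img.max() / (img.max(axis=(0, 1)) + 1e-8)     # white-patch gains
    gain = alpha * mean_gain + (1.0 - alpha) * max_gain
    return np.clip(img * gain, 0.0, 1.0)

def histogram_stretch(img, low=1.0, high=99.0):
    """Stretch each channel to span [0, 1] between the given percentiles;
    USLN performs an adaptive, multi-color-space version of this step."""
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], [low, high])
        out[..., c] = np.clip((img[..., c] - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    return out

# Toy greenish "underwater" image: the green channel dominates.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3)) * np.array([0.4, 0.9, 0.6])
enhanced = histogram_stretch(dual_statistic_white_balance(img))
```

After white balance the three channel means are pulled close together, and the stretch widens the intensity distribution toward the full [0, 1] range, which is exactly the behavior the two traditional priors encode.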
Related papers
- Every Pixel Has its Moments: Ultra-High-Resolution Unpaired Image-to-Image Translation via Dense Normalization [4.349838917565205]
We introduce a Dense Normalization layer designed to estimate pixel-level statistical moments.
This approach effectively diminishes tiling artifacts while concurrently preserving local color and hue contrasts.
Our work paves the way for future exploration in handling images of arbitrary resolutions within the realm of unpaired image-to-image translation.
arXiv Detail & Related papers (2024-07-05T04:14:50Z) - You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z) - Diving into Darkness: A Dual-Modulated Framework for High-Fidelity
Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach under diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z) - Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling with a single trained model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z) - Mitigating Channel-wise Noise for Single Image Super Resolution [33.383282898248076]
We propose to super-resolve noisy color images by considering the color channels jointly.
Results demonstrate the super-resolving capability of the approach in real scenarios.
arXiv Detail & Related papers (2021-12-14T17:45:15Z) - Wavelength-based Attributed Deep Neural Network for Underwater Image
Restoration [9.378355457555319]
This paper shows that attributing the right receptive field size (context) based on the traversing range of the color channel may lead to a substantial performance gain.
As a second novelty, we have incorporated an attentive skip mechanism to adaptively refine the learned multi-contextual features.
The proposed framework, called Deep WaveNet, is optimized using the traditional pixel-wise and feature-based cost functions.
arXiv Detail & Related papers (2021-06-15T06:47:51Z) - Underwater Image Enhancement via Medium Transmission-Guided Multi-Color
Space Embedding [88.46682991985907]
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting embeddings from multiple color spaces.
arXiv Detail & Related papers (2021-04-27T07:35:30Z) - Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z) - UIEC^2-Net: CNN-based Underwater Image Enhancement Using Two Color Space [9.318613337883097]
We propose an end-to-end trainable network consisting of three blocks: an RGB pixel-level block, an HSV global-adjust block for globally adjusting underwater image luminance, color, and saturation, and an attention map block that combines the advantages of the RGB and HSV block outputs by assigning a weight to each pixel.
Experimental results on synthetic and real-world underwater images show the good performance of our proposed method in both subjective comparisons and objective metrics.
arXiv Detail & Related papers (2021-03-12T08:23:21Z) - Low Light Image Enhancement via Global and Local Context Modeling [164.85287246243956]
We introduce a context-aware deep network for low-light image enhancement.
First, it features a global context module that models spatial correlations to find complementary cues over the full spatial domain.
Second, it introduces a dense residual block that captures local context with a relatively large receptive field.
arXiv Detail & Related papers (2021-01-04T09:40:54Z)
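Several of the papers above (USLN, Ucolor, UIEC^2-Net, HVI) operate in more than one color space. As a small illustration of why that helps, the following sketch uses the Python standard library's colorsys module (HSV, a close relative of the HSI space USLN uses) to brighten a greenish pixel by stretching only the value channel, leaving hue and saturation untouched; the pixel values are made up for the example.

```python
import colorsys

# A greenish "underwater" pixel, with R, G, B in [0, 1].
r, g, b = 0.2, 0.6, 0.4
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Brighten by stretching the V channel only; in RGB this same adjustment
# would have to be applied to all three channels in a coupled way.
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, min(v * 1.5, 1.0))
```

Because the value channel scales all three RGB components proportionally, the brightened pixel keeps the same hue and saturation, which is the decoupling these multi-color-space methods exploit.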
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.