L^2UWE: A Framework for the Efficient Enhancement of Low-Light
Underwater Images Using Local Contrast and Multi-Scale Fusion
- URL: http://arxiv.org/abs/2005.13736v2
- Date: Thu, 5 Nov 2020 21:26:23 GMT
- Title: L^2UWE: A Framework for the Efficient Enhancement of Low-Light
Underwater Images Using Local Contrast and Multi-Scale Fusion
- Authors: Tunai Porto Marques, Alexandra Branzan Albu
- Abstract summary: We present a novel single-image low-light underwater image enhancer, L^2UWE, that builds on our observation that an efficient model of atmospheric lighting can be derived from local contrast information.
A multi-scale fusion process is employed to combine these images while emphasizing regions of higher luminance, saliency and local contrast.
We demonstrate the performance of L^2UWE by using seven metrics to test it against seven state-of-the-art enhancement methods specific to underwater and low-light scenes.
- Score: 84.11514688735183
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Images captured underwater often suffer from suboptimal illumination settings
that can hide important visual features, reducing their quality. We present a
novel single-image low-light underwater image enhancer, L^2UWE, that builds on
our observation that an efficient model of atmospheric lighting can be derived
from local contrast information. We create two distinct models and generate two
enhanced images from them: one that highlights finer details, the other focused
on darkness removal. A multi-scale fusion process is employed to combine these
images while emphasizing regions of higher luminance, saliency and local
contrast. We demonstrate the performance of L^2UWE by using seven metrics to
test it against seven state-of-the-art enhancement methods specific to
underwater and low-light scenes. Code available at:
https://github.com/tunai/l2uwe.
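The pipeline above (two lighting models derived from local contrast, then multi-scale fusion weighted by luminance, saliency, and local contrast) can be pictured with a short sketch. The snippet below is a minimal, illustrative Python/OpenCV take on Ancuti-style multi-scale fusion; the weight definitions and their additive combination are our simplifications, not the authors' exact formulation (see the linked repository for the official implementation).

```python
import cv2
import numpy as np

def weight_map(img_bgr):
    """Combined luminance + saliency + local-contrast weight for one input."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    luminance = gray
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    saliency = np.abs(blur - blur.mean())        # crude Achanta-style saliency
    mu = cv2.boxFilter(gray, -1, (7, 7))
    mu2 = cv2.boxFilter(gray * gray, -1, (7, 7))
    contrast = np.sqrt(np.clip(mu2 - mu * mu, 0, None))  # local std-dev
    return luminance + saliency + contrast + 1e-6

def gaussian_pyramid(x, levels):
    pyr = [x]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(x, levels):
    g = gaussian_pyramid(x, levels)
    pyr = [g[k] - cv2.pyrUp(g[k + 1], dstsize=(g[k].shape[1], g[k].shape[0]))
           for k in range(levels - 1)]
    pyr.append(g[-1])                            # low-frequency residual
    return pyr

def fuse(inputs, levels=5):
    """Fuse the 'fine-detail' and 'darkness-removal' images (or any inputs)."""
    weights = [weight_map(x) for x in inputs]
    total = sum(weights)
    weights = [w / total for w in weights]       # normalize across inputs
    blended = None
    for img, w in zip(inputs, weights):
        lp = laplacian_pyramid(img.astype(np.float32), levels)
        gw = gaussian_pyramid(w, levels)
        contrib = [l * g[..., None] for l, g in zip(lp, gw)]
        blended = contrib if blended is None else [b + c for b, c in zip(blended, contrib)]
    out = blended[-1]                            # collapse the blended pyramid
    for k in range(levels - 2, -1, -1):
        out = cv2.pyrUp(out, dstsize=(blended[k].shape[1], blended[k].shape[0])) + blended[k]
    return np.clip(out, 0, 255).astype(np.uint8)
```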
Related papers
- Dual High-Order Total Variation Model for Underwater Image Restoration [13.789310785350484]
Underwater image enhancement and restoration (UIER) is a crucial means of improving the visual quality of underwater images.
We propose an effective variational framework based on an extended underwater image formation model (UIFM).
In the proposed framework, weight-factor-based color compensation is combined with color balance to compensate for the attenuated color channels and remove the color cast (a generic sketch of this step follows below).
arXiv Detail & Related papers (2024-07-20T13:06:37Z)
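For intuition on the compensation-plus-balance step in the entry above, here is a minimal sketch using the classic red-channel compensation of Ancuti et al. followed by gray-world balancing. This is a generic stand-in, not the weight factors derived in the paper, and `alpha` is a hand-picked constant.

```python
import numpy as np

def compensate_and_balance(img, alpha=1.0):
    """img: float RGB in [0, 1]. Red-channel compensation (Ancuti-style),
    then gray-world color balance. A generic stand-in for the paper's
    weight-factor-based compensation."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Boost the heavily attenuated red channel where green is strong and red is weak.
    r_comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    out = np.stack([r_comp, g, b], axis=-1)
    # Gray-world balance: scale each channel toward the global gray mean.
    means = out.reshape(-1, 3).mean(axis=0)
    out = out * (means.mean() / (means + 1e-6))
    return np.clip(out, 0.0, 1.0)
```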
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
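The exact HVI transform is defined in the paper with trainable parameters; purely as an intuition-builder, the toy module below decouples an intensity channel from a zero-mean chroma component and shrinks chroma in dark regions via one learnable parameter. This is our illustration of the brightness/color-decoupling idea, not the paper's transform.

```python
import torch
import torch.nn as nn

class ToyIntensityChroma(nn.Module):
    """Toy decoupling of intensity and color with one trainable parameter.
    Illustration only -- NOT the actual HVI transform from the paper."""
    def __init__(self):
        super().__init__()
        self.k = nn.Parameter(torch.ones(1))  # learnable chroma-collapse strength

    def forward(self, rgb):                    # rgb: (B, 3, H, W) in [0, 1]
        intensity, _ = rgb.max(dim=1, keepdim=True)    # I = max(R, G, B)
        chroma = rgb - rgb.mean(dim=1, keepdim=True)   # zero-mean color part
        # Shrink chroma where the scene is dark, so color noise in near-black
        # pixels is de-emphasized during enhancement.
        scale = intensity.clamp(min=1e-4) ** self.k.abs()
        return torch.cat([chroma * scale, intensity], dim=1)  # (B, 4, H, W)
```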
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Low-Light Video Enhancement with Synthetic Event Guidance [188.7256236851872]
We use synthetic events from multiple frames to guide the enhancement and restoration of low-light videos.
Our method outperforms existing low-light video or single image enhancement approaches on both synthetic and real LLVE datasets.
arXiv Detail & Related papers (2022-08-23T14:58:29Z)
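A common way to synthesize events from ordinary frames, used by event-camera simulators, is to threshold log-intensity differences between consecutive frames; the sketch below does exactly that, with the contrast threshold `c` picked by us. Whether this matches the paper's event-synthesis module is an assumption on our part.

```python
import numpy as np

def synthetic_events(prev_frame, curr_frame, c=0.15):
    """Map the log-intensity change between two grayscale uint8 frames to an
    event polarity map: +1 (brightened), -1 (darkened), 0 (no event)."""
    log_prev = np.log(prev_frame.astype(np.float32) / 255.0 + 1e-3)
    log_curr = np.log(curr_frame.astype(np.float32) / 255.0 + 1e-3)
    diff = log_curr - log_prev
    events = np.zeros(diff.shape, dtype=np.int8)
    events[diff >= c] = 1                       # positive events
    events[diff <= -c] = -1                     # negative events
    return events
```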
- Decoupled Low-light Image Enhancement [21.111831640136835]
We propose to decouple the enhancement model into two sequential stages.
The first stage focuses on improving the scene visibility based on a pixel-wise non-linear mapping.
The second stage focuses on improving the appearance fidelity by suppressing the rest degeneration factors.
arXiv Detail & Related papers (2021-11-29T11:15:38Z)
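The two-stage decoupling described above can be mimicked with off-the-shelf operations: a pixel-wise non-linear curve for visibility, then a cleanup pass for the degradations the brightening amplifies. Below, a gamma curve and a bilateral filter stand in for the paper's two learned stages; both choices are ours.

```python
import cv2
import numpy as np

def two_stage_enhance(img_bgr, gamma=0.45):
    """Stage 1: pixel-wise non-linear mapping improves scene visibility.
    Stage 2: suppress remaining degradations (noise, artifacts) that the
    brightening amplifies. Both stages are simple stand-ins for the
    paper's learned networks."""
    x = img_bgr.astype(np.float32) / 255.0
    visible = np.power(x, gamma)                # stage 1: brightness curve
    visible8 = (visible * 255.0).astype(np.uint8)
    return cv2.bilateralFilter(visible8, d=7, sigmaColor=40, sigmaSpace=7)
```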
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Bridge the Vision Gap from Field to Command: A Deep Learning Network Enhancing Illumination and Details [17.25188250076639]
We propose a two-stream framework named NEID to tune up the brightness and enhance the details simultaneously.
The proposed method consists of three parts: Light Enhancement (LE), Detail Refinement (DR), and Feature Fusing (FF) modules.
arXiv Detail & Related papers (2021-01-20T09:39:57Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.