Extreme Low-Light Imaging with Multi-granulation Cooperative Networks
- URL: http://arxiv.org/abs/2005.08001v1
- Date: Sat, 16 May 2020 14:26:06 GMT
- Title: Extreme Low-Light Imaging with Multi-granulation Cooperative Networks
- Authors: Keqi Wang, Peng Gao, Steven Hoi, Qian Guo, Yuhua Qian
- Abstract summary: Low-light imaging is challenging since images may appear dark and noisy due to a low signal-to-noise ratio, complex image content, and the variety of shooting scenes in extreme low-light conditions.
Many methods have been proposed to enhance imaging quality under extreme low-light conditions, but it remains difficult to obtain satisfactory results.
- Score: 18.438827277749525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light imaging is challenging since images may appear dark and
noisy due to a low signal-to-noise ratio, complex image content, and the
variety of shooting scenes in extreme low-light conditions. Many methods have
been proposed to enhance imaging quality under extreme low-light conditions,
but it remains difficult to obtain satisfactory results, especially when they
attempt to retain high dynamic range (HDR). In this paper, we propose a novel
method of multi-granulation cooperative networks (MCN) with bidirectional
information flow to enhance extreme low-light images, and design an
illumination map estimation function (IMEF) to preserve high dynamic range
(HDR). To facilitate this research, we also contribute a new benchmark dataset
of real-world Dark High Dynamic Range (DHDR) images to evaluate the
performance of high-dynamic-range preservation in low-light environments.
Experimental results show that the proposed method outperforms
state-of-the-art approaches in terms of both visual quality and quantitative
analysis.
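The abstract does not specify how the IMEF illumination map is computed, so the snippet below is only a rough, assumed illustration of the general illumination-map idea common in this literature (Retinex-style): estimate per-pixel illumination, then divide it out with a gamma term so dark regions are boosted while well-lit regions, and hence relative dynamic range, are largely preserved. Function names and the max-over-channels heuristic are this sketch's assumptions, not the authors' method.

```python
import numpy as np

def estimate_illumination(img, eps=1e-6):
    """Estimate a per-pixel illumination map as the max over color channels
    (a common Retinex-style heuristic; NOT the paper's IMEF)."""
    return np.maximum(img.max(axis=-1, keepdims=True), eps)

def enhance(img, gamma=0.6, eps=1e-6):
    """Brighten a low-light image by attenuating its illumination map.

    Dividing by the gamma-compressed illumination boosts dark pixels
    strongly, while pixels whose illumination is near 1 are left almost
    unchanged, loosely preserving the relative dynamic range.
    """
    illum = estimate_illumination(img, eps)
    return np.clip(img / (illum ** (1.0 - gamma) + eps), 0.0, 1.0)

# Toy example: a uniformly dark image is brightened.
dark = np.full((4, 4, 3), 0.05)
bright = enhance(dark)
```

With `gamma=0.6`, a pixel at 0.05 is roughly tripled while a pixel at 0.9 barely moves; real methods such as MCN learn this mapping rather than hand-coding it.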
Related papers
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- CDAN: Convolutional dense attention-guided network for low-light image enhancement [2.2530496464901106]
Low-light images pose challenges of diminished clarity, muted colors, and reduced details.
This paper introduces the Convolutional Dense Attention-guided Network (CDAN), a novel solution for enhancing low-light images.
CDAN integrates an autoencoder-based architecture with convolutional and dense blocks, complemented by an attention mechanism and skip connections.
arXiv Detail & Related papers (2023-08-24T16:22:05Z)
- Single Image LDR to HDR Conversion using Conditional Diffusion [18.466814193413487]
Digital imaging aims to replicate realistic scenes, but Low Dynamic Range (LDR) cameras cannot represent the wide dynamic range of real scenes.
This paper presents a deep learning-based approach for recovering intricate details from shadows and highlights.
We incorporate a deep-based autoencoder in our proposed framework to enhance the quality of the latent representation of LDR image used for conditioning.
arXiv Detail & Related papers (2023-07-06T07:19:47Z)
- Diffusion in the Dark: A Diffusion Model for Low-Light Text Recognition [78.50328335703914]
Diffusion in the Dark (DiD) is a diffusion model for low-light image reconstruction for text recognition.
We demonstrate that DiD, without any task-specific optimization, can outperform SOTA low-light methods in low-light text recognition on real images.
arXiv Detail & Related papers (2023-03-07T23:52:51Z)
- Deep Progressive Feature Aggregation Network for High Dynamic Range Imaging [24.94466716276423]
We propose a deep progressive feature aggregation network for improving HDR imaging quality in dynamic scenes.
Our method implicitly samples high-correspondence features and aggregates them in a coarse-to-fine manner for alignment.
Experiments show that our proposed method can achieve state-of-the-art performance under different scenes.
arXiv Detail & Related papers (2022-08-04T04:37:35Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning based end-to-end depth prediction network which takes noisy raw I-ToF signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- PAS-MEF: Multi-exposure image fusion based on principal component analysis, adaptive well-exposedness and saliency map [0.0]
With regular low dynamic range (LDR) capture/display devices, significant details may not be preserved in images due to the huge dynamic range of natural scenes.
This study proposes an efficient multi-exposure fusion (MEF) approach with a simple yet effective weight extraction method.
Experimental comparisons with existing techniques demonstrate that the proposed method produces very strong statistical and visual results.
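The exact PCA and saliency weighting of PAS-MEF is not given in this summary, so the sketch below shows only the classic exposure-fusion backbone it builds on: per-pixel "well-exposedness" weights (a Gaussian around mid-gray), normalized across the exposure stack and used for a weighted average. The function names and the omission of the PCA/saliency terms are this sketch's simplifications.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Weight favoring mid-tone intensities, as in classic exposure fusion.
    PAS-MEF additionally combines PCA and saliency weights, omitted here."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures, eps=1e-12):
    """Fuse an exposure stack by normalized well-exposedness weights."""
    stack = np.stack(exposures)              # shape (N, H, W, C)
    w = well_exposedness(stack)
    w /= w.sum(axis=0, keepdims=True) + eps  # normalize across exposures
    return (w * stack).sum(axis=0)

under = np.full((2, 2, 3), 0.1)  # under-exposed frame
over = np.full((2, 2, 3), 0.9)   # over-exposed frame
fused = fuse([under, over])      # both frames are equally far from mid-gray,
                                 # so the fusion lands halfway between them
```

Practical implementations apply such weights per pyramid level to avoid seams; this flat version only illustrates the weighting itself.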
arXiv Detail & Related papers (2021-05-25T10:22:43Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive to the state-of-the-art methods, and has significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.