Seeing Through The Noisy Dark: Toward Real-world Low-Light Image
Enhancement and Denoising
- URL: http://arxiv.org/abs/2210.00545v1
- Date: Sun, 2 Oct 2022 14:57:23 GMT
- Title: Seeing Through The Noisy Dark: Toward Real-world Low-Light Image
Enhancement and Denoising
- Authors: Jiahuan Ren, Zhao Zhang, Richang Hong, Mingliang Xu, Yi Yang,
Shuicheng Yan
- Abstract summary: Images captured in real-world low-light environments usually suffer from low visibility and heavy noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images collected in real-world low-light environments usually suffer
from low visibility and heavy noise due to insufficient light or hardware
limitations. However, most existing low-light image enhancement (LLIE) methods
ignore noise interference and focus mainly on refining the illumination of
low-light images, relying on benchmark datasets in which noise is negligible.
Such methods are therefore ill-suited to real-world LLIE (RLLIE) with heavy
noise, and they produce speckle noise and blur in the enhanced images. Although
several LLIE methods do account for noise in low-light images, they are trained
on raw data and cannot be applied to sRGB images, since the two data domains
differ and converting between them requires specialized expertise or unknown
camera protocols. In this paper, we explicitly consider the task of seeing
through the noisy dark in sRGB color space, and propose a novel end-to-end
method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
Since natural images can usually be characterized by low-rank subspaces in
which redundant information and noise can be removed, we design a Latent
Subspace Reconstruction Block (LSRB) for feature extraction and denoising. To
reduce the loss of global features (e.g., color/shape information) and extract
more accurate local features (e.g., edge/texture information), we also present
a basic two-branch layer called the Cross-channel & Shift-window Transformer
(CST). Based on the CST, we build a U-shaped backbone network (CSTNet) for deep
feature recovery, and design a Feature Refine Block (FRB) to refine the final
features. Extensive experiments on real noisy images and public databases
verify the effectiveness of our RLED-Net for both RLLIE and denoising.
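The low-rank assumption behind the LSRB can be illustrated with an ordinary truncated-SVD reconstruction: if clean image content lies near a low-dimensional subspace of patch space, keeping only the dominant singular directions discards much of the redundancy and noise. The NumPy sketch below is a minimal, hand-rolled illustration of that principle on a single-channel image; the function name, patch size, and rank are assumptions made for this example and do not describe the learned LSRB module.

```python
import numpy as np

def lowrank_denoise_patches(image, patch=8, rank=4):
    """Project non-overlapping patches onto a truncated SVD subspace and
    reconstruct. Illustrative only; 'patch' and 'rank' are hypothetical knobs."""
    h, w = image.shape
    # Collect patches as rows of a data matrix X.
    rows = [image[i:i + patch, j:j + patch].ravel()
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]
    X = np.stack(rows)                              # (num_patches, patch*patch)
    # Keep only the dominant subspace; the residual is treated as noise.
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    X_lr = (U[:, :rank] * S[:rank]) @ Vt[:rank] + mean
    # Write the reconstructed patches back into the image grid.
    out = image.astype(float).copy()
    k = 0
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            out[i:i + patch, j:j + patch] = X_lr[k].reshape(patch, patch)
            k += 1
    return out

noisy = np.clip(0.2 + 0.05 * np.random.randn(64, 64), 0, 1)  # toy dark, noisy image
print(lowrank_denoise_patches(noisy).shape)                  # (64, 64)
```

In the paper this projection is learned end-to-end inside the network rather than computed per image with an explicit SVD. Similarly, the CST is described as a two-branch layer that preserves global color/shape statistics while extracting local edge/texture detail. The PyTorch module below is a generic sketch of such a two-branch design, pairing a squeeze-and-excitation-style channel gate with self-attention inside non-overlapping windows; the class name, window size, and fusion scheme are illustrative assumptions, not the authors' exact CST layer.

```python
import torch
import torch.nn as nn

class TwoBranchBlock(nn.Module):
    """Illustrative two-branch layer: channel attention (global statistics)
    plus windowed self-attention (local detail). Not the paper's CST."""
    def __init__(self, dim=32, window=8, heads=4):
        super().__init__()
        # Branch 1: squeeze-and-excitation-style cross-channel gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // 4, 1), nn.ReLU(),
            nn.Conv2d(dim // 4, dim, 1), nn.Sigmoid(),
        )
        # Branch 2: self-attention inside non-overlapping window tiles.
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, x):                          # x: (B, C, H, W), H and W divisible by window
        b, c, h, w = x.shape
        ws = self.window
        global_feat = x * self.channel_gate(x)     # reweight channels globally
        # Partition into (ws x ws) tiles and attend within each tile.
        t = x.view(b, c, h // ws, ws, w // ws, ws)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
        t, _ = self.attn(t, t, t)
        t = t.reshape(b, h // ws, w // ws, ws, ws, c)
        local_feat = t.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
        return self.fuse(torch.cat([global_feat, local_feat], dim=1)) + x

x = torch.randn(1, 32, 64, 64)
print(TwoBranchBlock()(x).shape)                   # torch.Size([1, 32, 64, 64])
```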
Related papers
- NIR-Assisted Image Denoising: A Selective Fusion Approach and A Real-World Benchmark Dataset [53.79524776100983]
Leveraging near-infrared (NIR) images to assist visible RGB image denoising shows potential to address this issue.
Existing works still struggle to exploit NIR information effectively for real-world image denoising.
We propose an efficient Selective Fusion Module (SFM) that can be plugged into advanced denoising networks.
arXiv Detail & Related papers (2024-04-12T14:54:26Z) - You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI)
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z) - LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage a pre-trained latent diffusion model to perform neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z) - Instance Segmentation in the Dark [43.85818645776587]
We take a deep look at instance segmentation in the dark and introduce several techniques that substantially boost the low-light inference accuracy.
We propose a novel learning method that relies on an adaptive weighted downsampling layer, a smooth-oriented convolutional block, and disturbance suppression learning.
We capture a real-world low-light instance segmentation dataset comprising over two thousand paired low/normal-light images with instance-level pixel-wise annotations.
arXiv Detail & Related papers (2023-04-27T16:02:29Z) - Spatially Adaptive Self-Supervised Learning for Real-World Image
Denoising [73.71324390085714]
We propose a novel perspective to solve the problem of real-world sRGB image denoising.
We take into account the respective characteristics of flat and textured regions in noisy images, and construct supervisions for them separately.
We present a locally aware network (LAN) to meet this requirement, while the LAN itself is supervised with the output of a blind-spot network (BNN).
arXiv Detail & Related papers (2023-03-27T06:18:20Z) - Low-light Image Enhancement via Breaking Down the Darkness [8.707025631892202]
This paper presents a novel framework inspired by the divide-and-rule principle.
We propose to convert an image from the RGB space into a luminance-chrominance one (a generic version of this split is sketched after this list).
An adjustable noise suppression network is designed to eliminate noise in the brightened luminance.
The enhanced luminance further serves as guidance for the chrominance mapper to generate realistic colors.
arXiv Detail & Related papers (2021-11-30T16:50:59Z) - Adaptive Unfolding Total Variation Network for Low-Light Image
Enhancement [6.531546527140475]
Most existing enhancement algorithms in sRGB space focus only on the low-visibility problem or suppress noise at an assumed noise level.
We propose an adaptive unfolding total variation network (UTVNet) to approximate the noise level from the real sRGB low-light image.
Experiments on real-world low-light images clearly demonstrate the superior performance of UTVNet over state-of-the-art methods.
arXiv Detail & Related papers (2021-10-03T11:22:17Z) - CERL: A Unified Optimization Framework for Light Enhancement with
Realistic Noise [81.47026986488638]
Low-light images captured in the real world are inevitably corrupted by sensor noise.
Existing light enhancement methods either overlook the important impact of real-world noise during enhancement, or treat noise removal as a separate pre- or post-processing step.
We present Coordinated Enhancement for Real-world Low-light Noisy Images (CERL), which seamlessly integrates the light enhancement and noise suppression components into a unified, physics-grounded framework.
arXiv Detail & Related papers (2021-08-01T15:31:15Z) - BLNet: A Fast Deep Learning Framework for Low-Light Image Enhancement
with Noise Removal and Color Restoration [14.75902042351609]
We propose a very fast deep learning framework called Bringing the Lightness (denoted as BLNet).
Based on Retinex theory, the decomposition net in our model decomposes low-light images into reflectance and illumination.
We conduct extensive experiments to demonstrate that our approach achieves promising results with good robustness and generalization.
arXiv Detail & Related papers (2021-06-30T10:06:16Z)
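As a concrete illustration of the luminance-chrominance split mentioned in the "Low-light Image Enhancement via Breaking Down the Darkness" entry above, the sketch below uses the standard BT.601 RGB-to-YCbCr transform as a stand-in; that paper's actual color transform may differ, and the helper name is purely illustrative. A brightening/denoising stage would then operate on the Y channel while a separate mapper restores the chrominance.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr for an array of shape (..., 3) in [0, 1].
    Y carries luminance; Cb/Cr carry chrominance."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 0.5
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)

low_light = np.random.rand(4, 4, 3) * 0.1          # toy dark image in [0, 1]
print(rgb_to_ycbcr(low_light)[..., 0].mean())      # mean luminance of the dark image
```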
This list is automatically generated from the titles and abstracts of the papers on this site.