An Effective Image Restorer: Denoising and Luminance Adjustment for
Low-photon-count Imaging
- URL: http://arxiv.org/abs/2110.15715v2
- Date: Tue, 2 Nov 2021 01:56:56 GMT
- Title: An Effective Image Restorer: Denoising and Luminance Adjustment for
Low-photon-count Imaging
- Authors: Shansi Zhang and Edmund Y. Lam
- Abstract summary: We investigate raw image restoration under low-photon-count conditions by simulating the imaging of a quanta image sensor (QIS).
We develop a lightweight framework, which consists of a multi-level pyramid denoising network (MPDNet) and a luminance adjustment (LA) module, to achieve separate denoising and luminance enhancement.
Our image restorer achieves superior performance on degraded images at various photon levels by effectively suppressing noise and recovering luminance and color.
- Score: 6.358214877782411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imaging under photon-scarce conditions poses challenges to many
applications, as the captured images have a low signal-to-noise ratio and poor
luminance. In this paper, we investigate raw image restoration under
low-photon-count conditions by simulating the imaging of a quanta image sensor
(QIS). We develop a lightweight framework consisting of a multi-level pyramid
denoising network (MPDNet) and a luminance adjustment (LA) module, which
perform denoising and luminance enhancement separately. The main component of
our framework is the multi-skip attention residual block (MARB), which
integrates multi-scale feature fusion and an attention mechanism for better
feature representation. Our MPDNet adopts the idea of the Laplacian pyramid to
learn the small-scale noise map and the larger-scale high-frequency details at
different levels, and performs feature extraction on the multi-scale input
images to encode richer contextual information. Our LA module enhances the
luminance of the denoised image by estimating its illumination, which helps
avoid color distortion. Extensive experiments demonstrate that our image
restorer achieves superior performance on degraded images at various photon
levels by effectively suppressing noise and recovering luminance and color.
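The abstract gives no implementation details, so the sketch below is only an
illustration of the three ideas it names, under stated assumptions: a Poisson
model as a stand-in for the QIS low-photon simulation, a two-level Laplacian
pyramid split of the noisy input (the decomposition MPDNet operates on), and a
simple illumination-based adjustment in the spirit of the LA module. The
MPDNet/MARB networks themselves are replaced by placeholders, and function
names such as `simulate_low_photon` and `adjust_luminance` are hypothetical.

```python
# Minimal sketch (not the authors' code): Poisson-based low-photon simulation,
# a two-level Laplacian pyramid split, and an illumination-based luminance
# adjustment. The per-level denoisers are omitted; MPDNet/MARB details are
# not specified in the abstract.
import torch
import torch.nn.functional as F

def simulate_low_photon(img, photons_per_pixel=4.0):
    """Approximate low-photon-count capture by Poisson photon arrivals.
    `img` is a clean image in [0, 1]; the paper's QIS model may differ."""
    counts = torch.poisson(img * photons_per_pixel)
    return (counts / photons_per_pixel).clamp(0.0, 1.0)

def laplacian_pyramid(x, levels=2):
    """Split an image into a coarse low-frequency base plus per-level
    high-frequency residuals (the Laplacian pyramid idea in the abstract)."""
    residuals = []
    for _ in range(levels):
        down = F.avg_pool2d(x, kernel_size=2)
        up = F.interpolate(down, size=x.shape[-2:], mode="bilinear",
                           align_corners=False)
        residuals.append(x - up)   # high-frequency detail at this scale
        x = down
    return x, residuals            # coarse base, fine-to-coarse details

def reconstruct(base, residuals):
    """Invert the pyramid by upsampling and adding the stored residuals."""
    x = base
    for r in reversed(residuals):
        x = F.interpolate(x, size=r.shape[-2:], mode="bilinear",
                          align_corners=False) + r
    return x

def adjust_luminance(denoised, eps=1e-4, gamma=0.45):
    """Estimate a smooth illumination map (max over channels, blurred) and
    brighten it with a gamma curve -- one simple reading of the LA idea."""
    illum = denoised.max(dim=1, keepdim=True).values
    illum = F.avg_pool2d(illum, kernel_size=7, stride=1, padding=3)
    return ((denoised / (illum + eps)) * (illum + eps).pow(gamma)).clamp(0, 1)

if __name__ == "__main__":
    clean = torch.rand(1, 3, 64, 64)          # stand-in for a captured frame
    noisy = simulate_low_photon(clean, photons_per_pixel=2.0)
    base, details = laplacian_pyramid(noisy, levels=2)
    # In the paper each level would be denoised by MPDNet; identity here.
    restored = reconstruct(base, details)
    enhanced = adjust_luminance(restored)
    print(enhanced.shape, float(enhanced.mean()))
```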
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from lower visibility and heavier noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- LRT: An Efficient Low-Light Restoration Transformer for Dark Light Field Images [9.926231893220063]
Recent learning-based methods for low-light enhancement have their own disadvantages.
We propose an efficient Low-light Restoration Transformer (LRT) for LF images.
We show that our method can achieve superior performance on the restoration of extremely low-light and noisy LF images.
arXiv Detail & Related papers (2022-09-06T03:23:58Z)
- Lightweight HDR Camera ISP for Robust Perception in Dynamic Illumination Conditions via Fourier Adversarial Networks [35.532434169432776]
We propose a lightweight two-stage image enhancement algorithm sequentially balancing illumination and noise removal.
We also propose a Fourier spectrum-based adversarial framework (AFNet) for consistent image enhancement under varying illumination conditions.
Based on quantitative and qualitative evaluations, we also examine the practicality and effects of image enhancement techniques on the performance of common perception tasks.
arXiv Detail & Related papers (2022-04-04T18:48:51Z)
- BLNet: A Fast Deep Learning Framework for Low-Light Image Enhancement with Noise Removal and Color Restoration [14.75902042351609]
We propose a very fast deep learning framework called Bringing the Lightness (denoted as BLNet).
Based on Retinex theory, the decomposition net in our model decomposes low-light images into reflectance and illumination (a minimal Retinex decomposition is sketched after this list).
We conduct extensive experiments to demonstrate that our approach achieves a promising effect with good robustness and generalization.
arXiv Detail & Related papers (2021-06-30T10:06:16Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Self-supervised Low Light Image Enhancement and Denoising [8.583910695494726]
This paper proposes a self-supervised low light image enhancement method based on deep learning.
It improves image contrast and reduces noise at the same time, avoiding the blur caused by pre-/post-denoising.
arXiv Detail & Related papers (2021-03-01T08:05:02Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive with the state-of-the-art methods, and has a significant advantage over others when processing images captured under extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
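Several entries above (BLNet, Deep Bilateral Retinex) build on the Retinex view
of an image as the product of reflectance and illumination. The sketch below is
a minimal, classical illustration of that decomposition, assuming a smoothed
max-channel illumination estimate; it is not the learned decomposition used by
any of these papers, and the function names are hypothetical.

```python
# Minimal sketch (assumption, not any paper's network): the classical Retinex
# decomposition I = R * L, with a simple max-channel illumination estimate
# standing in for the learned decomposition nets described above.
import torch
import torch.nn.functional as F

def retinex_decompose(img, eps=1e-4):
    """Split an RGB image in [0, 1] into illumination L and reflectance R.
    L is a smoothed per-pixel maximum over the colour channels."""
    L = img.max(dim=1, keepdim=True).values
    L = F.avg_pool2d(L, kernel_size=7, stride=1, padding=3)  # crude smoothing
    R = (img / (L + eps)).clamp(0.0, 1.0)
    return R, L

def relight(R, L, gamma=0.4, eps=1e-4):
    """Brighten by compressing the illumination with a gamma curve,
    then recomposing I' = R * L^gamma."""
    return (R * (L + eps).pow(gamma)).clamp(0.0, 1.0)

if __name__ == "__main__":
    low_light = torch.rand(1, 3, 32, 32) * 0.2   # dim stand-in image
    R, L = retinex_decompose(low_light)
    enhanced = relight(R, L)
    print(float(low_light.mean()), float(enhanced.mean()))
```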
This list is automatically generated from the titles and abstracts of the papers on this site.