LRT: An Efficient Low-Light Restoration Transformer for Dark Light Field
Images
- URL: http://arxiv.org/abs/2209.02197v1
- Date: Tue, 6 Sep 2022 03:23:58 GMT
- Title: LRT: An Efficient Low-Light Restoration Transformer for Dark Light Field
Images
- Authors: Shansi Zhang and Nan Meng and Edmund Y. Lam
- Abstract summary: Recent learning-based methods for low-light enhancement have their own disadvantages.
We propose an efficient Low-light Restoration Transformer (LRT) for LF images.
We show that our method can achieve superior performance on the restoration of extremely low-light and noisy LF images.
- Score: 9.926231893220063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light field (LF) images, with their multi-view property, have many
applications, which can be severely affected by low-light imaging. Recent
learning-based methods for low-light enhancement have their own disadvantages,
such as no noise suppression, a complex training process, and poor performance
in extremely low-light conditions. Targeting these deficiencies while fully
utilizing the multi-view information, we propose an efficient Low-light
Restoration Transformer (LRT) for LF images, with multiple heads to perform
specific intermediate tasks, including denoising, luminance adjustment,
refinement and detail enhancement, within a single network, achieving
progressive restoration from small scale to full scale. We design an angular
transformer block with a view-token scheme to model the global angular
relationship efficiently, and a multi-scale window-based transformer block to
encode the multi-scale local and global spatial information. To solve the
problem of insufficient training data, we formulate a synthesis pipeline by
simulating the major noise with the estimated noise parameters of the LF camera.
Experimental results demonstrate that our method can achieve superior
performance on the restoration of extremely low-light and noisy LF images with
high efficiency.
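The abstract describes the angular transformer block only at a high level, so the following PyTorch-style sketch is a hypothetical illustration of a view-token scheme: it assumes each view is summarized by one pooled token and that self-attention runs over those tokens only, so the global angular relationship is modelled at a cost quadratic in the number of views rather than in the number of pixels. The class and parameter names are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a view-token angular attention block (names and
# design choices are assumptions, not taken from the LRT paper).
import torch
import torch.nn as nn

class ViewTokenAngularAttention(nn.Module):
    """Model the global angular relationship via one token per view."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # spatial summary of each view
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, V, C, H, W) -- B batches, V angular views, C feature channels
        B, V, C, H, W = x.shape
        tokens = self.pool(x.flatten(0, 1)).flatten(1).view(B, V, C)
        tokens = self.norm(tokens)
        ctx, _ = self.attn(tokens, tokens, tokens)   # attention over V tokens
        # Broadcast the angular context back onto each view's feature map.
        return x + ctx.view(B, V, C, 1, 1)

# Toy usage: a 5x5 angular grid of 32x32 feature maps with 64 channels.
lf_feats = torch.randn(2, 25, 64, 32, 32)
out = ViewTokenAngularAttention(dim=64)(lf_feats)
print(out.shape)  # torch.Size([2, 25, 64, 32, 32])
```

The abstract also mentions a synthesis pipeline that simulates the major noise using noise parameters estimated from the LF camera, but it does not specify the noise model. A common assumption for raw low-light data, used here purely as an illustration with placeholder parameter values, is signal-dependent shot noise plus Gaussian read noise:

```python
# Illustrative low-light synthesis under an assumed Poisson-Gaussian noise
# model; the actual noise model and parameters used by LRT are not given in
# the abstract, and these values are placeholders.
import numpy as np

def synthesize_low_light(img, ratio=0.05, shot_gain=0.01, read_std=0.002, rng=None):
    """Darken a clean image in [0, 1], then add shot and read noise."""
    rng = rng or np.random.default_rng(0)
    dark = img * ratio                                  # exposure reduction
    shot = rng.poisson(dark / shot_gain) * shot_gain    # signal-dependent noise
    read = rng.normal(0.0, read_std, size=img.shape)    # signal-independent noise
    return np.clip(shot + read, 0.0, 1.0).astype(np.float32)

noisy_view = synthesize_low_light(np.random.rand(32, 32, 3))
```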
Related papers
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale
Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract the differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple
Light Sources Recovery [69.71080926778413]
Flare artifacts can affect image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution to improve the performance of lens flare removal by revisiting the ISP and designing a more reliable light sources recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and
Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction (a rough illustration appears after this list).
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Simplifying Low-Light Image Enhancement Networks with Relative Loss
Functions [14.63586364951471]
We introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions to make learning easier in low-light image enhancement.
We first recognize the challenge posed by the need for a large receptive field to obtain global contrast.
Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges.
arXiv Detail & Related papers (2023-04-06T10:05:54Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image
Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from lower visibility and heavier noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- Cycle-Interactive Generative Adversarial Network for Robust Unsupervised
Low-Light Enhancement [109.335317310485]
Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feed-forwards the features of low-light images from the generator of enhancement GAN into the generator of degradation GAN.
arXiv Detail & Related papers (2022-07-03T06:37:46Z)
- Adaptive Unfolding Total Variation Network for Low-Light Image
Enhancement [6.531546527140475]
Most existing enhancement algorithms in the sRGB space focus only on the low-visibility problem or suppress noise under a hypothetical noise level.
We propose an adaptive unfolding total variation network (UTVNet) to approximate the noise level from the real sRGB low-light image.
Experiments on real-world low-light images clearly demonstrate the superior performance of UTVNet over state-of-the-art methods.
arXiv Detail & Related papers (2021-10-03T11:22:17Z)
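As referenced in the gamma-correction entry above, replacing the exponential in gamma correction with a truncated Taylor series is a simple way to trade accuracy for speed. The sketch below is only a rough illustration of that idea; the expansion order and exact formulation used by the cited paper are not stated in the summary.

```python
# Rough illustration of approximating x**gamma = exp(gamma * ln x) with a
# truncated Taylor series; the order and formulation are illustrative only.
import numpy as np

def gamma_taylor(x, gamma, order=8):
    """Approximate x**gamma for x in (0, 1] using exp(t) ~ sum_k t**k / k!."""
    t = gamma * np.log(np.clip(x, 1e-6, 1.0))
    approx = np.zeros_like(t)
    term = np.ones_like(t)
    for k in range(order + 1):
        approx += term
        term = term * t / (k + 1)    # next term: t**(k+1) / (k+1)!
    return approx

x = np.linspace(0.05, 1.0, 5)
print(np.abs(gamma_taylor(x, 0.45) - x ** 0.45).max())  # approximation error
```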