TSN-CA: A Two-Stage Network with Channel Attention for Low-Light Image
Enhancement
- URL: http://arxiv.org/abs/2110.02477v1
- Date: Wed, 6 Oct 2021 03:20:18 GMT
- Title: TSN-CA: A Two-Stage Network with Channel Attention for Low-Light Image
Enhancement
- Authors: Xinxu Wei, Xianshi Zhang, Shisen Wang, Yanlin Huang, and Yongjie Li
- Abstract summary: We propose a Two-Stage Network with Channel Attention (denoted as TSN-CA) to enhance the brightness of the low-light image.
We conduct extensive experiments to demonstrate that our method achieves excellent results in brightness enhancement as well as denoising, detail preservation, and halo-artifact elimination.
- Score: 11.738203047278848
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement is a challenging low-level computer vision task
because after we enhance the brightness of the image, we have to deal with
amplified noise, color distortion, detail loss, blurred edges, shadow blocks
and halo artifacts. In this paper, we propose a Two-Stage Network with Channel
Attention (denoted as TSN-CA) to enhance the brightness of the low-light image
and restore the enhanced images from various kinds of degradation. In the first
stage, we enhance the brightness of the low-light image in HSV space and use
the information of H and S channels to help the recovery of details in V
channel. In the second stage, we integrate a Channel Attention (CA) mechanism
into the skip connections of a U-Net in order to restore the brightness-enhanced
image from several kinds of severe degradation in RGB space. We train and evaluate the
performance of our proposed model on the LOL real-world and synthetic datasets.
In addition, we test our model on several other commonly used datasets without
Ground-Truth. We conduct extensive experiments to demonstrate that our method
achieves excellent results in brightness enhancement as well as denoising,
detail preservation, and halo-artifact elimination. Our method outperforms
many other state-of-the-art methods both qualitatively and quantitatively.
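The abstract does not include implementation details, but a channel-attention mechanism placed on U-Net skip connections, as described above, is commonly realized in the squeeze-and-excitation style: globally pool each channel, pass the pooled vector through a small bottleneck MLP, and re-weight the channels with a sigmoid gate. The NumPy sketch below illustrates that idea with random weights; the function names, reduction ratio, and SE-style formulation are assumptions, not the authors' code.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention.

    x  : feature map of shape (C, H, W)
    w1 : weights of shape (C // r, C)  -- squeeze (bottleneck) layer
    w2 : weights of shape (C, C // r)  -- excitation layer
    """
    # Squeeze: global average pooling over the spatial dims -> (C,)
    s = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP followed by a sigmoid gate in (0, 1)
    z = np.maximum(w1 @ s, 0.0)          # ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # sigmoid, shape (C,)
    # Re-weight each channel of the skip-connection feature map
    return x * g[:, None, None]

# Toy usage: 8 channels, reduction ratio 4, random weights
rng = np.random.default_rng(0)
C, r = 8, 4
skip = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(skip, w1, w2)
assert out.shape == skip.shape
```

In a trained network the gate learns to emphasize informative channels and suppress degraded ones before the skip features are fused with the decoder.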
Related papers
- LTCF-Net: A Transformer-Enhanced Dual-Channel Fourier Framework for Low-Light Image Restoration [1.049712834719005]
We introduce LTCF-Net, a novel network architecture designed for enhancing low-light images.
Our approach utilizes two color spaces - LAB and YUV - to efficiently separate and process color information.
Our model incorporates the Transformer architecture to comprehensively understand image content.
arXiv Detail & Related papers (2024-11-24T07:21:17Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Division Gets Better: Learning Brightness-Aware and Detail-Sensitive Representations for Low-Light Image Enhancement [10.899693396348171]
LCDBNet is composed of two branches: a luminance adjustment network (LAN) and a chrominance restoration network (CRN).
LAN takes responsibility for learning brightness-aware features leveraging long-range dependency and local attention correlation.
CRN concentrates on learning detail-sensitive features via multi-level wavelet decomposition.
Finally, a fusion network is designed to blend their learned features to produce visually impressive images.
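The summary gives no details of LCDBNet's multi-level wavelet decomposition, but the general technique can be illustrated with a one-level 2D Haar transform; recursing on the LL sub-band yields further levels. The sketch below is a generic illustration under that assumption, not the paper's implementation.

```python
import numpy as np

def haar2d(x):
    """One level of a 2D Haar wavelet decomposition.

    x : single-channel image with even height and width
    Returns (LL, LH, HL, HH) sub-bands at half resolution:
    LL holds the coarse content, the other three hold detail.
    """
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-low: local averages
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)
subbands = haar2d(img)
# A multi-level decomposition recurses on the LL sub-band
ll2 = haar2d(subbands[0])[0]
assert subbands[0].shape == (4, 4) and ll2.shape == (2, 2)
```

Detail-sensitive features then come from the high-frequency sub-bands, while the LL chain carries the smooth content.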
arXiv Detail & Related papers (2023-07-18T09:52:48Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from low visibility and heavy noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed the Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance the low-light image in the forward process and degrade the normal-light one in the inverse process, trained with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against the SOTAs.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- DA-DRN: Degradation-Aware Deep Retinex Network for Low-Light Image Enhancement [14.75902042351609]
We propose a Degradation-Aware Deep Retinex Network (denoted as DA-DRN) for low-light image enhancement that tackles the accompanying degradation.
Based on Retinex Theory, the decomposition net in our model can decompose low-light images into reflectance and illumination maps.
We conduct extensive experiments to demonstrate that our approach achieves a promising effect with good robustness and generalization.
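For context, Retinex theory models an image as the element-wise product of a reflectance map and an illumination map, I = R ⊙ L; a decomposition network predicts the two maps so that illumination can be adjusted separately. The toy NumPy example below illustrates only this forward model, not DA-DRN itself; all values are made up.

```python
import numpy as np

# Retinex forward model: observed image = reflectance * illumination.
# Enhancement brightens the illumination map and re-renders the image,
# leaving the reflectance (surface properties) untouched.
rng = np.random.default_rng(1)
R = rng.uniform(0.2, 1.0, size=(4, 4))   # reflectance in (0, 1]
L_low = np.full((4, 4), 0.1)             # dim, roughly uniform illumination
I_low = R * L_low                        # observed low-light image

L_enh = np.clip(L_low * 8.0, 0.0, 1.0)   # brighten the illumination map
I_enh = np.clip(R * L_enh, 0.0, 1.0)     # re-render the enhanced image
assert I_enh.mean() > I_low.mean()
```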
arXiv Detail & Related papers (2021-10-05T03:53:52Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95dB in PSNR on LOL1000 dataset and 3.18% in mAP on ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.