A Lightweight Low-Light Image Enhancement Network via Channel Prior and Gamma Correction
- URL: http://arxiv.org/abs/2402.18147v2
- Date: Wed, 10 Jul 2024 18:29:14 GMT
- Authors: Shyang-En Weng, Shaou-Gang Miaou, Ricky Christanto
- Abstract summary: Low-light image enhancement (LLIE) refers to image enhancement technology tailored to handle low-light scenes.
We introduce CPGA-Net, an innovative LLIE network that combines dark/bright channel priors and gamma correction via deep learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human vision relies heavily on available ambient light to perceive objects. Low-light scenes pose two distinct challenges: information loss due to insufficient illumination and undesirable brightness shifts. Low-light image enhancement (LLIE) refers to image enhancement technology tailored to handle this scenario. We introduce CPGA-Net, an innovative LLIE network that combines dark/bright channel priors and gamma correction via deep learning, integrating features inspired by the Atmospheric Scattering Model and the Retinex Theory. This approach combines traditional and deep learning methodologies within a simple yet efficient architectural framework that focuses on essential feature extraction. The resulting CPGA-Net is a lightweight network with only 0.025 million parameters and an inference time of 0.030 seconds, yet it achieves superior performance over existing LLIE methods on both objective and subjective evaluation criteria. Furthermore, we apply knowledge distillation with explainable factors and propose an efficient variant with 0.018 million parameters and an inference time of 0.006 seconds. The proposed approaches inject new solution ideas into LLIE, enabling practical applications in challenging low-light scenarios.
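For orientation, the dark/bright channel priors and gamma correction named in the abstract are classical operations (the priors originate in the Atmospheric Scattering Model literature, $I(x) = J(x)\,t(x) + A\,(1 - t(x))$). The sketch below is a hypothetical illustration of those building blocks only, not the authors' code; the patch size and gamma value are assumptions, and CPGA-Net itself predicts such quantities with a small CNN rather than fixing them.

```python
# Hypothetical sketch (not the authors' code): the classical channel priors
# and gamma correction that CPGA-Net combines through a learned network.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel prior: per-pixel min over RGB, then a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def bright_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Bright channel prior: per-pixel max over RGB, then a local maximum filter."""
    return maximum_filter(img.max(axis=2), size=patch)

def gamma_correct(img: np.ndarray, gamma: float = 0.45) -> np.ndarray:
    """Plain gamma correction; gamma < 1 brightens dark regions."""
    return np.clip(img, 0.0, 1.0) ** gamma

# img: H x W x 3 float array in [0, 1]. In the paper these cues feed a small
# CNN and the gamma is estimated, so the constants here are illustrative only.
```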
Related papers
- Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors [38.96909959677438]
Low-light image enhancement (LIE) aims at precisely and efficiently recovering an image degraded in poor illumination environments.
Recent advanced LIE techniques use deep neural networks, which require large numbers of low/normal-light image pairs, many network parameters, and substantial computational resources.
We devise a novel unsupervised LIE framework based on diffusion priors and lookup tables to achieve efficient low-light image recovery.
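The summary gives no implementation details; purely as a hypothetical illustration of the lookup-table side of such a framework, a learned per-channel 1D LUT could be applied as below (the LUT size and the toy curve are assumptions, not from the paper).

```python
# Hypothetical illustration (not from this paper): mapping low-light
# intensities through a learned 1D lookup table via linear interpolation.
import numpy as np

def apply_lut(img: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map intensities in [0, 1] through a 1D LUT; the shape of img is preserved."""
    xs = np.linspace(0.0, 1.0, lut.shape[0])
    return np.interp(img, xs, lut)

# A toy brightening curve standing in for a LUT a network would learn.
toy_lut = np.linspace(0.0, 1.0, 33) ** 0.5
# enhanced = apply_lut(img, toy_lut)  # img: float array in [0, 1]
```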
arXiv Detail & Related papers (2024-09-27T16:37:27Z)
- EFLNet: Enhancing Feature Learning for Infrared Small Target Detection [20.546186772828555]
Single-frame infrared small target detection is considered to be a challenging task.
Due to the extreme imbalance between target and background, bounding box regression is extremely sensitive to infrared small targets.
We propose an enhancing feature learning network (EFLNet) to address these problems.
arXiv Detail & Related papers (2023-07-27T09:23:22Z)
- Low-light Image Enhancement by Retinex Based Algorithm Unrolling and Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrated the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
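For context, such unrolling schemes are built around the standard Retinex decomposition; the notation below is the textbook formulation, not necessarily the paper's exact one:

$$I(x) = R(x) \circ L(x), \qquad \hat{I}(x) = R(x) \circ g\bigl(L(x)\bigr),$$

where $R$ is reflectance, $L$ is illumination, $\circ$ denotes element-wise multiplication, and $g(\cdot)$ is a brightness adjustment applied to the estimated illumination.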
arXiv Detail & Related papers (2022-02-12T03:59:38Z)
- LEDNet: Joint Low-light Enhancement and Deblurring in the Dark [100.24389251273611]
We present the first large-scale dataset for joint low-light enhancement and deblurring.
LOL-Blur contains 12,000 low-blur/normal-sharp pairs with diverse darkness and motion blurs in different scenarios.
We also present an effective network, named LEDNet, to perform joint low-light enhancement and deblurring.
arXiv Detail & Related papers (2022-02-07T17:44:05Z)
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework is to simultaneously estimate the illumination and reflectance, but such methods disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We also develop a lightweight version of CSDNet, named LiteCSDNet, by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
- Learning with Nested Scene Modeling and Cooperative Architecture Search for Low-Light Vision [95.45256938467237]
Images captured from low-light scenes often suffer from severe degradations.
Deep learning methods have been proposed to enhance the visual quality of low-light images.
It is still challenging to extend these enhancement techniques to handle other Low-Light Vision applications.
arXiv Detail & Related papers (2021-12-09T06:08:31Z)
- SurroundNet: Towards Effective Low-Light Image Enhancement [43.99545410176845]
We present a novel SurroundNet, which involves fewer than 150K parameters yet achieves very competitive performance.
The proposed network comprises several Adaptive Retinex Blocks (ARBlock), which can be viewed as a novel extension of Single Scale Retinex in feature space.
We also introduce a Low-Exposure Denoiser (LED) to smooth the low-light image before the enhancement.
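For reference, the Single Scale Retinex that ARBlock reportedly extends has, in its classical image-space form, the following shape; this is a generic textbook sketch, not SurroundNet's code, and the sigma value is an assumption.

```python
# Generic Single Scale Retinex, a textbook baseline (not SurroundNet's code):
# the log of the image minus the log of its Gaussian-blurred surround.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    """R = log(I) - log(G_sigma * I), per channel, with a small epsilon."""
    eps = 1e-6
    surround = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur H, W only
    return np.log(img + eps) - np.log(surround + eps)

# img: H x W x 3 float array in [0, 1]; sigma controls the surround scale.
```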
arXiv Detail & Related papers (2021-10-11T09:10:19Z)
- Physically Inspired Dense Fusion Networks for Relighting [45.66699760138863]
We propose a model which enriches neural networks with physical insight.
Our method generates the relighted image with new illumination settings via two different strategies.
We show that our proposal can outperform many state-of-the-art methods in terms of well-known fidelity metrics and perceptual loss.
arXiv Detail & Related papers (2021-05-05T17:33:45Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the state of the art by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- A Single Stream Network for Robust and Real-time RGB-D Salient Object Detection [89.88222217065858]
We design a single-stream network that uses the depth map to guide early fusion and middle fusion between RGB and depth.
This model is 55.5% lighter than the current lightest model and runs at a real-time speed of 32 FPS when processing a $384 \times 384$ image.
arXiv Detail & Related papers (2020-07-14T04:40:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.