SurroundNet: Towards Effective Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2110.05098v1
- Date: Mon, 11 Oct 2021 09:10:19 GMT
- Title: SurroundNet: Towards Effective Low-Light Image Enhancement
- Authors: Fei Zhou and Xin Sun and Junyu Dong and Haoran Zhao and Xiao Xiang Zhu
- Abstract summary: We present a novel SurroundNet which involves fewer than 150K parameters and achieves very competitive performance.
The proposed network comprises several Adaptive Retinex Blocks (ARBlock), which can be viewed as a novel extension of Single Scale Retinex in feature space.
We also introduce a Low-Exposure Denoiser (LED) to smooth the low-light image before the enhancement.
- Score: 43.99545410176845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although Convolutional Neural Networks (CNNs) have made substantial progress in
the low-light image enhancement task, one critical problem of CNNs is the
trade-off between model complexity and performance. This paper presents a novel
SurroundNet which involves fewer than 150K parameters (roughly an 80-98
percent size reduction compared to SOTA methods) and achieves very competitive
performance. The proposed network comprises several Adaptive Retinex Blocks
(ARBlocks), which can be viewed as a novel extension of Single Scale Retinex in
feature space. The core of our ARBlock is an efficient illumination estimation
function called the Adaptive Surround Function (ASF), which can be regarded as a
general form of surround functions and is implemented by convolution layers. In
addition, we introduce a Low-Exposure Denoiser (LED) to smooth the
low-light image before enhancement. We evaluate the proposed method on a
real-world low-light dataset. Experimental results demonstrate the
superiority of SurroundNet over state-of-the-art low-light image enhancement
methods in both performance and parameter count. Code
is available at https://github.com/ouc-ocean-group/SurroundNet.
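For background, the ARBlock generalizes classic Single Scale Retinex (SSR), in which illumination is estimated by convolving the image with a fixed Gaussian surround function and subtracted in the log domain. A minimal NumPy sketch of classic SSR follows; it is illustrative only (SurroundNet's ASF replaces the fixed Gaussian with learned convolution layers, and the function names and parameter values here are not from the paper):

```python
import numpy as np
from scipy.signal import fftconvolve


def gaussian_kernel(size, sigma):
    """Build a normalized 2-D Gaussian surround kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()


def single_scale_retinex(img, sigma=25.0, ksize=101, eps=1e-6):
    """Classic SSR: R = log(I) - log(I * G), where G is a Gaussian
    surround function estimating the illumination component."""
    surround = fftconvolve(img, gaussian_kernel(ksize, sigma), mode="same")
    return np.log(img + eps) - np.log(surround + eps)
```

An ARBlock, by contrast, applies this subtraction-in-log-space idea to intermediate feature maps, with the surround estimated by trainable convolutions rather than a hand-picked Gaussian.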
Related papers
- KAN See In the Dark [2.9873893715462185]
Existing low-light image enhancement methods struggle to fit the complex nonlinear relationship between normal- and low-light images due to uneven illumination and noise.
The recently proposed Kolmogorov-Arnold networks (KANs) feature spline-based convolutional layers and learnable activation functions, which can effectively capture nonlinear dependencies.
In this paper, we design a KAN-Block based on KANs and innovatively apply it to low-light image enhancement. This method effectively alleviates the limitations of current methods constrained by linear network structures and lack of interpretability.
arXiv Detail & Related papers (2024-09-05T10:41:17Z)
- A Lightweight Low-Light Image Enhancement Network via Channel Prior and Gamma Correction [0.0]
Low-light image enhancement (LLIE) refers to image enhancement technology tailored to handle low-light scenes.
We introduce CPGA-Net, an innovative LLIE network that combines dark/bright channel priors and gamma correction via deep learning.
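The channel priors and gamma correction that CPGA-Net combines are both classical operations. A rough NumPy sketch of the two building blocks (not the paper's implementation; the patch size and gamma value are illustrative):

```python
import numpy as np


def dark_channel(img, patch=15):
    """Dark channel prior: per-pixel minimum over the RGB channels,
    followed by a local minimum filter over a square patch."""
    mins = img.min(axis=2)                       # channel-wise minimum
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out


def gamma_correct(img, gamma=0.5):
    """Power-law curve on a [0, 1] image; gamma < 1 brightens shadows."""
    return np.clip(img, 0.0, 1.0) ** gamma
```

In CPGA-Net these cues are fed into a learned network rather than applied directly; the sketch only shows what the priors compute.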
arXiv Detail & Related papers (2024-02-28T08:18:20Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF while preserving rendering speed with a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- R2RNet: Low-light Image Enhancement via Real-low to Real-normal Network [7.755223662467257]
We propose a novel Real-low to Real-normal Network for low-light image enhancement, dubbed R2RNet.
Unlike most previous methods trained on synthetic images, we collect the first large-scale real-world paired low/normal-light image dataset.
Our method can properly improve the contrast and suppress noise simultaneously.
arXiv Detail & Related papers (2021-06-28T09:33:13Z)
- Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB) and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with unknown noise.
arXiv Detail & Related papers (2021-03-25T07:10:46Z)
- Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning [73.75457731689858]
We develop a computationally efficient yet accurate network based on the proposed attentive auxiliary features (A$^2$F) for SISR.
Experimental results on a large-scale dataset demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
- Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification [46.885260723836865]
Deep convolutional neural networks (CNNs) generally improve when fed high-resolution images.
Inspired by the fact that not all regions in an image are task-relevant, we propose a novel framework that performs efficient image classification.
Our framework is general and flexible, as it is compatible with most state-of-the-art lightweight CNNs.
arXiv Detail & Related papers (2020-10-11T17:55:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.