Interpretable Unsupervised Joint Denoising and Enhancement for Real-World low-light Scenarios
- URL: http://arxiv.org/abs/2503.14535v1
- Date: Mon, 17 Mar 2025 12:08:52 GMT
- Title: Interpretable Unsupervised Joint Denoising and Enhancement for Real-World low-light Scenarios
- Authors: Huaqiu Li, Xiaowan Hu, Haoqian Wang
- Abstract summary: We propose an interpretable, zero-reference joint denoising and low-light enhancement framework tailored for real-world scenarios. Our method derives a training strategy based on paired sub-images with varying illumination and noise levels. In the backbone network design, we develop a retinal decomposition network guided by implicit degradation representation mechanisms.
- Score: 18.622785498617223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world low-light images often suffer from complex degradations such as local overexposure, low brightness, noise, and uneven illumination. Supervised methods tend to overfit to specific scenarios, while unsupervised methods, though better at generalization, struggle to model these degradations due to the lack of reference images. To address this issue, we propose an interpretable, zero-reference joint denoising and low-light enhancement framework tailored for real-world scenarios. Our method derives a training strategy based on paired sub-images with varying illumination and noise levels, grounded in physical imaging principles and Retinex theory. Additionally, we leverage the Discrete Cosine Transform (DCT) to perform frequency domain decomposition in the sRGB space, and introduce an implicit-guided hybrid representation strategy that effectively separates intricate compounded degradations. In the backbone network design, we develop a retinal decomposition network guided by implicit degradation representation mechanisms. Extensive experiments demonstrate the superiority of our method. Code will be available at https://github.com/huaqlili/unsupervised-light-enhance-ICLR2025.
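To make the DCT-based frequency-domain decomposition mentioned in the abstract concrete, here is a minimal sketch that splits an sRGB image into low- and high-frequency components with a per-channel 2-D DCT. The cutoff `radius` and the way channels are handled are illustrative assumptions, not the paper's actual design.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_frequency_split(img, radius=16):
    """Split an sRGB image (H, W, 3), float in [0, 1], into low- and
    high-frequency components via a per-channel 2-D DCT.

    `radius` (number of low-frequency coefficients kept per axis) is an
    illustrative choice, not a value from the paper.
    """
    low = np.zeros_like(img)
    for c in range(img.shape[2]):
        coeffs = dctn(img[..., c], norm="ortho")
        mask = np.zeros_like(coeffs)
        mask[:radius, :radius] = 1.0          # keep the top-left (low-frequency) block
        low[..., c] = idctn(coeffs * mask, norm="ortho")
    high = img - low                           # residual carries the high frequencies
    return low, high

# Toy usage: decompose a random "image" and verify the split is exact.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
low, high = dct_frequency_split(img)
assert np.allclose(low + high, img)
```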
Related papers
- Nonlocal Retinex-Based Variational Model and its Deep Unfolding Twin for Low-Light Image Enhancement [3.174882428337821]
We propose a variational method for low-light image enhancement based on the Retinex decomposition.
A color correction pre-processing step is applied to the low-light image, which is then used as the observed input in the decomposition.
We extend the model by introducing its deep unfolding counterpart, in which the operators are replaced with learnable networks.
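As a rough illustration of the Retinex decomposition that these works build on (not the variational model or its deep unfolding counterpart described above), a classical single-scale estimate factors an image I into reflectance R and illumination L with I = R ⊙ L, where L is taken as a smoothed version of the brightest channel. The Gaussian scale and the epsilon below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_retinex_decompose(img, sigma=15.0, eps=1e-4):
    """Classical single-scale Retinex-style split: I = R * L.

    img: (H, W, 3) float array in [0, 1].
    L is estimated by Gaussian-smoothing the max channel; R = I / L.
    sigma and eps are illustrative, not values from any of the papers.
    """
    illum = gaussian_filter(img.max(axis=2), sigma=sigma)   # smooth illumination map
    illum = np.clip(illum, eps, 1.0)
    reflect = img / illum[..., None]                         # per-pixel reflectance
    return reflect, illum

# Usage: enhance by recombining reflectance with a gamma-lifted illumination map.
rng = np.random.default_rng(1)
dark = rng.random((32, 32, 3)) * 0.2
R, L = naive_retinex_decompose(dark)
enhanced = np.clip(R * (L ** 0.5)[..., None], 0.0, 1.0)
```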
arXiv Detail & Related papers (2025-04-10T14:48:26Z)
- Retinex-RAWMamba: Bridging Demosaicing and Denoising for Low-Light RAW Image Enhancement [71.13353154514418]
Low-light image enhancement, particularly in cross-domain tasks such as mapping from the raw domain to the sRGB domain, remains a significant challenge.
We present a novel Mamba scanning mechanism, called RAWMamba, to effectively handle raw images with different CFAs.
We also present a Retinex Decomposition Module (RDM) grounded in Retinex prior, which decouples illumination from reflectance to facilitate more effective denoising and automatic non-linear exposure correction.
arXiv Detail & Related papers (2024-09-11T06:12:03Z)
- LightenDiffusion: Unsupervised Low-Light Image Enhancement with Latent-Retinex Diffusion Models [39.28266945709169]
We propose a diffusion-based unsupervised framework that incorporates physically explainable Retinex theory with diffusion models for low-light image enhancement.
Experiments on publicly available real-world benchmarks show that the proposed LightenDiffusion outperforms state-of-the-art unsupervised competitors.
arXiv Detail & Related papers (2024-07-12T02:54:43Z)
- DPEC: Dual-Path Error Compensation Method for Enhanced Low-Light Image Clarity [2.8161423494191222]
We propose the Dual-Path Error Compensation (DPEC) method to improve image quality under low-light conditions.
DPEC incorporates precise pixel-level error estimation to capture subtle differences and an independent denoising mechanism to prevent noise amplification.
Comprehensive quantitative and qualitative experimental results demonstrate that our algorithm significantly outperforms state-of-the-art methods in low-light image enhancement.
arXiv Detail & Related papers (2024-06-28T08:21:49Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because exponential operation introduces high computational complexity, we propose to use Taylor Series to approximate gamma correction.
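The Taylor-series trick can be made concrete as follows: since x^γ = exp(γ ln x), truncating the Taylor expansion of the exponential avoids a full power operation. The truncation order below is an arbitrary choice for illustration, not the setting used in that paper.

```python
import numpy as np

def gamma_taylor(x, gamma=0.45, order=6):
    """Approximate gamma correction x**gamma via a truncated Taylor series.

    Uses x**gamma = exp(gamma * ln(x)) and exp(t) ~= sum_{k=0}^{order} t^k / k!.
    `order` is an illustrative truncation level, not the paper's setting.
    """
    t = gamma * np.log(np.clip(x, 1e-6, 1.0))
    approx = np.zeros_like(x)
    term = np.ones_like(x)                  # t^0 / 0!
    for k in range(order + 1):
        approx += term
        term = term * t / (k + 1)           # next Taylor term: t^(k+1) / (k+1)!
    return approx

x = np.linspace(0.05, 1.0, 5)
print(gamma_taylor(x))        # truncated-series approximation
print(x ** 0.45)              # exact gamma correction for comparison
```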
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement [109.335317310485]
Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feed-forwards the features of low-light images from the generator of enhancement GAN into the generator of degradation GAN.
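A minimal sketch of that feature feed-forward idea is given below: intermediate features from a toy enhancement generator are injected into a toy degradation generator. The two tiny networks and the additive fusion are assumptions purely for illustration, not CIGAN's actual architecture.

```python
import torch
import torch.nn as nn

class EnhanceGenerator(nn.Module):
    """Toy enhancement generator that also exposes its intermediate features."""
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, low):
        feats = self.encoder(low)
        return self.decoder(feats), feats

class DegradeGenerator(nn.Module):
    """Toy degradation generator that consumes the fed-forward low-light features."""
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, normal, low_feats):
        feats = self.encoder(normal) + low_feats   # inject guidance from the low-light branch
        return self.decoder(feats)

low = torch.rand(1, 3, 64, 64) * 0.2
enh_g, deg_g = EnhanceGenerator(), DegradeGenerator()
enhanced, low_feats = enh_g(low)
re_degraded = deg_g(enhanced, low_feats)           # cycle back using the guided features
```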
arXiv Detail & Related papers (2022-07-03T06:37:46Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Unsupervised Low-Light Image Enhancement via Histogram Equalization Prior [40.61944814314655]
We propose an unsupervised low-light image enhancement method based on an effective prior termed Histogram Equalization Prior (HEP).
We introduce a Noise Disentanglement Module (NDM) to disentangle the noise and content in the reflectance maps with the reliable aid of unpaired clean images.
Our method performs favorably against the state-of-the-art unsupervised low-light enhancement algorithms and even matches the state-of-the-art supervised algorithms.
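A minimal way to compute such a histogram-equalization prior is sketched below: the equalized luminance of the low-light input serves as a reference-free guidance signal. The 256-bin setup and the mean-channel luminance are assumptions for illustration, not details from the paper.

```python
import numpy as np

def histogram_equalization_prior(img, bins=256):
    """Histogram-equalize the luminance of a low-light image.

    img: (H, W, 3) float array in [0, 1]. The equalized map can act as a
    zero-reference brightness prior. Bin count and mean-channel luminance
    are illustrative choices.
    """
    lum = img.mean(axis=2)
    hist, _ = np.histogram(lum, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)   # normalize CDF to [0, 1]
    idx = np.clip((lum * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]                                           # equalized luminance map

rng = np.random.default_rng(2)
dark = rng.random((32, 32, 3)) * 0.2
prior = histogram_equalization_prior(dark)
```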
arXiv Detail & Related papers (2021-12-03T07:51:08Z)
- Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures frequency-domain consistency when applying Super-Resolution (SR) methods to real scenes.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easily implemented Convolutional Neural Network (CNN) SR models.
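The second step above can be sketched as follows: once a degradation kernel has been estimated, domain-consistent LR images are synthesized by blurring the HR image with that kernel, downsampling, and adding noise. The Gaussian kernel, scale factor, and noise level below are placeholders; FCA learns the actual kernel from unsupervised real-world images.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_lr(hr, scale=4, kernel_sigma=1.2, noise_sigma=0.01, seed=0):
    """Generate a pseudo-real LR image from an HR image: blur, downsample, add noise.

    The Gaussian blur stands in for the degradation kernel that FCA estimates
    from real-world images; sigma, scale, and noise level are illustrative.
    """
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(hr, sigma=(kernel_sigma, kernel_sigma, 0))  # blur spatial dims only
    lr = blurred[::scale, ::scale, :]                                     # simple strided downscale
    lr = lr + rng.normal(0.0, noise_sigma, size=lr.shape)                 # sensor-like noise
    return np.clip(lr, 0.0, 1.0)

hr = np.random.default_rng(3).random((128, 128, 3))
lr = synthesize_lr(hr)          # (32, 32, 3) LR counterpart for an LR-HR training pair
```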
arXiv Detail & Related papers (2020-12-18T08:25:39Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.