DarkDiff: Advancing Low-Light Raw Enhancement by Retasking Diffusion Models for Camera ISP
- URL: http://arxiv.org/abs/2505.23743v1
- Date: Thu, 29 May 2025 17:58:48 GMT
- Title: DarkDiff: Advancing Low-Light Raw Enhancement by Retasking Diffusion Models for Camera ISP
- Authors: Amber Yijia Zheng, Yu Zhang, Jun Hu, Raymond A. Yeh, Chen Chen
- Abstract summary: We introduce a novel framework to enhance low-light raw images by retasking pre-trained generative diffusion models with the camera ISP. Our method outperforms the state-of-the-art in perceptual quality across three challenging low-light raw image benchmarks.
- Score: 17.881385252833077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-quality photography in extreme low-light conditions is challenging but impactful for digital cameras. With advanced computing hardware, traditional camera image signal processor (ISP) algorithms are gradually being replaced by efficient deep networks that enhance noisy raw images more intelligently. However, existing regression-based models often minimize pixel errors and result in oversmoothing of low-light photos or deep shadows. Recent work has attempted to address this limitation by training a diffusion model from scratch, yet those models still struggle to recover sharp image details and accurate colors. We introduce a novel framework to enhance low-light raw images by retasking pre-trained generative diffusion models with the camera ISP. Extensive experiments demonstrate that our method outperforms the state-of-the-art in perceptual quality across three challenging low-light raw image benchmarks.
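The abstract does not spell out implementation details, so purely as a point of reference, here is a minimal sketch of the standard SID-style raw pre-processing (Bayer packing plus exposure amplification) that low-light raw enhancement pipelines of this kind typically consume before any learned enhancement; the RGGB layout and 14-bit black/white levels are assumptions, not values from the paper.

```python
# Minimal sketch, NOT the authors' code: standard SID-style raw pre-processing
# that a diffusion-based raw enhancer would typically consume.
# Assumptions: RGGB Bayer pattern, 14-bit sensor with black level 512.
import numpy as np

def pack_bayer(raw: np.ndarray, black_level: float = 512.0,
               white_level: float = 16383.0) -> np.ndarray:
    """Pack an RGGB Bayer mosaic (H, W) into a 4-channel image (H/2, W/2, 4)."""
    norm = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    norm = np.clip(norm, 0.0, 1.0)
    return np.stack([norm[0::2, 0::2],   # R
                     norm[0::2, 1::2],   # G1
                     norm[1::2, 0::2],   # G2
                     norm[1::2, 1::2]],  # B
                    axis=-1)

def amplify(packed: np.ndarray, ratio: float) -> np.ndarray:
    """Brighten a short exposure by the exposure ratio (e.g., t_target / t_short)."""
    return np.clip(packed * ratio, 0.0, 1.0)
```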
Related papers
- Rethinking High-speed Image Reconstruction Framework with Spike Camera [48.627095354244204]
Spike cameras generate continuous spike streams to capture high-speed scenes with lower bandwidth and higher dynamic range than traditional RGB cameras.
We introduce a novel spike-to-image reconstruction framework, SpikeCLIP, that goes beyond traditional training paradigms.
Our experiments on real-world low-light datasets demonstrate that SpikeCLIP significantly enhances texture details and the luminance balance of recovered images.
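SpikeCLIP's internals are not given in this summary; as hedged background only, the sketch below shows the classical firing-rate baseline for spike-to-image reconstruction that learned frameworks like this aim to improve on (the array shapes are assumptions).

```python
# Minimal sketch, not SpikeCLIP: the classical firing-rate baseline for
# reconstructing an image from an integrate-and-fire spike stream.
import numpy as np

def firing_rate_image(spikes: np.ndarray) -> np.ndarray:
    """spikes: binary stream of shape (T, H, W).

    A brighter pixel accumulates charge faster and fires more often, so the
    mean firing rate over a temporal window approximates scene intensity.
    """
    rate = spikes.astype(np.float32).mean(axis=0)  # spikes per timestep
    return rate / max(float(rate.max()), 1e-8)     # normalize to [0, 1]
```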
arXiv Detail & Related papers (2025-01-08T13:00:17Z)
- DiffuseRAW: End-to-End Generative RAW Image Processing for Low-Light Images [5.439020425819001]
We develop a new generative ISP that relies on fine-tuning latent diffusion models on RAW images.
We evaluate our approach on popular end-to-end low-light datasets for which we see promising results.
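The fine-tuning objective is not detailed in this summary; assuming the usual latent-diffusion recipe, a sketch of the standard epsilon-prediction loss one would minimize when fine-tuning a pre-trained denoiser on RAW-derived latents looks like this (`unet` and its call signature are placeholders):

```python
# Minimal sketch of the standard DDPM epsilon-prediction objective, as commonly
# used to fine-tune a latent diffusion model on a new domain (here, RAW latents).
# `unet(noisy, t)` is a placeholder signature, not a specific library API.
import torch
import torch.nn.functional as F

def diffusion_finetune_loss(unet, latents, alphas_cumprod):
    b = latents.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=latents.device)
    eps = torch.randn_like(latents)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    # Forward process q(x_t | x_0): interpolate between the latent and pure noise.
    noisy = a_bar.sqrt() * latents + (1.0 - a_bar).sqrt() * eps
    return F.mse_loss(unet(noisy, t), eps)  # train the UNet to predict the noise
```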
arXiv Detail & Related papers (2023-12-13T03:39:05Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage a pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
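The taming modules themselves are not described here; to make the low-/high-frequency decomposition concrete, a simple Gaussian-blur split (my illustration, not LDM-ISP's module) is sketched below:

```python
# Minimal sketch, not LDM-ISP's taming modules: one simple way to separate
# low-frequency content from high-frequency detail with a Gaussian blur.
import torch
import torch.nn.functional as F

def frequency_split(img: torch.Tensor, ksize: int = 21, sigma: float = 5.0):
    """img: (B, C, H, W). Returns (low, high) with img == low + high."""
    x = torch.arange(ksize, dtype=img.dtype, device=img.device) - ksize // 2
    g1d = torch.exp(-x ** 2 / (2 * sigma ** 2))
    g1d = g1d / g1d.sum()
    kernel = (g1d[:, None] * g1d[None, :]).expand(
        img.shape[1], 1, ksize, ksize).contiguous()
    low = F.conv2d(img, kernel, padding=ksize // 2, groups=img.shape[1])  # depthwise blur
    return low, img - low
```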
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Diffusion in the Dark: A Diffusion Model for Low-Light Text Recognition [78.50328335703914]
Diffusion in the Dark (DiD) is a diffusion model for low-light image reconstruction aimed at text recognition.
We demonstrate that DiD, without any task-specific optimization, can outperform SOTA low-light methods in low-light text recognition on real images.
arXiv Detail & Related papers (2023-03-07T23:52:51Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
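As background for how exposure bracketing is typically exploited (this is the classic Debevec-style merge, not necessarily the paper's algorithm), each frame's linear values are divided by its exposure time and averaged with weights that trust well-exposed pixels most:

```python
# Minimal sketch of a classic exposure-bracketed HDR merge (Debevec-style
# weighting); illustrative background, not the paper's algorithm.
import numpy as np

def merge_hdr(frames: list, exposures: list) -> np.ndarray:
    """frames: linear images in [0, 1]; exposures: exposure times in seconds."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: trust mid-tones most
        num += w * img / t                 # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)     # weighted average radiance map
```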
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light ones in the inverse process, using unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
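For readers unfamiliar with invertible networks, the standard building block that makes such an exact forward/inverse pairing possible is the affine coupling layer (a RealNVP-style sketch below; the paper's specific architecture is not given in this summary):

```python
# Minimal sketch of a RealNVP-style affine coupling layer, the standard
# exactly-invertible building block; not the paper's specific network.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Forward: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1); the inverse is exact."""

    def __init__(self, channels: int):
        super().__init__()
        assert channels % 2 == 0
        half = channels // 2
        self.net = nn.Sequential(nn.Conv2d(half, 64, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(64, channels, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * s.exp() + t], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * (-s).exp()], dim=1)
```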
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- Physics-based Noise Modeling for Extreme Low-light Photography [63.65570751728917]
We study the noise statistics in the imaging pipeline of CMOS photosensors.
We formulate a comprehensive noise model that can accurately characterize the real noise structures.
Our noise model can be used to synthesize realistic training data for learning-based low-light denoising algorithms.
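The paper's full model covers more noise sources, but its heteroscedastic core is the well-known Poisson-Gaussian combination of signal-dependent shot noise and signal-independent read noise, which already suffices to synthesize rough training data (parameter values below are illustrative assumptions):

```python
# Minimal sketch of Poisson-Gaussian sensor noise synthesis: shot noise is
# signal-dependent, read noise is not. The paper's full model is richer;
# gain/read_std/full_well values here are illustrative assumptions.
import numpy as np

def synthesize_noise(clean: np.ndarray, gain: float = 4.0, read_std: float = 2.0,
                     full_well: float = 16383.0, rng=None) -> np.ndarray:
    """clean: linear raw image in [0, 1]; gain: electrons-to-DN conversion."""
    rng = rng or np.random.default_rng()
    electrons = clean * full_well / gain               # expected photoelectrons
    shot = rng.poisson(electrons).astype(np.float64)   # shot noise ~ Poisson
    read = rng.normal(0.0, read_std, clean.shape)      # read noise ~ Gaussian
    return np.clip((shot + read) * gain / full_well, 0.0, 1.0)
```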
arXiv Detail & Related papers (2021-08-04T16:36:29Z)
- Burst Photography for Learning to Enhance Extremely Dark Images [19.85860245798819]
In this paper, we aim to leverage burst photography to boost enhancement performance and obtain much sharper and more accurate RGB images from extremely dark raw inputs.
The backbone of our proposed framework is a novel coarse-to-fine network architecture that generates high-quality outputs progressively.
Our experiments demonstrate that our approach produces more detailed, considerably higher-quality images that are perceptually more pleasing than those of state-of-the-art methods.
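The coarse-to-fine network itself is not reproduced here; as a baseline intuition for why bursts help at all, naively averaging N aligned dark frames before amplification cuts the noise standard deviation by roughly sqrt(N) (a hedged sketch, with alignment assumed done elsewhere):

```python
# Minimal sketch, not the paper's coarse-to-fine network: naive merging of an
# already-aligned dark burst. Averaging N frames reduces noise std ~ sqrt(N).
import numpy as np

def merge_burst(burst: np.ndarray, ratio: float) -> np.ndarray:
    """burst: aligned raw frames, shape (N, H, W), in [0, 1]; ratio: amplification."""
    merged = burst.mean(axis=0)                # temporal average suppresses noise
    return np.clip(merged * ratio, 0.0, 1.0)   # then brighten to target exposure
```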
arXiv Detail & Related papers (2020-06-17T13:19:07Z)