Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the
Noise Model
- URL: http://arxiv.org/abs/2308.03448v2
- Date: Mon, 25 Dec 2023 07:26:51 GMT
- Title: Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the
Noise Model
- Authors: Xin Jin, Jia-Wen Xiao, Ling-Hao Han, Chunle Guo, Xialei Liu, Chongyi
Li, Ming-Ming Cheng
- Abstract summary: We introduce Lighting Every Darkness (LED), which is effective regardless of the digital gain or the camera sensor.
LED eliminates the need for explicit noise model calibration, instead utilizing an implicit fine-tuning process that allows quick deployment and requires minimal data.
LED also allows researchers to focus more on deep learning advancements while still utilizing sensor engineering benefits.
- Score: 83.9497193551511
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Explicit calibration-based methods have dominated RAW image denoising under
extremely low-light environments. However, these methods are impeded by several
critical limitations: a) the explicit calibration process is both labor- and
time-intensive, b) challenges exist in transferring denoisers across different
camera models, and c) the disparity between synthetic and real noise is
exacerbated by digital gain. To address these issues, we introduce a
groundbreaking pipeline named Lighting Every Darkness (LED), which is effective
regardless of the digital gain or the camera sensor. LED eliminates the need
for explicit noise model calibration, instead utilizing an implicit fine-tuning
process that allows quick deployment and requires minimal data. Structural
modifications are also included to reduce the discrepancy between synthetic and
real noise without extra computational demands. Our method surpasses existing
methods in various camera models, including new ones not in public datasets,
with just a few pairs per digital gain and only 0.5% of the typical iterations.
Furthermore, LED also allows researchers to focus more on deep learning
advancements while still utilizing sensor engineering benefits. Code and
related materials can be found at https://srameo.github.io/projects/led-iccv23/ .
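For intuition, the following is a minimal, hypothetical sketch of what calibrating the denoiser rather than the noise model can look like in practice: a denoiser pre-trained on synthetic noise is fine-tuned for a short iteration budget on a handful of real noisy/clean RAW pairs captured at each digital gain of the target camera. This is not the authors' released code; the network, loss, and data handling below are illustrative placeholders.

```python
# Hypothetical sketch of implicit calibration via few-shot fine-tuning (PyTorch).
# Assumes a denoiser pre-trained on synthetic noise and a tiny set of real
# noisy/clean RAW pairs (a few per digital gain) from the target camera.
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_denoiser() -> nn.Module:
    # Stand-in for a pre-trained U-Net-style RAW denoiser (4-channel packed Bayer input).
    return nn.Sequential(
        nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 4, 3, padding=1),
    )

def finetune_on_real_pairs(denoiser, real_pairs, iters=1500, lr=1e-4):
    """Fine-tune on a few real (noisy, clean) RAW pairs instead of fitting an
    explicit noise model for the new sensor."""
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    denoiser.train()
    for step in range(iters):
        noisy, clean = real_pairs[step % len(real_pairs)]
        loss = F.l1_loss(denoiser(noisy), clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return denoiser

if __name__ == "__main__":
    # Dummy tensors stand in for real captures; real data would be packed RAW patches.
    pairs = [(torch.rand(1, 4, 64, 64), torch.rand(1, 4, 64, 64)) for _ in range(4)]
    model = finetune_on_real_pairs(build_denoiser(), pairs, iters=10)
```

The short fine-tuning budget mirrors the abstract's claim that only a few pairs per digital gain and roughly 0.5% of the typical iterations are needed for a new camera.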
Related papers
- Combining Pre- and Post-Demosaicking Noise Removal for RAW Video [2.772895608190934]
Denoising is one of the fundamental steps of the processing pipeline that converts data captured by a camera sensor into a display-ready image or video.
We propose a self-similarity-based denoising scheme that weights both a pre- and a post-demosaicking denoiser for Bayer-patterned CFA video data.
We show that a balance between the two leads to better image quality, and we empirically find that higher noise levels benefit from giving more weight to the pre-demosaicking denoiser.
arXiv Detail & Related papers (2024-10-03T15:20:19Z)
- Towards General Low-Light Raw Noise Synthesis and Modeling [37.87312467017369]
We introduce a new perspective to synthesize the signal-independent noise by a generative model.
Specifically, we synthesize the signal-dependent and signal-independent noise in a physics- and learning-based manner.
In this way, our method can be considered a general model; that is, it can simultaneously learn different noise characteristics for different ISO levels.
arXiv Detail & Related papers (2023-07-31T09:10:10Z)
- Realistic Noise Synthesis with Diffusion Models [68.48859665320828]
Deep image denoising models often rely on large amounts of training data to achieve high-quality performance.
We propose a novel method that synthesizes realistic noise using diffusion models, namely Realistic Noise Synthesize Diffusor (RNSD).
RNSD can incorporate guided multiscale content, so that more realistic noise with spatial correlations can be generated at multiple frequencies.
arXiv Detail & Related papers (2023-05-23T12:56:01Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for estimating the noise level in low-light images quickly and accurately.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Modeling sRGB Camera Noise with Normalizing Flows [35.29066692454865]
We propose a new sRGB-domain noise model based on normalizing flows that is capable of learning the complex noise distribution found in sRGB images under various ISO levels.
Our normalizing flows-based approach outperforms other models by a large margin in noise modeling and synthesis tasks.
arXiv Detail & Related papers (2022-06-02T00:56:34Z)
- Rethinking Noise Synthesis and Modeling in Raw Denoising [75.55136662685341]
We introduce a new perspective to synthesize noise by directly sampling from the sensor's real noise.
It inherently generates accurate raw image noise for different camera sensors.
arXiv Detail & Related papers (2021-10-10T10:45:24Z)
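As a rough, hedged illustration of the sampling idea in the entry above (not the paper's implementation): the signal-independent component is cropped directly from real dark frames captured with the target sensor, while the signal-dependent part is added as Poisson shot noise. The dark-frame bank and function names are assumptions made for this sketch.

```python
# Hypothetical sketch: synthesize RAW noise by sampling real sensor noise (NumPy).
import numpy as np

def synthesize_noisy_raw(clean_raw, dark_frames, rng=None):
    """clean_raw: HxW linear RAW image in photoelectrons;
    dark_frames: real dark captures providing signal-independent noise."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = clean_raw.shape
    # Signal-dependent part: Poisson shot noise on the clean signal.
    shot = rng.poisson(clean_raw).astype(np.float64)
    # Signal-independent part: a random crop from a real dark frame.
    frame = dark_frames[rng.integers(len(dark_frames))]
    y = rng.integers(frame.shape[0] - h + 1)
    x = rng.integers(frame.shape[1] - w + 1)
    return shot + frame[y:y + h, x:x + w]

# Dummy data stands in for real dark-frame captures at one ISO level.
dark_bank = [np.random.normal(0.0, 2.0, size=(256, 256)) for _ in range(3)]
noisy = synthesize_noisy_raw(np.full((64, 64), 100.0), dark_bank)
```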
- Physics-based Noise Modeling for Extreme Low-light Photography [63.65570751728917]
We study the noise statistics in the imaging pipeline of CMOS photosensors.
We formulate a comprehensive noise model that can accurately characterize the real noise structures.
Our noise model can be used to synthesize realistic training data for learning-based low-light denoising algorithms.
arXiv Detail & Related papers (2021-08-04T16:36:29Z)
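The entry above formulates a comprehensive sensor noise model; the snippet below is a deliberately simplified, hypothetical sketch of the general idea only (shot, read, and row noise), not the paper's full formulation, and the parameter values are placeholders.

```python
# Simplified, physics-inspired RAW noise synthesis (NumPy); real sensor models
# typically include further terms such as banding and quantization noise.
import numpy as np

def physics_noise(clean_raw, system_gain=1.0, read_sigma=2.0, row_sigma=0.5, rng=None):
    """clean_raw: HxW linear RAW signal in digital numbers (DN)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = clean_raw.shape
    # Shot noise: Poisson in photoelectrons, converted back to DN by the system gain.
    shot = rng.poisson(clean_raw / system_gain) * system_gain
    # Read noise: per-pixel Gaussian.
    read = rng.normal(0.0, read_sigma, size=(h, w))
    # Row noise: one Gaussian offset shared by all pixels in a row.
    row = rng.normal(0.0, row_sigma, size=(h, 1))
    return shot + read + row

noisy = physics_noise(np.full((128, 128), 50.0))
```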
- Designing a Practical Degradation Model for Deep Blind Image Super-Resolution [134.9023380383406]
Single image super-resolution (SISR) methods would not perform well if the assumed degradation model deviates from the degradations found in real images.
This paper proposes to design a more complex but practical degradation model that consists of randomly shuffled blur, downsampling and noise degradations.
arXiv Detail & Related papers (2021-03-25T17:40:53Z)
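As a hedged sketch of the randomly shuffled degradation idea described above (the actual paper uses a considerably richer degradation space): blur, downsampling, and additive noise are applied in a random order to produce diverse low-quality training inputs.

```python
# Hypothetical sketch of randomly shuffled blur / downsample / noise degradations.
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr_img, rng=None):
    """Build a degraded training input by applying blur, 2x downsampling and
    additive Gaussian noise in a random order."""
    rng = np.random.default_rng() if rng is None else rng
    ops = [
        lambda x: gaussian_filter(x, sigma=rng.uniform(0.5, 2.0)),          # blur
        lambda x: x[::2, ::2],                                              # downsample
        lambda x: x + rng.normal(0.0, rng.uniform(0.01, 0.05), x.shape),    # noise
    ]
    rng.shuffle(ops)  # random order of degradations
    out = hr_img.copy()
    for op in ops:
        out = op(out)
    return out

lr = degrade(np.random.rand(128, 128))
```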
- Learning Model-Blind Temporal Denoisers without Ground Truths [46.778450578529814]
Denoisers trained with synthetic data often fail to cope with the diversity of unknown noises.
Previous image-based methods lead to noise overfitting when applied directly to video denoisers.
We propose a general framework for video denoising networks that successfully addresses these challenges.
arXiv Detail & Related papers (2020-07-07T07:19:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.