Joint Demosaicking and Denoising in the Wild: The Case of Training Under
Ground Truth Uncertainty
- URL: http://arxiv.org/abs/2101.04442v1
- Date: Tue, 12 Jan 2021 12:33:41 GMT
- Title: Joint Demosaicking and Denoising in the Wild: The Case of Training Under
Ground Truth Uncertainty
- Authors: Jierun Chen, Song Wen, S.-H. Gary Chan
- Abstract summary: We propose and study Wild-JDD, a novel learning framework for joint demosaicking and denoising in the wild.
In contrast to previous works which generally assume the ground truth of training data is a perfect reflection of the reality, we consider here the more common imperfect case of ground truth uncertainty in the wild.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image demosaicking and denoising are the two key fundamental steps in digital
camera pipelines, aiming to reconstruct clean color images from noisy luminance
readings. In this paper, we propose and study Wild-JDD, a novel learning
framework for joint demosaicking and denoising in the wild. In contrast to
previous works, which generally assume the ground truth of training data is a
perfect reflection of reality, we consider here the more common imperfect
case of ground truth uncertainty in the wild. We first illustrate its
manifestation as various kinds of artifacts, including the zipper effect, color
moiré and residual noise. Then we formulate a two-stage data degradation
process to capture such ground truth uncertainty, where a conjugate prior
distribution is imposed upon a base distribution. After that, we derive an
evidence lower bound (ELBO) loss to train a neural network that approximates
the parameters of the conjugate prior distribution conditioned on the degraded
input. Finally, to further enhance the performance for out-of-distribution
input, we design a simple but effective fine-tuning strategy by taking the
input as a weakly informative prior. Taking into account ground truth
uncertainty, Wild-JDD enjoys good interpretability during optimization.
Extensive experiments validate that it outperforms state-of-the-art schemes on
joint demosaicking and denoising tasks on both synthetic and realistic raw
datasets.
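The formulation above can be sketched numerically. As an illustrative assumption (the paper's exact distributional choice may differ), take a Gaussian base distribution over each pixel value with a Normal-Inverse-Gamma (NIG) conjugate prior: a network conditioned on the degraded input would output the four NIG parameters (gamma, nu, alpha, beta), and an ELBO-style objective penalizes the negative log marginal likelihood of the observed ground truth, which for this conjugate pair is a Student-t density:

```python
import math

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log marginal likelihood of observation y under a
    Normal-Inverse-Gamma prior (gamma, nu, alpha, beta) placed on a
    Gaussian base distribution. Marginalizing the Gaussian over the NIG
    prior yields a Student-t predictive density; this is its negative log."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha)
            - math.lgamma(alpha + 0.5))

# The loss is lowest when the predicted mean gamma matches the target y:
close = nig_nll(y=0.0, gamma=0.0, nu=1.0, alpha=2.0, beta=1.0)
far   = nig_nll(y=1.0, gamma=0.0, nu=1.0, alpha=2.0, beta=1.0)
```

Under this reading, the spread of the learned prior (nu, alpha, beta) can absorb ground truth uncertainty instead of forcing the mean to fit noisy targets, which is consistent with the interpretability the abstract claims.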
Related papers
- Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration [64.84134880709625]
We show that it is possible to perform domain adaptation via the noise space using diffusion models.
In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss.
We present crucial strategies such as a channel-shuffling layer and residual-swapping contrastive learning in the diffusion model.
arXiv Detail & Related papers (2024-06-26T17:40:30Z)
- DPMesh: Exploiting Diffusion Prior for Occluded Human Mesh Recovery [71.6345505427213]
DPMesh is an innovative framework for occluded human mesh recovery.
It capitalizes on the profound diffusion prior about object structure and spatial relationships embedded in a pre-trained text-to-image diffusion model.
arXiv Detail & Related papers (2024-04-01T18:59:13Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework can work with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z)
- Confidence-based Reliable Learning under Dual Noises [46.45663546457154]
Deep neural networks (DNNs) have achieved remarkable success in a variety of computer vision tasks.
Yet, the data collected from the open world are unavoidably polluted by noise, which may significantly undermine the efficacy of the learned models.
Various attempts have been made to reliably train DNNs under data noise, but they separately account for either the noise existing in the labels or that existing in the images.
This work provides a first, unified framework for reliable learning under the joint (image, label)-noise.
arXiv Detail & Related papers (2023-02-10T07:50:34Z)
- Linear Combinations of Patches are Unreasonably Effective for Single-Image Denoising [5.893124686141782]
Deep neural networks have revolutionized image denoising, achieving significant accuracy improvements.
To alleviate the requirement to learn image priors externally, single-image methods perform denoising solely based on the analysis of the input noisy image.
This work investigates the effectiveness of linear combinations of patches for denoising under this constraint.
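As a minimal illustration of denoising by linear combinations of patches (a non-local-means-style sketch on a 1-D signal, not the paper's exact estimator), each sample is replaced by a weighted average of all samples, with weights derived from patch similarity:

```python
import numpy as np

def nlm_denoise_1d(x, patch=3, h=0.5):
    """Toy non-local-means: each sample becomes a linear combination of
    all samples, weighted by how similar their surrounding patches are.
    Illustrative only; real single-image methods are more elaborate."""
    n = len(x)
    pad = patch // 2
    xp = np.pad(x, pad, mode="reflect")
    patches = np.stack([xp[i:i + patch] for i in range(n)])   # [n, patch]
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(-1)
    w = np.exp(-d2 / (h ** 2))                                # similarity weights
    w /= w.sum(axis=1, keepdims=True)                         # rows sum to 1
    return w @ x                                              # linear combination

rng = np.random.default_rng(1)
clean = np.ones(32)
noisy = clean + 0.1 * rng.normal(size=32)
out = nlm_denoise_1d(noisy)   # closer to clean than noisy is
```

The key property matching the blurb: the estimator is strictly a linear combination of observed values, with coefficients computed from the noisy input itself, so no external image prior is learned.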
arXiv Detail & Related papers (2022-12-01T10:52:03Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits semantic features of pretrained classification networks, then it implicitly matches the probabilistic distribution of clear images at the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
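As a rough, generic sketch of this idea (not D2SM's actual objective), matching the distribution of denoised and clean images in a pretrained feature space can be approximated by penalizing differences in first- and second-order feature statistics:

```python
import numpy as np

def semantic_stats_loss(feat_denoised, feat_clean):
    """Toy distribution-matching penalty between two batches of semantic
    feature vectors (shape [N, D]): squared distance between their
    per-dimension means and standard deviations. A stand-in for implicit
    distribution matching; D2SM's real loss may differ."""
    mu_d, mu_c = feat_denoised.mean(axis=0), feat_clean.mean(axis=0)
    sd_d, sd_c = feat_denoised.std(axis=0), feat_clean.std(axis=0)
    return float(((mu_d - mu_c) ** 2).sum() + ((sd_d - sd_c) ** 2).sum())

rng = np.random.default_rng(0)
feats = rng.normal(size=(64, 8))
loss_same = semantic_stats_loss(feats, feats)        # identical stats -> 0
loss_diff = semantic_stats_loss(feats, feats + 1.0)  # shifted mean -> positive
```

In practice the feature vectors would come from a frozen pretrained classification network, so the penalty is computed in semantic rather than pixel space.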
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Self-supervision versus synthetic datasets: which is the lesser evil in the context of video denoising? [11.0189148044343]
Supervised training has led to state-of-the-art results in image and video denoising.
It requires large datasets of noisy-clean pairs that are difficult to obtain.
Some self-supervised frameworks have been proposed for training such denoising networks directly on the noisy data.
arXiv Detail & Related papers (2022-04-25T08:17:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.