Learning Model-Blind Temporal Denoisers without Ground Truths
- URL: http://arxiv.org/abs/2007.03241v2
- Date: Wed, 31 Mar 2021 13:51:22 GMT
- Title: Learning Model-Blind Temporal Denoisers without Ground Truths
- Authors: Yanghao Li, Bichuan Guo, Jiangtao Wen, Zhen Xia, Shan Liu, Yuxing Han
- Abstract summary: Denoisers trained with synthetic data often fail to cope with the diversity of unknown noises.
Previous image-based methods lead to noise overfitting if applied directly to video denoisers.
We propose a general framework for video denoising networks that successfully addresses these challenges.
- Score: 46.778450578529814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Denoisers trained with synthetic data often fail to cope with the diversity
of unknown noises, giving way to methods that can adapt to existing noise
without knowing its ground truth. The previous image-based method leads to noise
overfitting if applied directly to video denoisers, and manages temporal
information inadequately, especially with respect to occlusion and lighting
variation, which considerably hinders denoising performance. In this paper, we propose
a general framework for video denoising networks that successfully addresses
these challenges. A novel twin sampler assembles training data by decoupling
inputs from targets without altering semantics, which not only effectively
solves the noise overfitting problem, but also generates better occlusion masks
efficiently by checking optical flow consistency. An online denoising scheme
and a warping loss regularizer are employed for better temporal alignment.
Lighting variation is quantified based on the local similarity of aligned
frames. Our method consistently outperforms the prior art by 0.6-3.2dB PSNR on
multiple noises, datasets and network architectures. State-of-the-art results
on reducing model-blind video noises are achieved. Extensive ablation studies
are conducted to demonstrate the significance of each technical component.
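The occlusion masks mentioned above are obtained by checking optical flow consistency. A minimal sketch of the standard forward-backward consistency check is given below; the paper's exact formulation and thresholds are not specified here, so the function name and the `alpha`/`beta` defaults are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def occlusion_mask(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """Forward-backward optical flow consistency check (a common heuristic;
    thresholds here are illustrative, not taken from the paper).

    flow_fw, flow_bw: (H, W, 2) arrays of forward and backward flow.
    Returns a boolean mask, True where the flows are consistent,
    i.e. the pixel is likely visible (non-occluded) in both frames."""
    H, W, _ = flow_fw.shape
    ys, xs = np.mgrid[0:H, 0:W]

    # Where does each pixel land in the next frame under the forward flow?
    x2 = np.clip(xs + flow_fw[..., 0], 0, W - 1)
    y2 = np.clip(ys + flow_fw[..., 1], 0, H - 1)

    # Sample the backward flow at the landing position (nearest neighbour).
    bw = flow_bw[y2.round().astype(int), x2.round().astype(int)]

    # For consistent pixels, the forward flow followed by the backward flow
    # returns approximately to the start, so flow_fw + bw should be near 0.
    diff = np.sum((flow_fw + bw) ** 2, axis=-1)
    mag = np.sum(flow_fw ** 2, axis=-1) + np.sum(bw ** 2, axis=-1)
    return diff < alpha * mag + beta
```

Pixels where the round trip fails the threshold are treated as occluded and can be masked out of the warping loss.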
Related papers
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for estimating the noise level in low light images in a quick and accurate way.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Blind2Sound: Self-Supervised Image Denoising without Residual Noise [5.192255321684027]
Self-supervised blind denoising for Poisson-Gaussian noise remains a challenging task.
We propose Blind2Sound, a simple yet effective approach to overcome residual noise in denoised images.
arXiv Detail & Related papers (2023-03-09T11:21:59Z)
- IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z)
- Image Denoising using Attention-Residual Convolutional Neural Networks [0.0]
We propose a new learning-based non-blind denoising technique named Attention Residual Convolutional Neural Network (ARCNN), and its extension to blind denoising named Flexible Attention Residual Convolutional Neural Network (FARCNN).
ARCNN achieved overall average PSNR results of around 0.44 dB and 0.96 dB for Gaussian and Poisson denoising, respectively. FARCNN presented very consistent results, even with slightly worse performance compared to ARCNN.
arXiv Detail & Related papers (2021-01-19T16:37:57Z)
- Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, the input and target used to train the network are images sub-sampled from the same noisy image.
A denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance.
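The sub-sampling step described above can be sketched as follows. This is a minimal illustration assuming a grayscale image and 2x2 cells; the function name and random-selection details are assumptions for illustration, not the authors' released sampler.

```python
import numpy as np

def neighbor_subsample(noisy, rng=None):
    """Create an (input, target) pair from a single noisy grayscale image by
    picking two different pixels from each 2x2 cell, in the spirit of
    Neighbor2Neighbor. Returns two half-resolution images."""
    rng = np.random.default_rng() if rng is None else rng
    H, W = noisy.shape
    H2, W2 = H // 2, W // 2

    # Group the image into 2x2 cells: cells[i, j] holds the 4 pixels of cell (i, j).
    cells = noisy[:H2 * 2, :W2 * 2].reshape(H2, 2, W2, 2).transpose(0, 2, 1, 3)
    cells = cells.reshape(H2, W2, 4)

    # Choose two distinct positions per cell: one for the input, one for the target.
    idx_in = rng.integers(0, 4, size=(H2, W2))
    shift = rng.integers(1, 4, size=(H2, W2))
    idx_tg = (idx_in + shift) % 4  # guaranteed different from idx_in

    inp = np.take_along_axis(cells, idx_in[..., None], axis=-1)[..., 0]
    tgt = np.take_along_axis(cells, idx_tg[..., None], axis=-1)[..., 0]
    return inp, tgt
```

Because the two pixels in each cell are spatial neighbours, they share (approximately) the same clean signal but carry independent noise, which is what makes the pair usable as a noisy input/target for training.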
arXiv Detail & Related papers (2021-01-08T02:03:25Z)
- Noise2Kernel: Adaptive Self-Supervised Blind Denoising using a Dilated Convolutional Kernel Architecture [3.796436257221662]
We propose a dilated convolutional network that satisfies an invariant property, allowing efficient kernel-based training without random masking.
We also propose an adaptive self-supervision loss to circumvent the requirement of zero-mean constraint, which is specifically effective in removing salt-and-pepper or hybrid noise.
arXiv Detail & Related papers (2020-12-07T12:13:17Z)
- Adaptive noise imitation for image denoising [58.21456707617451]
We develop a new adaptive noise imitation (ADANI) algorithm that can synthesize noisy data from naturally noisy images.
To produce realistic noise, a noise generator takes unpaired noisy/clean images as input, where the noisy image is a guide for noise generation.
Coupling the noisy data output from ADANI with the corresponding ground-truth, a denoising CNN is then trained in a fully-supervised manner.
arXiv Detail & Related papers (2020-11-30T02:49:36Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.