Fully Unsupervised Diversity Denoising with Convolutional Variational
Autoencoders
- URL: http://arxiv.org/abs/2006.06072v2
- Date: Mon, 1 Mar 2021 12:28:08 GMT
- Title: Fully Unsupervised Diversity Denoising with Convolutional Variational
Autoencoders
- Authors: Mangal Prakash, Alexander Krull, Florian Jug
- Abstract summary: We propose DivNoising, a denoising approach based on fully convolutional variational autoencoders (VAEs).
First we introduce a principled way of formulating the unsupervised denoising problem within the VAE framework by explicitly incorporating imaging noise models into the decoder.
We show that such a noise model can either be measured, bootstrapped from noisy data, or co-learned during training.
- Score: 81.30960319178725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning based methods have emerged as the indisputable leaders for
virtually all image restoration tasks. Especially in the domain of microscopy
images, various content-aware image restoration (CARE) approaches are now used
to improve the interpretability of acquired data. Naturally, there are
limitations to what can be restored in corrupted images, and like for all
inverse problems, many potential solutions exist, and one of them must be
chosen. Here, we propose DivNoising, a denoising approach based on fully
convolutional variational autoencoders (VAEs), overcoming the problem of having
to choose a single solution by predicting a whole distribution of denoised
images. First we introduce a principled way of formulating the unsupervised
denoising problem within the VAE framework by explicitly incorporating imaging
noise models into the decoder. Our approach is fully unsupervised, only
requiring noisy images and a suitable description of the imaging noise
distribution. We show that such a noise model can either be measured,
bootstrapped from noisy data, or co-learned during training. If desired,
consensus predictions can be inferred from a set of DivNoising predictions,
leading to competitive results with other unsupervised methods and, on
occasion, even with the supervised state-of-the-art. DivNoising samples from
the posterior enable a plethora of useful applications. We are (i) showing
denoising results for 13 datasets, (ii) discussing how optical character
recognition (OCR) applications can benefit from diverse predictions, and are
(iii) demonstrating how instance cell segmentation improves when using diverse
DivNoising predictions.
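The core mechanics described above (a VAE whose decoder predicts a clean signal, with an explicit imaging-noise model supplying the likelihood, and a consensus taken over many posterior samples) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the real model is a fully convolutional network trained by gradient descent, whereas the linear maps and untrained random weights below only demonstrate the ELBO computation and the sample-then-average consensus step, assuming a known Gaussian noise model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 4x4 "image" and a 2-D latent space.
D, Z = 16, 2
noise_std = 0.1  # known Gaussian noise model p(x | s) = N(x; s, noise_std^2)

# Untrained toy encoder/decoder weights (illustrative only; DivNoising
# uses convolutional layers here).
W_mu = rng.normal(0, 0.1, (Z, D))
W_logvar = rng.normal(0, 0.1, (Z, D))
W_dec = rng.normal(0, 0.1, (D, Z))

def encode(x):
    return W_mu @ x, W_logvar @ x        # mean and log-variance of q(z|x)

def decode(z):
    return W_dec @ z                     # predicted clean signal s

def elbo(x):
    """Unsupervised DivNoising-style ELBO: the noise model enters the
    reconstruction term as an explicit Gaussian likelihood p(x | s)."""
    mu, logvar = encode(x)
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=Z)  # reparameterization
    s = decode(z)
    log_lik = -0.5 * np.sum((x - s) ** 2 / noise_std**2
                            + np.log(2 * np.pi * noise_std**2))
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return log_lik - kl

def diverse_denoise(x, n=100):
    """Draw many plausible clean images from the approximate posterior,
    then take their mean as a consensus (approximate MMSE) prediction."""
    mu, logvar = encode(x)
    zs = mu + np.exp(0.5 * logvar) * rng.normal(size=(n, Z))
    samples = zs @ W_dec.T
    return samples, samples.mean(axis=0)

x_noisy = rng.normal(0.5, noise_std, D)  # synthetic noisy observation
samples, consensus = diverse_denoise(x_noisy)
print(elbo(x_noisy), samples.shape, consensus.shape)
```

The noise model fixed here as `noise_std` is exactly the component the paper says can instead be measured, bootstrapped from noisy data, or co-learned during training.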
Related papers
- GradPaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z)
- Score Priors Guided Deep Variational Inference for Unsupervised Real-World Single Image Denoising [14.486289176696438]
We propose a score priors-guided deep variational inference, namely ScoreDVI, for practical real-world denoising.
We exploit a non-i.i.d. Gaussian mixture model and a variational noise posterior to model real-world noise.
Our method outperforms other single image-based real-world denoising methods and achieves comparable performance to dataset-based unsupervised methods.
arXiv Detail & Related papers (2023-08-09T03:26:58Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
- Enhancing convolutional neural network generalizability via low-rank weight approximation [6.763245393373041]
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
arXiv Detail & Related papers (2022-09-26T14:11:05Z)
- Deep Variation Prior: Joint Image Denoising and Noise Variance Estimation without Clean Data [2.3061446605472558]
This paper investigates the tasks of image denoising and noise variance estimation in a single, joint learning framework.
We build upon DVP, an unsupervised deep learning framework that simultaneously learns a denoiser and estimates noise variances.
Our method does not require any clean training images or an external step of noise estimation, and instead, approximates the minimum mean squared error denoisers using only a set of noisy images.
arXiv Detail & Related papers (2022-09-19T17:29:32Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits semantic features of pretrained classification networks, and then implicitly matches the probabilistic distribution of clear images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- CVF-SID: Cyclic multi-Variate Function for Self-Supervised Image Denoising by Disentangling Noise from Image [53.76319163746699]
We propose a novel and powerful self-supervised denoising method called CVF-SID.
CVF-SID can disentangle a clean image and noise maps from the input by leveraging various self-supervised loss terms.
It achieves state-of-the-art self-supervised image denoising performance and is comparable to other existing approaches.
arXiv Detail & Related papers (2022-03-24T11:59:28Z)
- Stochastic Image Denoising by Sampling from the Posterior Distribution [25.567566997688044]
We propose a novel denoising approach that produces viable and high quality results, while maintaining a small MSE.
Our method employs Langevin dynamics that relies on a repeated application of any given MMSE denoiser, obtaining the reconstructed image by effectively sampling from the posterior distribution.
Owing to its stochastic nature, the proposed algorithm can produce a variety of high-quality outputs for a given noisy input, all of which are legitimate denoising results.
arXiv Detail & Related papers (2021-01-23T18:28:19Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
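Posterior sampling recurs across this list, notably in the Langevin-dynamics entry above, which repeatedly applies an MMSE denoiser to sample reconstructions from the posterior. A minimal sketch of that idea follows, assuming a toy zero-mean Gaussian prior so the exact MMSE denoiser is simple linear shrinkage; the prior score is obtained from the denoiser via Tweedie's identity, and all names and step sizes are illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def mmse_denoiser(x, sigma):
    """Toy MMSE denoiser for a zero-mean Gaussian prior N(0, tau^2):
    the exact MMSE estimate is linear shrinkage toward zero."""
    tau = 1.0
    return x * tau**2 / (tau**2 + sigma**2)

def langevin_posterior_samples(y, sigma_y, n_samples=5, steps=200, step=1e-2):
    """Sample from p(x | y) with Langevin dynamics. The denoiser supplies
    the prior score via Tweedie's identity,
        grad log p_sigma(x) ~= (D(x, sigma) - x) / sigma**2,
    to which the Gaussian likelihood score (y - x) / sigma_y**2 is added."""
    samples = []
    for _ in range(n_samples):
        x = y.copy()
        for _ in range(steps):
            prior_score = (mmse_denoiser(x, sigma_y) - x) / sigma_y**2
            lik_score = (y - x) / sigma_y**2
            # Langevin update: drift along the posterior score plus noise,
            # so each chain ends at a different plausible clean signal.
            x = x + step * (prior_score + lik_score) \
                  + np.sqrt(2 * step) * rng.normal(size=x.shape)
        samples.append(x)
    return np.stack(samples)

y = rng.normal(0.0, 1.0, 8)              # synthetic noisy observation
samples = langevin_posterior_samples(y, sigma_y=0.5)
print(samples.shape)
```

As with DivNoising, averaging such samples yields an approximate MMSE consensus, while individual samples preserve the diversity of plausible solutions.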
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.