Direct Unsupervised Denoising
- URL: http://arxiv.org/abs/2310.18116v2
- Date: Mon, 4 Dec 2023 17:38:31 GMT
- Title: Direct Unsupervised Denoising
- Authors: Benjamin Salmon and Alexander Krull
- Abstract summary: Unsupervised denoisers do not directly produce a single prediction, such as the MMSE estimate.
We present an alternative approach that trains a deterministic network alongside the VAE to directly predict a central tendency.
- Score: 60.71146161035649
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Traditional supervised denoisers are trained using pairs of noisy input and
clean target images. They learn to predict a central tendency of the posterior
distribution over possible clean images. When trained with the popular
quadratic loss function, for example, the network's output will correspond to the minimum
mean square error (MMSE) estimate. Unsupervised denoisers based on Variational
AutoEncoders (VAEs) have succeeded in achieving state-of-the-art results while
requiring only unpaired noisy data as training input. In contrast to the
traditional supervised approach, unsupervised denoisers do not directly produce
a single prediction, such as the MMSE estimate, but allow us to draw samples
from the posterior distribution of clean solutions corresponding to the noisy
input. To approximate the MMSE estimate during inference, unsupervised methods
have to draw a large number of samples - a computationally expensive
process - rendering the approach inapplicable in many situations. Here, we
present an alternative approach that trains a deterministic network alongside
the VAE to directly predict a central tendency. Our method achieves results
that surpass those of the sampling-based unsupervised approach at a fraction of
the computational cost.
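The computational trade-off described in the abstract is easy to see in a sketch. Below is a minimal, hypothetical PyTorch illustration (the `vae.sample_posterior` interface and the `student` network are my assumptions, not the authors' API): approximating the MMSE estimate by averaging many posterior samples, versus training a deterministic network whose quadratic-loss regression onto posterior samples drives it toward the posterior mean.

```python
import torch

@torch.no_grad()
def mmse_by_sampling(vae, noisy, n_samples=1000):
    # Sampling-based inference: draw many clean-image samples from the
    # VAE's posterior and average them. The sample mean converges to the
    # MMSE estimate, but costs n_samples decoder passes per image.
    samples = [vae.sample_posterior(noisy) for _ in range(n_samples)]
    return torch.stack(samples).mean(dim=0)

def direct_prediction_step(vae, student, noisy, optimizer):
    # Direct inference: train a deterministic "student" network to output
    # the central tendency in a single pass. Regressing with a quadratic
    # loss onto posterior samples pushes the student toward the posterior
    # mean, since the MSE-optimal prediction is the mean of its targets.
    with torch.no_grad():
        target = vae.sample_posterior(noisy)  # one stochastic sample per step
    loss = torch.mean((student(noisy) - target) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time, denoising then costs a single forward pass of the deterministic network instead of hundreds or thousands of decoder passes.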
Related papers
- Observation-Guided Diffusion Probabilistic Models [41.749374023639156]
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM).
Our approach reestablishes the training objective by integrating the guidance of the observation process with the Markov chain.
We demonstrate the effectiveness of our training algorithm using diverse inference techniques on strong diffusion model baselines.
arXiv Detail & Related papers (2023-10-06T06:29:06Z)
- Self-supervised Image Denoising with Downsampled Invariance Loss and Conditional Blind-Spot Network [12.478287906337194]
Most representative self-supervised denoisers are based on blind-spot networks.
A standard blind-spot network fails to reduce real camera noise due to the pixel-wise correlation of noise.
We propose a novel self-supervised training framework that can remove real noise.
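For context, a standard blind-spot network predicts each pixel from its surroundings while withholding the pixel itself, so noise that is independent across pixels cannot simply be copied to the output. A minimal Noise2Void-style sketch of that baseline (the corruption scheme is my simplification, not the paper's conditional architecture):

```python
import torch

def blind_spot_loss(net, noisy, mask_frac=0.02):
    # Hide a random subset of pixels and train the network to predict
    # them from surrounding context only.
    mask = (torch.rand_like(noisy) < mask_frac).float()
    corrupted = noisy * (1 - mask) + torch.randn_like(noisy) * mask
    pred = net(corrupted)
    # Supervise only the masked pixels. This removes noise only if it is
    # pixel-wise independent; spatially correlated camera noise is
    # predictable from neighbours and therefore survives.
    return ((pred - noisy) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
```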
arXiv Detail & Related papers (2023-04-19T08:55:27Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Denoising diffusion models for out-of-distribution detection [2.113925122479677]
We exploit the view of denoising diffusion probabilistic models (DDPMs) as denoising autoencoders.
We use DDPMs to reconstruct an input that has been noised to a range of noise levels, and use the resulting multi-dimensional reconstruction error to classify out-of-distribution inputs.
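A hedged sketch of that pipeline (the `add_noise`/`reconstruct` interface below is assumed for illustration, not taken from the paper's code):

```python
import torch

@torch.no_grad()
def reconstruction_error_profile(ddpm, x, noise_levels=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # Noise the input to several diffusion levels, reconstruct with the
    # DDPM, and record the per-level reconstruction error. The resulting
    # error vector is the multi-dimensional feature used to separate
    # in-distribution from out-of-distribution inputs.
    errors = []
    for t in noise_levels:
        noised = ddpm.add_noise(x, t)        # forward diffusion to level t
        recon = ddpm.reconstruct(noised, t)  # reverse diffusion back to a clean image
        errors.append(torch.mean((recon - x) ** 2, dim=(1, 2, 3)))
    return torch.stack(errors, dim=1)        # shape: (batch, n_levels)
```

A simple threshold or classifier on this error profile then flags out-of-distribution inputs.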
arXiv Detail & Related papers (2022-11-14T20:35:11Z)
- Evaluating Unsupervised Denoising Requires Unsupervised Metrics [16.067013621304348]
Unsupervised deep-learning methods have demonstrated impressive performance on benchmarks based on synthetic noise.
However, no metrics are available to evaluate these methods in an unsupervised fashion.
We propose two novel metrics: the unsupervised mean squared error (MSE) and the unsupervised peak signal-to-noise ratio (PSNR).
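One way to see how such metrics are possible: for zero-mean noise independent of the signal, the expected squared distance between a denoised image and an independent noisy copy equals the true MSE plus the noise variance, and that variance can itself be estimated from two noisy copies. The sketch below illustrates this principle; it is my hedged reconstruction, not necessarily the paper's exact estimator.

```python
import torch

def unsupervised_mse(denoised, noisy_b, noisy_c):
    # Assumes noisy_b and noisy_c are independent noisy copies of the same
    # clean image with zero-mean noise. Then
    #   E[(denoised - b)^2] = MSE + sigma^2   and   E[(b - c)^2] = 2*sigma^2,
    # so subtracting half the copy-to-copy error cancels the variance term.
    cross = torch.mean((denoised - noisy_b) ** 2)
    noise_var = 0.5 * torch.mean((noisy_b - noisy_c) ** 2)
    return cross - noise_var

def unsupervised_psnr(umse, max_val=1.0):
    # PSNR derived from the unsupervised MSE estimate.
    return 10.0 * torch.log10(max_val ** 2 / umse.clamp(min=1e-12))
```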
arXiv Detail & Related papers (2022-10-11T15:48:54Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the Importance-Guided Stochastic Gradient Descent (IGSGD) method to perform inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
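In outline: pick a layer of the monitored network, fit a density model to its activations on in-distribution data, and flag inputs whose activations receive low likelihood. A simplified sketch with a diagonal Gaussian standing in for the paper's normalizing flow:

```python
import math
import torch

class ActivationDensityDetector:
    # Simplified stand-in for the idea: model the distribution of one
    # layer's activations on clean training data, then score new inputs
    # by log-likelihood. A diagonal Gaussian replaces the normalizing
    # flow used in the paper, purely for brevity.
    def fit(self, acts):                      # acts: (n, d) activations
        self.mean = acts.mean(dim=0)
        self.var = acts.var(dim=0).clamp(min=1e-6)

    def log_prob(self, acts):                 # higher = more in-distribution
        z2 = (acts - self.mean) ** 2 / self.var
        return -0.5 * (z2 + self.var.log() + math.log(2 * math.pi)).sum(dim=1)

    def is_anomalous(self, acts, threshold):
        return self.log_prob(acts) < threshold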
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
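A hedged sketch of that selection signal (the Jensen-Shannon divergence and the way the scores are combined here are my illustrative choices, not necessarily the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def clean_likelihood(model, view1, view2, labels):
    # Predictions from two augmented views of each sample; agreement
    # between the views, together with confidence in the given label,
    # serves as a proxy for the sample being clean rather than
    # mislabeled or out-of-distribution.
    p1 = F.softmax(model(view1), dim=1)
    p2 = F.softmax(model(view2), dim=1)
    m = 0.5 * (p1 + p2)
    # Jensen-Shannon divergence between the two views' predictions.
    js = 0.5 * (F.kl_div(m.log(), p1, reduction="none").sum(dim=1)
                + F.kl_div(m.log(), p2, reduction="none").sum(dim=1))
    label_conf = m.gather(1, labels.unsqueeze(1)).squeeze(1)
    return label_conf * (1.0 - js.clamp(max=1.0))  # high = likely clean
```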
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
- Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders [81.30960319178725]
We propose DivNoising, a denoising approach based on fully convolutional variational autoencoders (VAEs).
First, we introduce a principled way of formulating the unsupervised denoising problem within the VAE framework by explicitly incorporating imaging noise models into the decoder.
We show that such a noise model can either be measured, bootstrapped from noisy data, or co-learned during training.
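The structural idea in a hedged sketch, using a fixed Gaussian noise model for brevity (the paper also supports measured, bootstrapped, or co-learned noise models):

```python
import torch

def divnoising_style_loss(encoder, decoder, noisy, noise_std):
    # The decoder outputs an estimate of the *clean* signal; an explicit
    # imaging noise model then gives the likelihood of the observed
    # noisy pixels under that signal.
    mu, log_var = encoder(noisy)
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization
    clean_est = decoder(z)
    # Gaussian noise model p(noisy | clean) with known std; this is the
    # piece that could instead be measured or co-learned.
    recon_nll = 0.5 * ((noisy - clean_est) / noise_std) ** 2
    kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp())
    return recon_nll.mean() + kl.mean()
```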
arXiv Detail & Related papers (2020-06-10T21:28:13Z)
- Self-Supervised Fast Adaptation for Denoising via Meta-Learning [28.057705167363327]
We propose a new denoising approach that can greatly outperform the state-of-the-art supervised denoising methods.
We show that the proposed method can be easily employed with state-of-the-art denoising networks without additional parameters.
arXiv Detail & Related papers (2020-01-09T09:40:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.