Deep Gaussian Denoiser Epistemic Uncertainty and Decoupled
Dual-Attention Fusion
- URL: http://arxiv.org/abs/2101.04631v2
- Date: Fri, 22 Jan 2021 11:05:15 GMT
- Title: Deep Gaussian Denoiser Epistemic Uncertainty and Decoupled
Dual-Attention Fusion
- Authors: Xiaoqi Ma, Xiaoyu Lin, Majed El Helou, Sabine Süsstrunk
- Abstract summary: We focus on pushing the performance limits of state-of-the-art methods on Gaussian denoising.
We propose a model-agnostic approach for reducing epistemic uncertainty while using only a single pretrained network.
Our results significantly improve over the state-of-the-art baselines across varying noise levels.
- Score: 11.085432358616671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Following the performance breakthrough of denoising networks, improvements
have come chiefly through novel architecture designs and increased depth. While
novel denoising networks were designed for real images coming from different
distributions, or for specific applications, comparatively little improvement
has been achieved on Gaussian denoising. These denoising solutions suffer from
epistemic uncertainty that can limit further advancements. This uncertainty is
traditionally mitigated through different ensemble approaches. However, such
ensembles are prohibitively costly with deep networks, which are already large
in size.
Our work focuses on pushing the performance limits of state-of-the-art
methods on Gaussian denoising. We propose a model-agnostic approach for
reducing epistemic uncertainty while using only a single pretrained network. We
achieve this by tapping into the epistemic uncertainty through augmented and
frequency-manipulated images to obtain denoised images with varying error. We
propose an ensemble method with two decoupled attention paths, over the pixel
domain and over that of our different manipulations, to learn the final fusion.
Our results significantly improve over the state-of-the-art baselines across
varying noise levels.
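The pipeline described in the abstract can be pictured with a short sketch: create geometrically augmented and frequency-manipulated copies of the noisy input, denoise every copy with the same pretrained network, and let two decoupled attention paths (one over pixels, one over the candidate axis) learn the fusion. The PyTorch code below is only a minimal illustration of that idea; the specific manipulations, the placeholder `denoiser`, and the tiny fusion layers are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch: augmented and frequency-manipulated variants of a noisy
# image, denoised by a single pretrained network and fused with two decoupled
# attention paths (per-pixel and per-candidate). Illustrative only.
import torch
import torch.nn as nn
import torch.fft


def make_variants(noisy: torch.Tensor):
    """Geometric augmentations plus a simple frequency manipulation (assumed)."""
    variants = [
        noisy,
        torch.flip(noisy, dims=[-1]),            # horizontal flip
        torch.rot90(noisy, k=1, dims=[-2, -1]),  # 90-degree rotation
    ]
    # Crude low-pass manipulation in the Fourier domain (illustrative only).
    spec = torch.fft.fftshift(torch.fft.fft2(noisy), dim=(-2, -1))
    h, w = noisy.shape[-2:]
    mask = torch.zeros_like(spec.real)
    mask[..., h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0
    lowpass = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
    variants.append(lowpass)
    return variants


def undo(idx: int, img: torch.Tensor) -> torch.Tensor:
    """Map each denoised variant back to the original orientation."""
    if idx == 1:
        return torch.flip(img, dims=[-1])
    if idx == 2:
        return torch.rot90(img, k=-1, dims=[-2, -1])
    return img


class DualAttentionFusion(nn.Module):
    """Two decoupled attention paths: per-pixel weights and per-candidate weights."""

    def __init__(self, num_candidates: int):
        super().__init__()
        self.pixel_att = nn.Sequential(
            nn.Conv2d(num_candidates, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_candidates, 3, padding=1),
        )
        self.cand_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(num_candidates, num_candidates),
        )

    def forward(self, candidates: torch.Tensor) -> torch.Tensor:
        # candidates: (B, K, H, W) stack of denoised grayscale estimates
        w_pix = torch.softmax(self.pixel_att(candidates), dim=1)   # (B, K, H, W)
        w_cand = torch.softmax(self.cand_att(candidates), dim=1)   # (B, K)
        weights = w_pix * w_cand[:, :, None, None]
        weights = weights / weights.sum(dim=1, keepdim=True)
        return (weights * candidates).sum(dim=1, keepdim=True)     # (B, 1, H, W)


def fuse(noisy: torch.Tensor, denoiser: nn.Module, fusion: DualAttentionFusion) -> torch.Tensor:
    """Denoise every variant with the same frozen pretrained network, then fuse."""
    with torch.no_grad():
        outs = [undo(i, denoiser(v)) for i, v in enumerate(make_variants(noisy))]
    return fusion(torch.cat(outs, dim=1))  # assumes single-channel outputs
```

Under this sketch, only the fusion module would be trained (e.g., with a reconstruction loss against clean targets) while the pretrained denoiser stays frozen, matching the single-network, model-agnostic setting described above.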
Related papers
- Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy [44.09909260046396]
We propose AdaptiveDiffusion to reduce noise prediction steps during the denoising process.
Our method can significantly speed up the denoising process while generating identical results to the original process, achieving up to an average 25x speedup.
arXiv Detail & Related papers (2024-10-13T15:19:18Z)
- Enhancing convolutional neural network generalizability via low-rank weight approximation [6.763245393373041]
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
arXiv Detail & Related papers (2022-09-26T14:11:05Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits the semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clear images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find that our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- Zero-shot Blind Image Denoising via Implicit Neural Representations [77.79032012459243]
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs); a minimal sketch of this idea appears after this list.
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z)
- Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis [148.16279746287452]
We propose a swin-conv block to combine the local modeling ability of the residual convolutional layer with the non-local modeling ability of the Swin Transformer block.
For the training data synthesis, we design a practical noise degradation model which takes into consideration different kinds of noise.
Experiments on AWGN removal and real image denoising demonstrate that the new network architecture design achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-03-24T18:11:31Z)
- Dynamic Dual-Output Diffusion Models [100.32273175423146]
Iterative denoising-based generation has been shown to be comparable in quality to other classes of generative models.
A major drawback of this method is that it requires hundreds of iterations to produce a competitive result.
Recent works have proposed solutions that allow for faster generation with fewer iterations, but the image quality gradually deteriorates.
arXiv Detail & Related papers (2022-03-08T11:20:40Z)
- Exploring ensembles and uncertainty minimization in denoising networks [0.522145960878624]
We propose a fusion model consisting of two attention modules, which focus on assigning the proper weights to pixels and channels.
The experimental results show that our model achieves better performance on top of the baseline of regular pre-trained denoising networks.
arXiv Detail & Related papers (2021-01-24T20:48:18Z)
- Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
Specifically, the input and target used to train the network are images sub-sampled from the same noisy image.
A denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance.
arXiv Detail & Related papers (2021-01-08T02:03:25Z)
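As a companion to the Neighbor2Neighbor summary above, here is a minimal sketch of the sub-sampling step it describes: an input/target pair drawn from different pixels of each 2x2 cell of the same noisy image. The random per-cell selection and the omission of the paper's regularizer are simplifications for illustration.

```python
# Minimal sketch of neighbor sub-sampling: split the noisy image into 2x2
# cells and draw two different pixels per cell to form an (input, target)
# training pair. Simplified stand-in for the paper's sub-samplers; the
# additional regularization loss is omitted.
import torch


def neighbor_subsample(noisy: torch.Tensor):
    """noisy: (B, C, H, W) with H and W even -> two (B, C, H/2, W/2) images."""
    b, c, h, w = noisy.shape
    cells = noisy.unfold(2, 2, 2).unfold(3, 2, 2)           # (B, C, H/2, W/2, 2, 2)
    cells = cells.reshape(b, c, h // 2, w // 2, 4)          # 4 pixels per cell
    # Pick two *different* pixel indices per cell (shared across channels).
    perm = torch.rand(b, 1, h // 2, w // 2, 4, device=noisy.device).argsort(dim=-1)
    idx1, idx2 = perm[..., :1], perm[..., 1:2]              # (B, 1, H/2, W/2, 1)
    sub1 = torch.gather(cells, -1, idx1.expand(b, c, h // 2, w // 2, 1)).squeeze(-1)
    sub2 = torch.gather(cells, -1, idx2.expand(b, c, h // 2, w // 2, 1)).squeeze(-1)
    return sub1, sub2


# Usage: train a denoiser f so that f(sub1) approximates sub2, e.g.
# loss = torch.nn.functional.mse_loss(denoiser(sub1), sub2)
```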
- Noise2Kernel: Adaptive Self-Supervised Blind Denoising using a Dilated Convolutional Kernel Architecture [3.796436257221662]
We propose a dilated convolutional network that satisfies an invariant property, allowing efficient kernel-based training without random masking.
We also propose an adaptive self-supervision loss to circumvent the zero-mean noise constraint; it is particularly effective at removing salt-and-pepper or hybrid noise.
arXiv Detail & Related papers (2020-12-07T12:13:17Z)
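The Noise2Kernel entry above trains without random masking by making the network architecturally blind to the center pixel. The toy sketch below illustrates that general blind-spot idea with a centrally masked convolution followed by 1x1 layers; it is not the paper's dilated architecture, and the adaptive self-supervision loss is omitted.

```python
# Toy blind-spot-style network: the first convolution uses a centrally masked
# ("donut") kernel, so each output pixel never sees the corresponding input
# pixel, and the following 1x1 layers preserve that property. This keeps
# training mask-free, but it is only a simplified illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DonutConv2d(nn.Conv2d):
    """Convolution whose center tap is forced to zero."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.ones_like(self.weight)
        mask[:, :, self.kernel_size[0] // 2, self.kernel_size[1] // 2] = 0.0
        return F.conv2d(x, self.weight * mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)


class TinyBlindSpotNet(nn.Module):
    def __init__(self, channels: int = 1, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            DonutConv2d(channels, width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=1),   # 1x1 layers keep the blind spot
            nn.ReLU(),
            nn.Conv2d(width, channels, kernel_size=1),
        )

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        return self.net(noisy)


# Self-supervised training then reduces to predicting each noisy pixel from
# its neighbours, e.g. loss = F.mse_loss(model(noisy), noisy).
```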
- Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation [52.75909685172843]
Real-world image noise removal is a long-standing yet very challenging task in computer vision.
We propose a novel unified framework to deal with the noise removal and noise generation tasks.
Our method learns the joint distribution of the clean-noisy image pairs.
arXiv Detail & Related papers (2020-07-12T09:16:06Z)
- Learning Model-Blind Temporal Denoisers without Ground Truths [46.778450578529814]
Denoisers trained with synthetic data often fail to cope with the diversity of unknown noises.
Previous image-based methods lead to noise overfitting if directly applied to video denoising.
We propose a general framework for video denoising networks that successfully addresses these challenges.
arXiv Detail & Related papers (2020-07-07T07:19:48Z)
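Following up on the zero-shot INR entry above, here is a minimal sketch of the underlying idea: fit a small coordinate MLP to a single noisy image and rely on its inductive bias, approximated here simply by limited capacity and early stopping, to favor the clean signal over the noise. The network size, learning rate, and step count are arbitrary illustrative choices, not the paper's settings.

```python
# Minimal sketch of zero-shot denoising with an implicit neural representation:
# fit a small coordinate MLP to one noisy image and stop early.
import torch
import torch.nn as nn


def denoise_with_inr(noisy: torch.Tensor, steps: int = 500, lr: float = 1e-3) -> torch.Tensor:
    """noisy: (H, W) grayscale image in [0, 1]; returns the INR's reconstruction."""
    h, w = noisy.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)   # (H*W, 2) pixel coordinates
    target = noisy.reshape(-1, 1)                           # (H*W, 1) noisy intensities

    inr = nn.Sequential(                                    # small coordinate MLP
        nn.Linear(2, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 1), nn.Sigmoid(),
    )
    opt = torch.optim.Adam(inr.parameters(), lr=lr)

    for _ in range(steps):                                  # early stopping acts as
        opt.zero_grad()                                     # the implicit regularizer
        loss = nn.functional.mse_loss(inr(coords), target)
        loss.backward()
        opt.step()

    with torch.no_grad():
        return inr(coords).reshape(h, w)
```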
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.