Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising
- URL: http://arxiv.org/abs/2010.11971v1
- Date: Thu, 22 Oct 2020 18:12:26 GMT
- Title: Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising
- Authors: Yaochen Xie, Zhengyang Wang, Shuiwang Ji
- Abstract summary: We introduce Noise2Same, a novel self-supervised denoising framework.
In particular, Noise2Same requires neither J-invariance nor extra information about the noise model.
Our results show that Noise2Same remarkably outperforms previous self-supervised denoising methods.
- Score: 54.730707387866076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised frameworks that learn denoising models with merely individual
noisy images have shown strong capability and promising performance in various
image denoising tasks. Existing self-supervised denoising frameworks are mostly
built upon the same theoretical foundation, where the denoising models are
required to be J-invariant. However, our analyses indicate that the current
theory and the J-invariance may lead to denoising models with reduced
performance. In this work, we introduce Noise2Same, a novel self-supervised
denoising framework. In Noise2Same, a new self-supervised loss is proposed by
deriving a self-supervised upper bound of the typical supervised loss. In
particular, Noise2Same requires neither J-invariance nor extra information
about the noise model and can be used in a wider range of denoising
applications. We analyze our proposed Noise2Same both theoretically and
experimentally. The experimental results show that our Noise2Same remarkably
outperforms previous self-supervised denoising methods in terms of denoising
performance and training efficiency. Our code is available at
https://github.com/divelab/Noise2Same.
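Based on the abstract's description, the objective combines a direct reconstruction term with an invariance term derived from the self-supervised upper bound. The sketch below is an illustrative reconstruction of that kind of loss, not the authors' reference implementation; the Gaussian masking scheme, the `lam` weight, and the square-root coupling are assumptions for illustration.

```python
import numpy as np

def noise2same_loss(f, x, mask, lam=2.0, seed=0):
    """Hedged sketch of a Noise2Same-style objective.

    f    : denoising function mapping an image array to an image array
    x    : noisy image, shape (H, W)
    mask : boolean array selecting the masked pixel subset J
    lam  : weight of the invariance term (illustrative value)
    """
    m = x.size
    # Reconstruction term: mean squared difference between f(x) and x itself.
    rec = np.sum((f(x) - x) ** 2) / m
    # Invariance term: compare f on the original vs. a masked input,
    # evaluated only on the masked pixels J (no J-invariance is enforced
    # on f itself; the term only penalizes the discrepancy).
    x_masked = x.copy()
    x_masked[mask] = np.random.default_rng(seed).normal(size=int(mask.sum()))
    inv = np.sum((f(x)[mask] - f(x_masked)[mask]) ** 2) / mask.sum()
    return rec + lam * np.sqrt(inv)
```

With a hypothetical denoiser such as `f = lambda a: 0.9 * a`, both terms are nonnegative, so the loss is always finite and nonnegative.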
Related papers
- Low-Trace Adaptation of Zero-shot Self-supervised Blind Image Denoising [23.758547513866766]
We propose a trace-constraint loss function and low-trace adaptation Noise2Noise (LoTA-N2N) model to bridge the gap between self-supervised and supervised learning.
Our method achieves state-of-the-art performance within the realm of zero-shot self-supervised image denoising approaches.
arXiv Detail & Related papers (2024-03-19T02:47:33Z)
- Deep Variation Prior: Joint Image Denoising and Noise Variance Estimation without Clean Data [2.3061446605472558]
This paper investigates the tasks of image denoising and noise variance estimation in a single, joint learning framework.
We build upon DVP, an unsupervised deep learning framework, that simultaneously learns a denoiser and estimates noise variances.
Our method does not require any clean training images or an external step of noise estimation, and instead, approximates the minimum mean squared error denoisers using only a set of noisy images.
arXiv Detail & Related papers (2022-09-19T17:29:32Z)
- Noise2SR: Learning to Denoise from Super-Resolved Single Noisy Fluorescence Image [9.388253054229155]
Noise2SR is designed for training with paired noisy images of different dimensions.
It is more efficiently self-supervised and able to restore more image details from a single noisy observation.
We envision that Noise2SR has the potential to improve the quality of other kinds of scientific imaging.
arXiv Detail & Related papers (2022-09-14T04:44:41Z)
- Noise2NoiseFlow: Realistic Camera Noise Modeling without Clean Images [35.29066692454865]
This paper proposes a framework for training a noise model and a denoiser simultaneously.
It relies on pairs of noisy images rather than noisy/clean paired image data.
The trained denoiser is shown to significantly improve upon both supervised and weakly supervised baseline denoising approaches.
arXiv Detail & Related papers (2022-06-02T15:31:40Z)
- Zero-shot Blind Image Denoising via Implicit Neural Representations [77.79032012459243]
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs).
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z)
- IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z)
- Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, input and target used to train a network are images sub-sampled from the same noisy image.
A denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance.
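The sub-sampling step described above can be sketched as follows. This is a toy illustration of the idea in the summary (pick two different pixels from each local cell of the same noisy image to form an input/target pair), not the authors' code; the 2x2 cell size and random pixel choice are assumptions.

```python
import numpy as np

def neighbor_subsample(noisy, seed=0):
    """Toy Neighbor2Neighbor-style pair generation.

    Splits a noisy image into 2x2 cells and randomly picks two distinct
    pixels from each cell, yielding two half-resolution sub-images that
    can serve as a noisy input/target training pair.
    """
    rng = np.random.default_rng(seed)
    h, w = noisy.shape
    h2, w2 = h // 2, w // 2
    # Gather each 2x2 cell's four pixels into the last axis: (h2, w2, 4).
    cells = (noisy[:h2 * 2, :w2 * 2]
             .reshape(h2, 2, w2, 2)
             .transpose(0, 2, 1, 3)
             .reshape(h2, w2, 4))
    # Independently shuffle the four positions within every cell, then take
    # the first two: two distinct pixels per cell.
    idx = rng.permuted(np.tile(np.arange(4), (h2, w2, 1)), axis=2)
    g1 = np.take_along_axis(cells, idx[..., :1], axis=2)[..., 0]
    g2 = np.take_along_axis(cells, idx[..., 1:2], axis=2)[..., 0]
    return g1, g2
```

Because the two sub-images come from neighboring pixels of the same noisy capture, their underlying signal is nearly identical while their noise realizations are independent, which is what makes them usable as a Noise2Noise-style training pair.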
arXiv Detail & Related papers (2021-01-08T02:03:25Z)
- Unpaired Learning of Deep Image Denoising [80.34135728841382]
This paper presents a two-stage scheme by incorporating self-supervised learning and knowledge distillation.
For self-supervised learning, we suggest a dilated blind-spot network (D-BSN) to learn denoising solely from real noisy images.
Experiments show that our unpaired learning method performs favorably on both synthetic noisy images and real-world noisy photographs.
arXiv Detail & Related papers (2020-08-31T16:22:40Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.