Self-Supervised Fast Adaptation for Denoising via Meta-Learning
- URL: http://arxiv.org/abs/2001.02899v1
- Date: Thu, 9 Jan 2020 09:40:53 GMT
- Title: Self-Supervised Fast Adaptation for Denoising via Meta-Learning
- Authors: Seunghwan Lee, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
- Abstract summary: We propose a new denoising approach that can greatly outperform the state-of-the-art supervised denoising methods.
We show that the proposed method can be easily employed with state-of-the-art denoising networks without additional parameters.
- Score: 28.057705167363327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Under certain statistical assumptions of noise, recent self-supervised
approaches for denoising have been introduced to learn network parameters
without true clean images, and these methods can restore an image by exploiting
information available from the given input (i.e., internal statistics) at test
time. However, self-supervised methods are not yet combined with conventional
supervised denoising methods which train the denoising networks with a large
number of external training samples. Thus, we propose a new denoising approach
that can greatly outperform the state-of-the-art supervised denoising methods
by adapting their network parameters to the given input through self-supervision
without changing the network architectures. Moreover, we propose a
meta-learning algorithm to enable quick adaptation of parameters to the
specific input at test time. We demonstrate that the proposed method can be
easily employed with state-of-the-art denoising networks without additional
parameters, and achieve state-of-the-art performance on numerous benchmark
datasets.
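
A minimal PyTorch sketch of the recipe the abstract describes: adapt a pretrained denoiser to the test input with a self-supervised loss, and meta-learn the initial weights so that a few adaptation steps suffice. The masked-reconstruction loss and the first-order (Reptile-style) meta-update below are illustrative assumptions, not the paper's exact algorithm; `denoiser` stands for any image-to-image network.

```python
# Sketch of meta-learned test-time adaptation for denoising (assumptions:
# Noise2Self-style masked reconstruction as the self-supervised loss and a
# first-order Reptile-style meta-update; the paper's choices may differ).
import copy
import torch
import torch.nn.functional as F

def masked_self_loss(net, noisy, mask_ratio=0.05):
    """Self-supervised loss: hide random pixels, predict them from context."""
    mask = (torch.rand_like(noisy) < mask_ratio).float()
    pred = net(noisy * (1.0 - mask))              # blind the selected pixels
    return F.mse_loss(pred * mask, noisy * mask)  # score only hidden pixels

def adapt(net, noisy, steps=3, lr=1e-4):
    """Inner loop: adapt a copy of the network to one noisy input."""
    adapted = copy.deepcopy(net)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        masked_self_loss(adapted, noisy).backward()
        opt.step()
    return adapted

def meta_train_step(net, noisy_batch, meta_lr=0.1):
    """Outer loop: pull the initial weights toward the weights reached by a
    few steps of self-supervised adaptation (first-order meta-learning)."""
    for noisy in noisy_batch:
        adapted = adapt(net, noisy.unsqueeze(0))
        with torch.no_grad():
            for p, q in zip(net.parameters(), adapted.parameters()):
                p += meta_lr * (q - p)

# Test time: denoised = adapt(net, noisy_image)(noisy_image)
```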
Related papers
- Direct Unsupervised Denoising [60.71146161035649]
Unsupervised denoisers do not directly produce a single prediction, such as the MMSE estimate.
We present an alternative approach that trains a deterministic network alongside the VAE to directly predict a central tendency.
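A hedged sketch of one way to read this: regress a deterministic network onto a Monte Carlo estimate of the MMSE solution obtained by sampling the VAE-based denoiser. `vae_denoiser.sample` is a hypothetical interface, and the paper trains the two networks alongside each other rather than in two stages.

```python
import torch
import torch.nn.functional as F

def central_tendency_loss(student, vae_denoiser, noisy, n_samples=8):
    """Distill a single prediction from a stochastic unsupervised denoiser."""
    with torch.no_grad():
        samples = torch.stack([vae_denoiser.sample(noisy)  # hypothetical API
                               for _ in range(n_samples)])
        target = samples.mean(dim=0)           # Monte Carlo MMSE estimate
    return F.mse_loss(student(noisy), target)
```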
arXiv Detail & Related papers (2023-10-27T13:02:12Z)
- Self2Self+: Single-Image Denoising with Self-Supervised Learning and Image Quality Assessment Loss [4.035753155957699]
The proposed method achieves state-of-the-art denoising performance on both synthetic and real-world datasets.
This highlights the effectiveness and practicality of our method as a potential solution for various noise removal tasks.
arXiv Detail & Related papers (2023-07-20T08:38:01Z)
- Enhancing convolutional neural network generalizability via low-rank weight approximation [6.763245393373041]
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
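A brief sketch of the named building block, a Tucker low-rank approximation of a convolution's 4-D weight tensor, using TensorLy; the ranks are illustrative and the paper's full self-supervised framework is not reproduced here.

```python
import torch
import tensorly as tl
from tensorly.decomposition import tucker

tl.set_backend('pytorch')

def low_rank_weight(conv, ranks=(8, 8, 3, 3)):
    """Tucker-factorize conv.weight (out_ch, in_ch, kH, kW) and re-expand."""
    core, factors = tucker(conv.weight.data, rank=list(ranks))
    return tl.tucker_to_tensor((core, factors))

# Example: project a layer onto its low-rank approximation in place.
# with torch.no_grad():
#     conv.weight.copy_(low_rank_weight(conv))
```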
arXiv Detail & Related papers (2022-09-26T14:11:05Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clear images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
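A hedged stand-in for the idea: compare denoised and reference images through a frozen pretrained classifier and match simple statistics of the feature maps. The paper's probabilistic distribution matching is more involved; the VGG-16 extractor and moment matching here are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

features = vgg16(weights='DEFAULT').features[:16].eval()  # frozen extractor
for p in features.parameters():
    p.requires_grad_(False)

def semantic_stats_loss(denoised, clean):
    """Match first and second moments of semantic feature maps."""
    fd, fc = features(denoised), features(clean)
    return (F.mse_loss(fd.mean(dim=(2, 3)), fc.mean(dim=(2, 3))) +
            F.mse_loss(fd.std(dim=(2, 3)), fc.std(dim=(2, 3))))
```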
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
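A compact sketch of one refinement round as the summary describes it: the previous round's model supplies cleaner pseudo-targets, and the known noise model re-corrupts them into training pairs. `add_noise` is the user-supplied noise model; the paper's round schedule is omitted.

```python
import torch
import torch.nn.functional as F

def idr_round(prev_model, model, opt, noisy_images, add_noise):
    """One iterative-data-refinement round (sketch)."""
    for noisy in noisy_images:
        with torch.no_grad():
            target = prev_model(noisy)     # less noisy pseudo ground truth
        opt.zero_grad()
        F.mse_loss(model(add_noise(target)), target).backward()
        opt.step()
```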
arXiv Detail & Related papers (2021-11-29T07:22:53Z)
- Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
The introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements of state-of-the-art models on semi-supervised image classification.
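A minimal sketch of the underlying idea, under simplifying assumptions (a single fixed noise level rather than a full diffusion schedule): condition a denoising score matching objective on an encoder's latent code, so the code must carry information useful for denoising.

```python
import torch
import torch.nn.functional as F

def dsm_repr_loss(encoder, score_net, x, sigma=0.1):
    """Denoising score matching conditioned on a learned representation."""
    z = encoder(x)                   # latent code being learned
    noise = torch.randn_like(x)
    x_noisy = x + sigma * noise
    target = -noise / sigma          # score of the Gaussian perturbation
    return F.mse_loss(score_net(x_noisy, z), target)
```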
arXiv Detail & Related papers (2021-05-29T09:26:02Z)
- Noise2Kernel: Adaptive Self-Supervised Blind Denoising using a Dilated Convolutional Kernel Architecture [3.796436257221662]
We propose a dilated convolutional network that satisfies an invariant property, allowing efficient kernel-based training without random masking.
We also propose an adaptive self-supervision loss to circumvent the zero-mean noise assumption, which is particularly effective in removing salt-and-pepper or hybrid noise.
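A hedged sketch of the invariant property being described: zero the kernel center of the first convolution (a "donut" kernel) and use even dilations afterwards, so every output pixel is predicted without ever seeing its own input pixel and no random masking is needed. The plain L2 objective below stands in for the paper's adaptive self-supervision loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DonutConv2d(nn.Conv2d):
    """Convolution whose kernel center is forced to zero."""
    def forward(self, x):
        w = self.weight.clone()
        w[:, :, self.kernel_size[0] // 2, self.kernel_size[1] // 2] = 0.0
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation)

# Donut first layer + even dilations keep the center pixel out of every
# receptive field, so the network cannot simply copy its input.
net = nn.Sequential(
    DonutConv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
    nn.Conv2d(32, 1, 1),
)
# Self-supervised training: minimize F.mse_loss(net(noisy), noisy).
```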
arXiv Detail & Related papers (2020-12-07T12:13:17Z)
- Learning Model-Blind Temporal Denoisers without Ground Truths [46.778450578529814]
Denoisers trained with synthetic data often fail to cope with the diversity of unknown noises.
Previous image-based methods lead to noise overfitting when directly applied to video denoising.
We propose a general framework for video denoising networks that successfully addresses these challenges.
arXiv Detail & Related papers (2020-07-07T07:19:48Z)
- Restore from Restored: Single Image Denoising with Pseudo Clean Image [28.38369890008251]
We propose a simple and effective fine-tuning algorithm called "restore-from-restored".
Our method can be easily employed on top of the state-of-the-art denoising networks.
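A short sketch of the fine-tuning loop as summarized above: the pretrained network's own output acts as a pseudo clean image and is re-corrupted to create paired data at test time. `add_noise` is a hypothetical stand-in for a noise model matching the input.

```python
import copy
import torch
import torch.nn.functional as F

def restore_from_restored(net, noisy, add_noise, steps=10, lr=1e-5):
    """Test-time fine-tuning on self-generated pseudo clean targets."""
    with torch.no_grad():
        pseudo_clean = net(noisy)              # initial restoration
    tuned = copy.deepcopy(net)
    opt = torch.optim.Adam(tuned.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(tuned(add_noise(pseudo_clean)), pseudo_clean).backward()
        opt.step()
    return tuned(noisy)
```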
arXiv Detail & Related papers (2020-03-09T17:35:31Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
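A hedged sketch of the joint objective the last entry describes: one head predicts the clean image, another a per-pixel noise variance, trained with a Gaussian negative log-likelihood so noise estimation and denoising are learned together. This moment-based loss is a simplification of the paper's variational inference.

```python
import torch
import torch.nn.functional as F

def joint_denoise_loss(clean_pred, log_var_pred, noisy, clean_gt):
    """Denoising loss plus a Gaussian NLL on the estimated noise model."""
    residual = noisy - clean_gt                  # observed noise realization
    nll = 0.5 * (log_var_pred + residual ** 2 / log_var_pred.exp()).mean()
    return F.mse_loss(clean_pred, clean_gt) + nll
```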