Masked Pre-trained Model Enables Universal Zero-shot Denoiser
- URL: http://arxiv.org/abs/2401.14966v1
- Date: Fri, 26 Jan 2024 15:58:57 GMT
- Title: Masked Pre-trained Model Enables Universal Zero-shot Denoiser
- Authors: Xiaoxiao Ma, Zhixiang Wei, Yi Jin, Pengyang Ling, Tianle Liu, Ben
Wang, Junkang Dai, Huaian Chen, Enhong Chen
- Abstract summary: We propose a novel zero-shot denoising paradigm, i.e., Masked Pre-train then Iterative fill (MPI).
MPI pre-trains a model with masking and fine-tunes it to denoise a single image with unseen noise degradation.
Iterative filling is devised to efficiently fuse pre-trained knowledge for denoising.
- Score: 48.890384388203735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we observe that a model trained on vast general
images with a masking strategy naturally embeds knowledge of the
natural-image distribution, and thus spontaneously attains the underlying
potential for strong image denoising. Based on this observation, we propose a
novel zero-shot denoising paradigm, i.e., Masked Pre-train then Iterative fill
(MPI). MPI pre-trains a model with masking and fine-tunes it to denoise a
single image with unseen noise degradation. Concretely, the
proposed MPI comprises two key procedures: 1) Masked Pre-training involves
training a model on multiple natural images with random masks to gather
generalizable representations, allowing for practical applications in varying
noise degradation and even in distinct image types. 2) Iterative filling is
devised to efficiently fuse pre-trained knowledge for denoising. Similar to but
distinct from pre-training, random masking is retained to bridge the gap
between pre-training and inference, but, for efficiency, only the predicted
pixels covered by the masks are assembled, which enables high-quality
denoising within a limited number of iterations.
Comprehensive experiments across various noisy scenarios underscore the notable
advances of proposed MPI over previous approaches with a marked reduction in
inference time. Code is available at https://github.com/krennic999/MPI.git.
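Below is a minimal, illustrative NumPy sketch of the iterative-filling idea described in the abstract. It is not the authors' implementation (see the repository above for that): `masked_predict` is a hypothetical stand-in for the masked pre-trained network, here replaced by a local-mean fill over unmasked neighbours so the snippet runs end to end. The point it illustrates is that a fresh random mask is drawn at each step and only the pixels covered by that mask are taken from the prediction and accumulated.

```python
import numpy as np

def masked_predict(noisy, mask):
    """Stand-in for the masked pre-trained network: predict the pixels hidden
    by `mask` from the surrounding unmasked context.  A local mean over
    unmasked neighbours is used here purely so the sketch runs end to end."""
    keep = (~mask).astype(np.float64)
    pad_img = np.pad(noisy * keep, 1, mode="reflect")
    pad_keep = np.pad(keep, 1, mode="reflect")
    num = np.zeros_like(noisy, dtype=np.float64)
    den = np.zeros_like(noisy, dtype=np.float64)
    h, w = noisy.shape
    for dy in range(3):
        for dx in range(3):
            num += pad_img[dy:dy + h, dx:dx + w]
            den += pad_keep[dy:dy + h, dx:dx + w]
    return np.where(den > 0, num / np.maximum(den, 1e-8), noisy)

def iterative_fill(noisy, n_iters=8, mask_ratio=0.3, seed=0):
    """Iterative filling in the spirit of MPI: draw a fresh random mask at each
    step (as in pre-training), predict the image, but keep only the predicted
    pixels covered by the mask and accumulate them into the final estimate."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(noisy, dtype=np.float64)  # sum of masked predictions
    cnt = np.zeros_like(noisy, dtype=np.float64)  # times each pixel was masked
    for _ in range(n_iters):
        mask = rng.random(noisy.shape) < mask_ratio
        pred = masked_predict(noisy, mask)
        acc[mask] += pred[mask]   # assemble only the masked (predicted) parts
        cnt[mask] += 1.0
    # pixels that were never masked fall back to the noisy observation
    return np.where(cnt > 0, acc / np.maximum(cnt, 1.0), noisy)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
    noisy = clean + rng.normal(0.0, 0.1, clean.shape)
    print("denoised shape:", iterative_fill(noisy).shape)
```

In the paper this accumulation is performed with the pre-trained network and is reported to converge within a limited number of iterations, which is where the stated reduction in inference time comes from.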
Related papers
- Positive2Negative: Breaking the Information-Lossy Barrier in Self-Supervised Single Image Denoising [26.67217493971613]
Existing self-supervised image denoising paradigms rely heavily on information-lossy operations.
We propose a novel self-supervised single image denoising paradigm, Positive2Negative, to break the information-lossy barrier.
Our paradigm achieves state-of-the-art performance in self-supervised single image denoising with significant speed improvements.
arXiv Detail & Related papers (2024-12-21T03:25:01Z)
- Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning [49.275450836604726]
We present a novel frequency-based Self-Supervised Learning (SSL) approach that significantly enhances pre-training efficacy.
We employ a two-branch framework empowered by knowledge distillation, enabling the model to take both the filtered and original images as input.
arXiv Detail & Related papers (2024-09-16T15:10:07Z)
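As a loose illustration of the frequency-domain filtering mentioned in the entry above, the sketch below keeps (or discards) a centered block of low-frequency FFT coefficients. The masking policy, the two-branch setup, and the knowledge-distillation loss of the actual paper are not reproduced; `frequency_mask` and its parameters are illustrative assumptions.

```python
import numpy as np

def frequency_mask(img, keep_ratio=0.25, low_pass=True):
    """Keep (or discard) a centred square of low-frequency FFT coefficients.
    A generic frequency-filtering sketch only; not the paper's exact scheme."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    ch, cw = h // 2, w // 2
    rh, rw = max(1, int(h * keep_ratio / 2)), max(1, int(w * keep_ratio / 2))
    mask = np.zeros((h, w), dtype=bool)
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = True  # low-frequency block
    if not low_pass:
        mask = ~mask                               # keep high frequencies instead
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

if __name__ == "__main__":
    img = np.random.default_rng(0).random((64, 64))
    low = frequency_mask(img)                  # filtered view for one branch
    high = frequency_mask(img, low_pass=False)
    print(low.shape, high.shape)
```

In a two-branch setup, the filtered image would feed one branch and the original image the other, roughly as the summary describes.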
- Self-Calibrated Variance-Stabilizing Transformations for Real-World Image Denoising [19.08732222562782]
Supervised deep learning has become the method of choice for image denoising.
We show that, contrary to popular belief, denoising networks specialized in the removal of Gaussian noise can be efficiently leveraged in favor of real-world image denoising.
arXiv Detail & Related papers (2024-07-24T16:23:46Z)
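The entry above rests on the idea that signal-dependent real-world noise can be mapped to approximately unit-variance Gaussian noise before applying a Gaussian-specialized denoiser. The sketch below uses a fixed-parameter generalized Anscombe transform with a crude box filter standing in for the denoising network; the paper's self-calibration of the transform is not shown, and `alpha`/`sigma` are assumed known here.

```python
import numpy as np

def gat_forward(z, alpha, sigma):
    """Generalized Anscombe transform: maps Poisson-Gaussian noise with gain
    `alpha` and read-noise std `sigma` to roughly unit-variance Gaussian noise."""
    return (2.0 / alpha) * np.sqrt(np.maximum(alpha * z + 0.375 * alpha ** 2 + sigma ** 2, 0.0))

def gat_inverse(d, alpha, sigma):
    """Simple algebraic inverse of the transform (exact unbiased inverses exist
    in the literature but are omitted for brevity)."""
    return (alpha / 4.0) * d ** 2 - 0.375 * alpha - sigma ** 2 / alpha

def box_denoise(img, k=3):
    """Stand-in for a network specialized in removing unit-variance Gaussian noise."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def vst_denoise(noisy, alpha=0.1, sigma=0.02):
    stabilized = gat_forward(noisy, alpha, sigma)  # noise ~ N(0, 1) after the VST
    denoised = box_denoise(stabilized)             # apply the Gaussian denoiser
    return gat_inverse(denoised, alpha, sigma)     # map back to intensities

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.full((32, 32), 0.5)
    noisy = 0.1 * rng.poisson(clean / 0.1) + rng.normal(0.0, 0.02, clean.shape)
    print("denoised mean:", vst_denoise(noisy).mean())
```

The algebraic inverse used here is slightly biased at low intensities; exact unbiased inverses exist in the literature.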
- Score Priors Guided Deep Variational Inference for Unsupervised Real-World Single Image Denoising [14.486289176696438]
We propose a score priors-guided deep variational inference, namely ScoreDVI, for practical real-world denoising.
We exploit a non-i.i.d. Gaussian mixture model and variational noise posterior to model the real-world noise.
Our method outperforms other single image-based real-world denoising methods and achieves comparable performance to dataset-based unsupervised methods.
arXiv Detail & Related papers (2023-08-09T03:26:58Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
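A short PyTorch sketch of the input-masking idea from the entry above: random pixels of the noisy input are hidden and the network is trained to reconstruct the image regardless. The tiny network, the mask ratio, and the plain MSE loss are placeholder assumptions; the paper's actual masking strategy and training objective may differ.

```python
import torch
import torch.nn as nn

# Tiny stand-in network; any image-to-image denoising model fits this scheme.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def masked_training_step(noisy, clean, mask_ratio=0.8):
    """One step of input-masked training: random pixels of the noisy input are
    zeroed out and the network must reconstruct the image anyway, which pushes
    it to rely on context rather than copying its input."""
    mask = (torch.rand_like(noisy) < mask_ratio).float()  # 1 = masked out
    pred = net(noisy * (1.0 - mask))                      # masked pixels hidden
    loss = torch.mean((pred - clean) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    clean = torch.rand(4, 1, 32, 32)
    noisy = clean + 0.1 * torch.randn_like(clean)
    for _ in range(3):
        print("loss:", masked_training_step(noisy, clean))
```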
- Enhancing convolutional neural network generalizability via low-rank weight approximation [6.763245393373041]
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
arXiv Detail & Related papers (2022-09-26T14:11:05Z)
- IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z)
- Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation [52.75909685172843]
Real-world image noise removal is a long-standing yet very challenging task in computer vision.
We propose a novel unified framework to deal with the noise removal and noise generation tasks.
Our method learns the joint distribution of the clean-noisy image pairs.
arXiv Detail & Related papers (2020-07-12T09:16:06Z)
- Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders [81.30960319178725]
We propose DivNoising, a denoising approach based on fully convolutional variational autoencoders (VAEs).
First we introduce a principled way of formulating the unsupervised denoising problem within the VAE framework by explicitly incorporating imaging noise models into the decoder.
We show that such a noise model can either be measured, bootstrapped from noisy data, or co-learned during training.
arXiv Detail & Related papers (2020-06-10T21:28:13Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.