Unsupervised Image Denoising with Frequency Domain Knowledge
- URL: http://arxiv.org/abs/2111.14362v1
- Date: Mon, 29 Nov 2021 07:41:32 GMT
- Title: Unsupervised Image Denoising with Frequency Domain Knowledge
- Authors: Nahyun Kim, Donggon Jang, Sunhyeok Lee, Bomi Kim, Dae-Shik Kim
- Abstract summary: Supervised learning-based methods yield robust denoising results, yet they are inherently limited by the need for large-scale datasets.
In this study we propose a frequency-sensitive unsupervised denoising method.
Results using natural and synthetic datasets indicate that our unsupervised learning method augmented with frequency information achieves state-of-the-art denoising performance.
- Score: 2.834895018689047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised learning-based methods yield robust denoising results, yet they
are inherently limited by the need for large-scale clean/noisy paired datasets.
The use of unsupervised denoisers, on the other hand, necessitates a more
detailed understanding of the underlying image statistics. In particular, it is
well known that apparent differences between clean and noisy images are most
prominent on high-frequency bands, justifying the use of low-pass filters as
part of conventional image preprocessing steps. However, most learning-based
denoising methods utilize only one-sided information from the spatial domain
without considering frequency domain information. To address this limitation,
in this study we propose a frequency-sensitive unsupervised denoising method.
To this end, a generative adversarial network (GAN) is used as a base
structure. Subsequently, we include a spectral discriminator and a frequency
reconstruction loss to transfer frequency knowledge into the generator. Results
using natural and synthetic datasets indicate that our unsupervised learning
method augmented with frequency information achieves state-of-the-art denoising
performance, suggesting that frequency domain information could be a viable
factor in improving the overall performance of unsupervised learning-based
methods.
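As a rough illustration of how such frequency knowledge can be injected into training, the sketch below implements a frequency reconstruction loss with PyTorch's FFT. The abstract does not give the exact formulation, so the function name `frequency_reconstruction_loss` and the choice of an L1 distance between amplitude spectra are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a frequency reconstruction loss (assumed form: L1
# distance between 2D amplitude spectra); not the paper's exact loss.
import torch
import torch.fft


def frequency_reconstruction_loss(denoised: torch.Tensor,
                                  reference: torch.Tensor) -> torch.Tensor:
    """Mean L1 distance between the amplitude spectra of two image batches.

    Both tensors are expected to have shape (N, C, H, W).
    """
    # Orthonormal 2D FFT over the spatial dimensions.
    spec_denoised = torch.fft.fft2(denoised, norm="ortho").abs()
    spec_reference = torch.fft.fft2(reference, norm="ortho").abs()
    # Residual noise shows up mainly in the high-frequency bins, so spectral
    # mismatches there contribute directly to the penalty.
    return (spec_denoised - spec_reference).abs().mean()


if __name__ == "__main__":
    fake_output = torch.rand(4, 3, 64, 64)   # stand-in for generator output
    fake_target = torch.rand(4, 3, 64, 64)   # stand-in for a reference image
    print(frequency_reconstruction_loss(fake_output, fake_target).item())
```

In a GAN setup like the one described above, a term of this kind would typically be added to the generator's objective alongside the adversarial loss from the spectral discriminator.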
Related papers
- Representing Noisy Image Without Denoising [91.73819173191076]
Fractional-order Moments in Radon space (FMR) is designed to derive robust representation directly from noisy images.
Unlike earlier integer-order methods, our work is a more generic design that takes such classical methods as special cases.
arXiv Detail & Related papers (2023-01-18T10:13:29Z)
- Enhancing convolutional neural network generalizability via low-rank weight approximation
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
arXiv Detail & Related papers (2022-09-26T14:11:05Z)
- Zero-shot Blind Image Denoising via Implicit Neural Representations [77.79032012459243]
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs).
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z)
- Exploring Inter-frequency Guidance of Image for Lightweight Gaussian Denoising [1.52292571922932]
We propose a novel network architecture, denoted IGNet, which refines the frequency bands from low to high in a progressive manner.
With this design, more inter-frequency priors and information are utilized, so the model size can be reduced while still preserving competitive results.
arXiv Detail & Related papers (2021-12-22T10:35:53Z)
- Image Denoising using Attention-Residual Convolutional Neural Networks [0.0]
We propose a new learning-based non-blind denoising technique named Attention Residual Convolutional Neural Network (ARCNN) and its extension to blind denoising named Flexible Attention Residual Convolutional Neural Network (FARCNN).
ARCNN achieved overall average PSNR results of around 0.44 dB and 0.96 dB for Gaussian and Poisson denoising, respectively. FARCNN presented very consistent results, even with slightly worse performance compared to ARCNN.
arXiv Detail & Related papers (2021-01-19T16:37:57Z)
- Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, the input and target used to train the network are images sub-sampled from the same noisy image (a rough sketch of this sub-sampling step appears after this list).
A denoising network is trained on the sub-sampled training pairs generated in the first stage, with a proposed regularizer as an additional loss for better performance.
arXiv Detail & Related papers (2021-01-08T02:03:25Z)
- Improving Blind Spot Denoising for Microscopy [73.94017852757413]
We present a novel way to improve the quality of self-supervised denoising.
We assume the clean image to be the result of a convolution with a point spread function (PSF) and explicitly include this operation at the end of our neural network.
arXiv Detail & Related papers (2020-08-19T13:06:24Z)
- Learning Model-Blind Temporal Denoisers without Ground Truths [46.778450578529814]
Denoisers trained with synthetic data often fail to cope with the diversity of unknown noises.
Previous image-based methods lead to noise overfitting if directly applied to video denoising.
We propose a general framework for video denoising networks that successfully addresses these challenges.
arXiv Detail & Related papers (2020-07-07T07:19:48Z)
- ADRN: Attention-based Deep Residual Network for Hyperspectral Image Denoising [52.01041506447195]
We propose an attention-based deep residual network to learn a mapping from noisy HSI to the clean one.
Experimental results demonstrate that our proposed ADRN scheme outperforms state-of-the-art methods in both quantitative and visual evaluations.
arXiv Detail & Related papers (2020-03-04T08:36:27Z)
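For the Neighbor2Neighbor entry above, the following is a minimal sketch of the sub-sampling step its summary describes: two half-resolution images are drawn from disjoint pixel positions of each 2x2 cell of the same noisy image and used as an input/target pair. The function name `neighbor_subsample` is ours, and the regularizer mentioned in the summary is omitted; this is not the official implementation.

```python
# Hypothetical sketch of Neighbor2Neighbor-style sub-sampling: for each 2x2
# cell of a noisy image, pick two distinct pixels to build two half-resolution
# sub-images that serve as a training input/target pair.
import torch


def neighbor_subsample(noisy: torch.Tensor):
    """Return two sub-images drawn from a noisy batch of shape (N, C, H, W).

    H and W are assumed to be even; the two sub-images have shape
    (N, C, H // 2, W // 2) and never share a pixel within a cell.
    """
    n, c, h, w = noisy.shape
    # Group pixels into 2x2 cells: (N, C, H/2, W/2, 4).
    cells = noisy.reshape(n, c, h // 2, 2, w // 2, 2)
    cells = cells.permute(0, 1, 2, 4, 3, 5).reshape(n, c, h // 2, w // 2, 4)
    # Choose two distinct positions in {0, 1, 2, 3} per cell (shared across
    # channels so color values stay aligned).
    idx1 = torch.randint(0, 4, (n, 1, h // 2, w // 2, 1))
    idx2 = (idx1 + torch.randint(1, 4, (n, 1, h // 2, w // 2, 1))) % 4
    sub1 = torch.gather(cells, -1, idx1.expand(-1, c, -1, -1, -1)).squeeze(-1)
    sub2 = torch.gather(cells, -1, idx2.expand(-1, c, -1, -1, -1)).squeeze(-1)
    return sub1, sub2


if __name__ == "__main__":
    noisy = torch.rand(2, 3, 64, 64)
    inp, target = neighbor_subsample(noisy)
    print(inp.shape, target.shape)  # both torch.Size([2, 3, 32, 32])
```

Because the two sub-images never share a pixel, their noise realizations are independent for pixel-wise independent noise, which is what allows training a denoiser from single noisy images.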