Exploring Efficient Asymmetric Blind-Spots for Self-Supervised Denoising in Real-World Scenarios
- URL: http://arxiv.org/abs/2303.16783v2
- Date: Thu, 11 Apr 2024 13:07:43 GMT
- Title: Exploring Efficient Asymmetric Blind-Spots for Self-Supervised Denoising in Real-World Scenarios
- Authors: Shiyan Chen, Jiyuan Zhang, Zhaofei Yu, Tiejun Huang
- Abstract summary: Noise in real-world scenarios is often spatially correlated, which causes many self-supervised algorithms to perform poorly.
We propose Asymmetric Tunable Blind-Spot Network (AT-BSN), where the blind-spot size can be freely adjusted.
We show that our method achieves state-of-the-art performance and surpasses other self-supervised algorithms in terms of computational overhead and visual quality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised denoising has attracted widespread attention due to its ability to train without clean images. However, noise in real-world scenarios is often spatially correlated, which causes many self-supervised algorithms that assume pixel-wise independent noise to perform poorly. Recent works have attempted to break the noise correlation with downsampling or neighborhood masking. However, denoising on downsampled sub-images can lead to aliasing effects and loss of detail due to the lower sampling rate. Furthermore, the neighborhood masking methods either come with high computational complexity or do not consider local spatial preservation during inference. Through an analysis of existing methods, we point out that the key to obtaining high-quality, texture-rich results in real-world self-supervised denoising is to train at the original input resolution and to use asymmetric operations during training and inference. Based on this, we propose the Asymmetric Tunable Blind-Spot Network (AT-BSN), in which the blind-spot size can be freely adjusted, better balancing the suppression of noise correlation against the destruction of local spatial structure during training and inference. In addition, we regard the pre-trained AT-BSN as a meta-teacher network capable of generating various teacher networks by sampling different blind-spots. We propose a blind-spot-based multi-teacher distillation strategy to distill a lightweight network, significantly improving performance. Experimental results on multiple datasets show that our method achieves state-of-the-art performance and surpasses other self-supervised algorithms in terms of computational overhead and visual quality.
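The central mechanism above, a blind-spot whose size differs between training and inference, can be illustrated with a center-masked convolution whose excluded region is chosen per forward pass. The sketch below is a simplified assumption in PyTorch (the class name, kernel size, and blind-spot values are ours, not the paper's); a full blind-spot network must additionally keep the excluded center through all layers, e.g. via shifted or dilated convolutions.

```python
# Minimal sketch of a center-masked convolution with a tunable blind-spot.
# Illustration of the asymmetric blind-spot idea, not the AT-BSN architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TunableBlindSpotConv2d(nn.Conv2d):
    """Conv2d whose central blind_spot x blind_spot kernel region is zeroed."""

    def forward(self, x: torch.Tensor, blind_spot: int = 1) -> torch.Tensor:
        k = self.kernel_size[0]
        # The blind-spot must be odd and smaller than the (odd) kernel.
        assert blind_spot % 2 == 1 and k % 2 == 1 and blind_spot < k
        mask = torch.ones_like(self.weight)
        lo = (k - blind_spot) // 2
        mask[:, :, lo:lo + blind_spot, lo:lo + blind_spot] = 0.0   # exclude the center
        return F.conv2d(x, self.weight * mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)


# Asymmetric use: a larger blind-spot during training to break spatially
# correlated noise, a smaller one at inference to keep local structure.
conv = TunableBlindSpotConv2d(3, 16, kernel_size=7, padding=3)
noisy = torch.rand(1, 3, 64, 64)
feat_train = conv(noisy, blind_spot=3)   # e.g. training-time blind-spot
feat_infer = conv(noisy, blind_spot=1)   # e.g. inference-time blind-spot
```

Training with a larger blind-spot suppresses spatially correlated noise, while inference with a smaller one preserves local detail, which is the asymmetry the abstract describes.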
Related papers
- Low-Trace Adaptation of Zero-shot Self-supervised Blind Image Denoising
We propose a trace-constraint loss function and low-trace adaptation Noise2Noise (LoTA-N2N) model to bridge the gap between self-supervised and supervised learning.
Our method achieves state-of-the-art performance within the realm of zero-shot self-supervised image denoising approaches.
arXiv Detail & Related papers (2024-03-19T02:47:33Z)
- Random Sub-Samples Generation for Self-Supervised Real Image Denoising
We propose a novel self-supervised real image denoising framework named Sampling Difference As Perturbation (SDAP).
We find that adding an appropriate perturbation to the training images can effectively improve the performance of BSN.
The results show that it significantly outperforms other state-of-the-art self-supervised denoising methods on real-world datasets.
arXiv Detail & Related papers (2023-07-31T16:39:35Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a quick and accurate method for estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Self-supervised Image Denoising with Downsampled Invariance Loss and Conditional Blind-Spot Network
Most representative self-supervised denoisers are based on blind-spot networks.
A standard blind-spot network fails to reduce real camera noise due to the pixel-wise correlation of noise.
We propose a novel self-supervised training framework that can remove real noise.
arXiv Detail & Related papers (2023-04-19T08:55:27Z)
- I2V: Towards Texture-Aware Self-Supervised Blind Denoising using Self-Residual Learning for Real-World Images
Pixel-shuffle downsampling (PD) has been proposed to eliminate the spatial correlation of noise.
We propose self-residual learning without the PD process to maintain texture information.
The results of extensive experiments show that the proposed method outperforms state-of-the-art self-supervised blind denoising approaches.
arXiv Detail & Related papers (2023-02-21T08:51:17Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits the semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clean images in the semantic feature space.
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
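One simple stand-in for the feature-space distribution matching described in the D2SM entry above is to compare channel-wise feature statistics under a frozen pretrained classifier. The sketch below is a hedged illustration: the VGG16 backbone, layer cut-off, and mean/std loss are assumptions, not the paper's actual objective.

```python
# Hedged sketch: match per-channel feature statistics in the semantic space of
# a frozen pretrained classifier; D2SM's actual loss and features may differ.
import torch
import torchvision

# Frozen feature extractor; the backbone and layer choice are assumptions.
features = torchvision.models.vgg16(weights="DEFAULT").features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)


def feature_stats(x: torch.Tensor):
    """Per-channel mean and std of the semantic features."""
    f = features(x)
    return f.mean(dim=(2, 3)), f.std(dim=(2, 3))


def semantic_stats_loss(denoised: torch.Tensor, clean_ref: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between the feature statistics of denoised outputs
    and a batch of (possibly unpaired) clean reference images."""
    mu_d, sd_d = feature_stats(denoised)
    mu_c, sd_c = feature_stats(clean_ref)
    return (mu_d - mu_c).abs().mean() + (sd_d - sd_c).abs().mean()
```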
- AP-BSN: Self-Supervised Denoising for Real-World Images via Asymmetric PD and Blind-Spot Network
Blind-spot network (BSN) and its variants have made significant advances in self-supervised denoising.
It is challenging to deal with spatially correlated real-world noise using self-supervised BSN.
Recently, pixel-shuffle downsampling (PD) has been proposed to remove the spatial correlation of real-world noise.
We propose an Asymmetric PD (AP) to address this issue, which introduces different PD stride factors for training and inference.
arXiv Detail & Related papers (2022-03-22T15:04:37Z)
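The asymmetric PD scheme in the AP-BSN entry above can be sketched with pixel-shuffle downsampling applied at different stride factors for training and inference. The helpers below are an illustration rather than the released code, and the concrete stride factors shown (5 for training, 2 for inference) are only an example.

```python
# Sketch of pixel-shuffle downsampling (PD) with asymmetric stride factors.
import torch
import torch.nn.functional as F


def pd_down(x: torch.Tensor, stride: int) -> torch.Tensor:
    """Split an image into stride**2 sub-images; neighboring pixels in each
    sub-image come from pixels `stride` apart in the original image."""
    b, c, h, w = x.shape
    x = F.pixel_unshuffle(x, stride)                       # (b, c*s*s, h/s, w/s)
    return x.reshape(b, c, stride * stride, h // stride, w // stride) \
            .permute(0, 2, 1, 3, 4).reshape(-1, c, h // stride, w // stride)


def pd_up(x: torch.Tensor, stride: int, batch: int) -> torch.Tensor:
    """Inverse of pd_down: reassemble the sub-images into the full image."""
    n, c, h, w = x.shape
    x = x.reshape(batch, stride * stride, c, h, w).permute(0, 2, 1, 3, 4) \
         .reshape(batch, c * stride * stride, h, w)
    return F.pixel_shuffle(x, stride)


noisy = torch.rand(1, 3, 100, 100)
subs_train = pd_down(noisy, stride=5)   # larger stride while training
subs_infer = pd_down(noisy, stride=2)   # smaller stride at inference
recon = pd_up(subs_infer, stride=2, batch=1)
assert torch.allclose(recon, noisy)     # PD is a lossless rearrangement
```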
- Joint self-supervised blind denoising and noise estimation
Two neural networks jointly predict the clean signal and infer the noise distribution.
We show empirically with synthetic noisy data that our model captures the noise distribution efficiently.
arXiv Detail & Related papers (2021-02-16T08:37:47Z)
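A generic way to set up the two-network formulation described in the entry above is a Gaussian negative log-likelihood in which one network predicts the clean signal and the other a per-pixel noise log-variance. The sketch below is that generic formulation, not the paper's exact variational model; in practice the denoiser also needs a constraint (e.g. a blind-spot) so it cannot simply copy the noisy input.

```python
# Generic sketch: joint clean-signal prediction and noise-variance estimation
# trained with a Gaussian negative log-likelihood (not the paper's exact model).
import torch
import torch.nn as nn

# Tiny stand-in networks; real architectures would be deeper.
denoiser = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 3, 3, padding=1))
noise_net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 3, 3, padding=1))   # per-pixel log-variance

opt = torch.optim.Adam(list(denoiser.parameters()) + list(noise_net.parameters()), lr=1e-4)

noisy = torch.rand(4, 3, 64, 64)      # a batch of noisy observations
clean_hat = denoiser(noisy)           # predicted clean signal
log_var = noise_net(noisy)            # predicted noise log-variance

# Gaussian NLL: 0.5 * [ (y - x_hat)^2 / sigma^2 + log sigma^2 ]
loss = 0.5 * (((noisy - clean_hat) ** 2) * torch.exp(-log_var) + log_var).mean()
loss.backward()
opt.step()
```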
- Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, the input and target used to train the network are images sub-sampled from the same noisy image.
A denoising network is trained on the sub-sampled training pairs generated in the first stage, with a proposed regularizer as an additional loss for better performance.
arXiv Detail & Related papers (2021-01-08T02:03:25Z)
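The Neighbor2Neighbor pair generation described above can be sketched directly: within every 2x2 cell of the noisy image, two different pixels are drawn at random, one forming the input sub-image and the other the target. The snippet below is a minimal illustration and omits the paper's regularization term.

```python
# Minimal sketch of Neighbor2Neighbor-style sub-sampling from one noisy image.
import torch


def neighbor_subsample(noisy: torch.Tensor):
    """noisy: (B, C, H, W) with even H, W -> two (B, C, H/2, W/2) sub-images."""
    b, c, h, w = noisy.shape
    # View the image as (H/2 x W/2) cells of 2x2 = 4 candidate pixels each.
    cells = noisy.reshape(b, c, h // 2, 2, w // 2, 2) \
                 .permute(0, 1, 2, 4, 3, 5).reshape(b, c, h // 2, w // 2, 4)
    # Randomly pick two *different* positions per cell (shared across channels).
    perm = torch.rand(b, 1, h // 2, w // 2, 4).argsort(dim=-1)
    idx1, idx2 = perm[..., :1], perm[..., 1:2]
    sub1 = torch.gather(cells, -1, idx1.expand(b, c, h // 2, w // 2, 1)).squeeze(-1)
    sub2 = torch.gather(cells, -1, idx2.expand(b, c, h // 2, w // 2, 1)).squeeze(-1)
    return sub1, sub2   # training input / target pair


noisy = torch.rand(2, 3, 64, 64)
inp, tgt = neighbor_subsample(noisy)   # feed `inp` to the denoiser, regress to `tgt`
```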
- Variational Denoising Network: Toward Blind Noise Modeling and Removal
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)