Self-Supervised Image Denoising for Real-World Images with Context-aware Transformer
- URL: http://arxiv.org/abs/2304.01627v1
- Date: Tue, 4 Apr 2023 08:30:50 GMT
- Authors: Dan Zhang, Fangfang Zhou
- Abstract summary: We propose a novel Denoise Transformer for real-world image denoising.
It is mainly constructed with Context-aware Denoise Transformer (CADT) units and Secondary Noise Extractor (SNE) block.
Experiments on the real-world SIDD benchmark yield 50.62 dB PSNR and 0.990 SSIM, competitive with the current state-of-the-art method.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the development of deep learning has been pushing image
denoising to a new level. Among them, self-supervised denoising is increasingly
popular because it requires no prior knowledge of the noise. Most existing
self-supervised methods are based on convolutional neural networks (CNNs),
which are restricted by the locality of the receptive field and can cause
color shifts or texture loss. In this paper, we propose a novel Denoise Transformer
for real-world image denoising, which is mainly constructed with Context-aware
Denoise Transformer (CADT) units and Secondary Noise Extractor (SNE) block.
CADT is designed as a dual-branch structure, where the global branch uses a
window-based Transformer encoder to extract the global information, while the
local branch focuses on extracting local features with a small receptive
field. By incorporating CADT units as basic components, we build a hierarchical
network to directly learn the noise distribution information through residual
learning and obtain the first-stage denoised output. Then, we design the SNE
block with low computational cost for secondary global noise extraction.
Finally, the blind spots are collected from the Denoise Transformer output and
reconstructed, forming the final denoised image. Extensive experiments on the
real-world SIDD benchmark yield 50.62 dB PSNR and 0.990 SSIM, competitive with
the current state-of-the-art method and only 0.17 dB/0.001 lower. Visual
comparisons on public
sRGB, Raw-RGB and greyscale datasets prove that our proposed Denoise
Transformer performs competitively, especially on blurred textures and
low-light images, without requiring additional knowledge of the underlying
noise, such as its level or type.
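The abstract's final step collects blind spots (pixels the network is never allowed to see directly) and reconstructs them from context. A minimal sketch of that blind-spot principle, assuming nothing about the paper's actual CADT/SNE network: the learned predictor is replaced here with a plain average over the eight surrounding pixels, purely to illustrate how excluding the centre pixel forces each value to be inferred from its neighbourhood.

```python
def blind_spot_denoise(img):
    """Predict every pixel from its neighbours, excluding the pixel itself."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue  # the blind spot: never look at the pixel itself
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        vals.append(img[ny][nx])
            out[y][x] = sum(vals) / len(vals)
    return out

noisy = [[0.0, 0.0, 0.0],
         [0.0, 9.0, 0.0],   # isolated impulse-noise pixel
         [0.0, 0.0, 0.0]]
print(blind_spot_denoise(noisy)[1][1])  # 0.0 -- the spike is removed
```

Because the centre pixel never contributes to its own prediction, the model cannot simply copy the noisy input, which is what makes blind-spot training self-supervised; real methods learn the predictor instead of averaging.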
Related papers
- A cross Transformer for image denoising [83.68175077524111]
We propose a cross Transformer denoising CNN (CTNet) with a serial block (SB), a parallel block (PB), and a residual block (RB).
CTNet is superior to some popular denoising methods in terms of real and synthetic image denoising.
arXiv Detail & Related papers (2023-10-16T13:53:19Z)
- Degradation-Noise-Aware Deep Unfolding Transformer for Hyperspectral Image Denoising [9.119226249676501]
Hyperspectral images (HSIs) are often quite noisy because of narrow band spectral filtering.
To reduce the noise in HSI data cubes, both model-driven and learning-based denoising algorithms have been proposed.
This paper proposes a Degradation-Noise-Aware Unfolding Network (DNA-Net) that addresses these issues.
arXiv Detail & Related papers (2023-05-06T13:28:20Z)
- Self-supervised Image Denoising with Downsampled Invariance Loss and Conditional Blind-Spot Network [12.478287906337194]
Most representative self-supervised denoisers are based on blind-spot networks.
A standard blind-spot network fails to reduce real camera noise due to the pixel-wise correlation of noise.
We propose a novel self-supervised training framework that can remove real noise.
arXiv Detail & Related papers (2023-04-19T08:55:27Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from low visibility and heavy noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- Zero-shot Blind Image Denoising via Implicit Neural Representations [77.79032012459243]
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs).
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z)
- Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis [148.16279746287452]
We propose a swin-conv block to incorporate the local modeling ability of the residual convolutional layer and the non-local modeling ability of the Swin Transformer block.
For the training data synthesis, we design a practical noise degradation model which takes into consideration different kinds of noise.
Experiments on AWGN removal and real image denoising demonstrate that the new network architecture design achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-03-24T18:11:31Z)
- Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, the input and target used to train the network are images sub-sampled from the same noisy image.
A denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance.
arXiv Detail & Related papers (2021-01-08T02:03:25Z)
- Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation [52.75909685172843]
Real-world image noise removal is a long-standing yet very challenging task in computer vision.
We propose a novel unified framework to deal with the noise removal and noise generation tasks.
Our method learns the joint distribution of the clean-noisy image pairs.
arXiv Detail & Related papers (2020-07-12T09:16:06Z)
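The Neighbor2Neighbor entry above trains on input/target pairs sub-sampled from the same noisy image. A minimal sketch of that sub-sampling step, as a hypothetical illustration rather than the authors' code: from every 2x2 cell, pick two distinct neighbouring pixels, sending one to the input sub-image and the other to the target sub-image.

```python
import random

def neighbor2neighbor_pairs(img, seed=0):
    """Split a noisy image into two half-resolution sub-images by picking
    two different neighbouring pixels from every 2x2 cell: one serves as
    network input, the other as the training target."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    sub_in, sub_tgt = [], []
    for y in range(0, h - 1, 2):
        row_in, row_tgt = [], []
        for x in range(0, w - 1, 2):
            cell = [(y, x), (y, x + 1), (y + 1, x), (y + 1, x + 1)]
            a, b = rng.sample(cell, 2)  # two distinct positions per cell
            row_in.append(img[a[0]][a[1]])
            row_tgt.append(img[b[0]][b[1]])
        sub_in.append(row_in)
        sub_tgt.append(row_tgt)
    return sub_in, sub_tgt

noisy = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
sub_in, sub_tgt = neighbor2neighbor_pairs(noisy)  # each sub-image is 2x2
```

Because the two sub-images share the same underlying signal but carry independent noise realisations at neighbouring pixels, training one against the other approximates supervised denoising without any clean image.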
This list is automatically generated from the titles and abstracts of the papers in this site.