Multi-stage image denoising with the wavelet transform
- URL: http://arxiv.org/abs/2209.12394v2
- Date: Tue, 27 Sep 2022 05:50:01 GMT
- Title: Multi-stage image denoising with the wavelet transform
- Authors: Chunwei Tian, Menghua Zheng, Wangmeng Zuo, Bob Zhang, Yanning Zhang,
David Zhang
- Abstract summary: Deep convolutional neural networks (CNNs) are used for image denoising by automatically mining accurate structure information.
We propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN) via three stages, i.e., a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB).
- Score: 125.2251438120701
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep convolutional neural networks (CNNs) are used for image denoising by
automatically mining accurate structure information. However, most existing
CNNs depend on enlarging the depth of the designed networks to obtain better
denoising performance, which may cause training difficulty. In this paper, we
propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN)
via three stages, i.e., a dynamic convolutional block (DCB), two cascaded
wavelet transform and enhancement blocks (WEBs), and a residual block (RB). DCB
uses dynamic convolution to dynamically adjust the parameters of several
convolutions, striking a trade-off between denoising performance and
computational cost. WEB combines a signal-processing technique (wavelet
transformation) with discriminative learning to suppress noise and recover more
detailed information during denoising. To further remove redundant features, RB
refines the obtained features and reconstructs clean images via an improved
residual dense architecture, improving the denoising effect. Experimental
results show that the proposed MWDCNN outperforms several popular denoising
methods in both quantitative and qualitative evaluations. Code is available at
https://github.com/hellloxiaotian/MWDCNN.
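As a rough illustration of the three-stage pipeline described in the abstract (DCB, two cascaded WEBs, RB), the sketch below wires up plausible PyTorch versions of each block. It is not the authors' implementation (see the linked repository for that): the channel counts, the number of dynamic kernels, the Haar wavelet, and the use of PixelShuffle as a cheap stand-in for the inverse wavelet transform are all assumptions made for illustration.

```python
# Hypothetical sketch of a three-stage denoiser in the spirit of MWDCNN:
# DCB -> WEB -> WEB -> RB.  All sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConv(nn.Module):
    """Attention-weighted mixture of several convolution kernels (dynamic convolution)."""
    def __init__(self, in_ch, out_ch, k=3, num_kernels=4):
        super().__init__()
        self.weight = nn.Parameter(0.02 * torch.randn(num_kernels, out_ch, in_ch, k, k))
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, num_kernels), nn.Softmax(dim=1))
        self.pad = k // 2

    def forward(self, x):
        b = x.size(0)
        a = self.attn(x)                                      # (B, K): per-sample kernel weights
        w = torch.einsum("bk,koihw->boihw", a, self.weight)   # aggregate the K kernels per sample
        out_ch, in_ch, kh, kw = w.shape[1:]
        x = x.reshape(1, b * in_ch, *x.shape[2:])
        y = F.conv2d(x, w.reshape(b * out_ch, in_ch, kh, kw),
                     padding=self.pad, groups=b)              # grouped conv = one conv per sample
        return y.reshape(b, out_ch, *y.shape[2:])


def haar_dwt(x):
    """One-level Haar decomposition; sub-bands are stacked along the channel axis."""
    a, b = x[:, :, 0::2, :], x[:, :, 1::2, :]
    rows = ((a + b) / 2, (a - b) / 2)
    bands = []
    for r in rows:
        l, h = r[:, :, :, 0::2], r[:, :, :, 1::2]
        bands += [(l + h) / 2, (l - h) / 2]
    return torch.cat(bands, dim=1)                            # 4x channels, half resolution


class WEB(nn.Module):
    """Wavelet transform plus a small CNN that enhances the sub-band features."""
    def __init__(self, ch):
        super().__init__()
        self.enhance = nn.Sequential(
            nn.Conv2d(4 * ch, 4 * ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(4 * ch, 4 * ch, 3, padding=1))
        # PixelShuffle is only a cheap stand-in for the inverse wavelet transform.
        self.up = nn.Sequential(nn.PixelShuffle(2), nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        sub = haar_dwt(x)
        return self.up(self.enhance(sub) + sub)


class ResidualBlock(nn.Module):
    """Residual refinement of the features before reconstruction."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)


class MWDCNNSketch(nn.Module):
    """Three stages: dynamic conv block, two cascaded WEBs, residual block."""
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.dcb = DynamicConv(ch, ch)
        self.webs = nn.Sequential(WEB(ch), WEB(ch))
        self.rb = ResidualBlock(ch)
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, noisy):                                 # grayscale input, even spatial size
        f = self.rb(self.webs(self.dcb(self.head(noisy))))
        return noisy - self.tail(f)                           # predict the noise and subtract it


# Quick shape check: MWDCNNSketch()(torch.randn(2, 1, 64, 64)).shape -> torch.Size([2, 1, 64, 64])
```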
Related papers
- A cross Transformer for image denoising [83.68175077524111]
We propose a cross Transformer denoising CNN (CTNet) with a serial block (SB), a parallel block (PB), and a residual block (RB).
CTNet is superior to some popular denoising methods in terms of real and synthetic image denoising.
arXiv Detail & Related papers (2023-10-16T13:53:19Z)
- Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis [148.16279746287452]
We propose a swin-conv block to incorporate the local modeling ability of the residual convolutional layer and the non-local modeling ability of the Swin Transformer block.
For training data synthesis, we design a practical noise degradation model which takes different kinds of noise into consideration.
Experiments on AWGN removal and real image denoising demonstrate that the new network architecture design achieves state-of-the-art performance.
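The "swin-conv" idea of pairing a local residual convolution branch with a non-local windowed self-attention branch can be pictured roughly as below. This is a simplified stand-in (plain multi-head attention inside non-overlapping windows), not the Swin-Conv-UNet code; the channel count, window size, and fusion by a 1x1 convolution are assumptions.

```python
import torch
import torch.nn as nn


class SwinConvBlockSketch(nn.Module):
    """Illustrative block: local residual conv branch + windowed self-attention branch."""
    def __init__(self, ch=64, window=8, heads=4):
        super().__init__()
        self.window = window
        self.conv_branch = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):                                     # H and W must be divisible by `window`
        b, c, h, w = x.shape
        local = x + self.conv_branch(x)                       # local residual conv branch
        s = self.window                                       # non-local branch: windowed attention
        t = x.reshape(b, c, h // s, s, w // s, s).permute(0, 2, 4, 3, 5, 1)
        t = t.reshape(-1, s * s, c)                           # (B * num_windows, tokens, C)
        t, _ = self.attn(t, t, t)
        t = t.reshape(b, h // s, w // s, s, s, c).permute(0, 5, 1, 3, 2, 4)
        non_local = t.reshape(b, c, h, w)
        return self.fuse(torch.cat([local, non_local], dim=1))
```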
arXiv Detail & Related papers (2022-03-24T18:11:31Z)
- Dynamic Slimmable Denoising Network [64.77565006158895]
Dynamic slimmable denoising network (DDS-Net) is a general method to achieve good denoising quality with less computational complexity.
DDS-Net is empowered with the ability of dynamic inference by a dynamic gate.
Our experiments demonstrate that DDS-Net consistently outperforms state-of-the-art individually trained static denoising networks.
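The dynamic gate can be read as a small per-image predictor that chooses how much of a slimmable layer to use at inference time. The snippet below is only a guess at that mechanism, not the DDS-Net implementation; the width choices are hypothetical, and a real implementation would need a differentiable training strategy rather than the hard argmax shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedSlimmableConv(nn.Module):
    """Illustrative dynamic gate: choose a channel width per image, run a slimmed conv."""
    def __init__(self, in_ch=64, out_ch=64, widths=(16, 32, 48, 64)):
        super().__init__()
        self.widths = widths
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, len(widths)))

    def forward(self, x):
        # Hard argmax shown for clarity; training needs a differentiable relaxation.
        idx = self.gate(x).argmax(dim=1)                      # one width index per image
        outs = []
        for i, sample in enumerate(x.split(1, dim=0)):
            w = self.widths[int(idx[i])]                      # slim the layer to w filters
            y = F.conv2d(sample, self.conv.weight[:w], self.conv.bias[:w], padding=1)
            outs.append(F.pad(y, (0, 0, 0, 0, 0, self.conv.out_channels - w)))
        return torch.cat(outs, dim=0)                         # unused channels are zero-padded
```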
arXiv Detail & Related papers (2021-10-17T22:45:33Z)
- Dense-Sparse Deep Convolutional Neural Networks Training for Image Denoising [0.6215404942415159]
Deep learning methods such as convolutional neural networks have gained prominence in the area of image denoising.
Deep denoising convolutional neural networks use many feed-forward convolution layers with added regularization methods (batch normalization and residual learning) to speed up training and significantly improve denoising performance.
In this paper, we show that by applying an enhanced dense-sparse-dense network training procedure to deep denoising convolutional neural networks, a comparable denoising performance level can be achieved with a significantly reduced number of trainable parameters.
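Dense-sparse-dense style training alternates a dense phase, a magnitude-pruned sparse phase with a fixed mask, and a final dense re-training phase. The loop below is a generic illustration of that schedule, not the paper's exact procedure; `train_step` is a hypothetical callback for one epoch of ordinary training, and the 50% pruning ratio is an arbitrary placeholder.

```python
import torch


def dsd_train(model, train_step, epochs=(10, 10, 10), sparsity=0.5):
    """Generic dense -> sparse -> dense schedule; `train_step` runs one ordinary training epoch."""
    # Phase 1: ordinary dense training.
    for _ in range(epochs[0]):
        train_step(model)

    # Phase 2: prune the smallest-magnitude weights and keep that mask fixed while training.
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                                       # prune conv/linear weights only
            k = max(1, int(p.numel() * sparsity))
            threshold = p.detach().abs().flatten().kthvalue(k).values
            masks[name] = (p.detach().abs() > threshold).float()
    for _ in range(epochs[1]):
        train_step(model)
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])                       # re-apply the sparsity mask

    # Phase 3: drop the masks and fine-tune densely (typically with a lower learning rate).
    for _ in range(epochs[2]):
        train_step(model)
```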
arXiv Detail & Related papers (2021-07-10T15:14:19Z)
- Image Denoising using Attention-Residual Convolutional Neural Networks [0.0]
We propose a new learning-based non-blind denoising technique named Attention Residual Convolutional Neural Network (ARCNN) and its extension to blind denoising named Flexible Attention Residual Convolutional Neural Network (FARCNN).
ARCNN achieved overall average PSNR gains of around 0.44 dB and 0.96 dB for Gaussian and Poisson denoising, respectively. FARCNN presented very consistent results, even with slightly worse performance compared to ARCNN.
arXiv Detail & Related papers (2021-01-19T16:37:57Z)
- Progressive Training of Multi-level Wavelet Residual Networks for Image Denoising [80.10533234415237]
This paper presents a multi-level wavelet residual network (MWRN) architecture as well as a progressive training scheme to improve image denoising performance.
Experiments on both synthetic and real-world noisy images show that our PT-MWRN performs favorably against the state-of-the-art denoising methods.
arXiv Detail & Related papers (2020-10-23T14:14:00Z)
- Enhancement of a CNN-Based Denoiser Based on Spatial and Spectral Analysis [23.11994688706024]
We propose a discrete wavelet denoising CNN (WDnCNN) which restores images corrupted by various types of noise with a single model.
We also present a band normalization module (BNM) to normalize the coefficients from different parts of the frequency spectrum.
We evaluate the proposed WDnCNN, and compare it with other state-of-the-art denoisers.
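Band normalization can be read as rescaling each wavelet sub-band separately so that low- and high-frequency coefficients, which differ greatly in magnitude, contribute on a comparable scale. The module below is one plausible reading of that idea with a learnable per-band gain; it is not the WDnCNN code.

```python
import torch
import torch.nn as nn


class BandNorm(nn.Module):
    """Illustrative band normalization: rescale each wavelet sub-band separately."""
    def __init__(self, num_bands=4, eps=1e-6):
        super().__init__()
        self.gain = nn.Parameter(torch.ones(num_bands))       # learnable per-band scale
        self.eps = eps

    def forward(self, bands):
        # `bands` is a list of tensors, e.g. [LL, LH, HL, HH] from a one-level DWT.
        out = []
        for g, b in zip(self.gain, bands):
            std = b.std(dim=(1, 2, 3), keepdim=True) + self.eps
            out.append(g * b / std)                           # roughly unit-scale band, learnable gain
        return out
```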
arXiv Detail & Related papers (2020-06-28T05:25:50Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)