Multi-wavelet residual dense convolutional neural network for image
denoising
- URL: http://arxiv.org/abs/2002.08301v1
- Date: Wed, 19 Feb 2020 17:21:37 GMT
- Title: Multi-wavelet residual dense convolutional neural network for image
denoising
- Authors: Shuo-Fei Wang, Wen-Kai Yu, and Ya-Xin Li
- Abstract summary: We use the short-term residual learning method to improve the performance and robustness of networks for image denoising tasks.
Here, we choose a multi-wavelet convolutional neural network (MWCNN) as the backbone, and insert residual dense blocks (RDBs) into each of its layers.
Compared with other RDB-based networks, it extracts more object features from adjacent layers, preserves the large RF, and boosts computing efficiency.
- Score: 2.500475462213752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Networks with a large receptive field (RF) have shown advanced
fitting ability in recent years. In this work, we utilize the short-term
residual learning method to improve the performance and robustness of networks
for image denoising tasks. Here, we choose a multi-wavelet convolutional
neural network (MWCNN), one of the state-of-the-art networks with a large RF,
as the backbone, and insert residual dense blocks (RDBs) into each of its
layers. We call this scheme the multi-wavelet residual dense convolutional
neural network (MWRDCNN). Compared with other RDB-based networks, it can
extract more features of the object from adjacent layers, preserve the large
RF, and boost computing efficiency. Meanwhile, this approach also makes it
possible to absorb the advantages of multiple architectures in a single
network without conflicts. The performance of the proposed method has been
demonstrated in extensive experiments and compared with existing techniques.
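To make the scheme more concrete, here is a minimal sketch, assuming PyTorch, of the two ingredients the abstract combines: a residual dense block with a local (short-term) residual connection, and a Haar-wavelet downsampling step of the kind an MWCNN-style backbone uses in place of pooling. The module names, channel counts, growth rate, and the choice of the Haar transform are illustrative assumptions; this is not the authors' implementation.

```python
# Minimal sketch (assumptions: PyTorch, Haar wavelet, arbitrary channel counts).
# It illustrates inserting an RDB after a wavelet downsampling step, as the
# abstract describes; it is not the authors' released code.
import torch
import torch.nn as nn


class ResidualDenseBlock(nn.Module):
    """Densely connected 3x3 convs, 1x1 fusion, and a local residual skip."""

    def __init__(self, channels: int, growth: int = 32, num_convs: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                    nn.ReLU(inplace=True),
                )
                for i in range(num_convs)
            ]
        )
        # 1x1 conv fuses the concatenated dense features back to `channels`.
        self.fusion = nn.Conv2d(channels + num_convs * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        # Short-term residual learning: fused dense features + block input.
        return x + self.fusion(torch.cat(feats, dim=1))


def haar_dwt(x):
    """One-level 2D Haar transform: halves H and W, quadruples the channels.

    Using an invertible wavelet transform instead of pooling is what lets the
    backbone keep a large receptive field without discarding information.
    """
    ee = x[..., 0::2, 0::2]  # even rows, even cols
    eo = x[..., 0::2, 1::2]  # even rows, odd cols
    oe = x[..., 1::2, 0::2]  # odd rows, even cols
    oo = x[..., 1::2, 1::2]  # odd rows, odd cols
    ll = (ee + eo + oe + oo) / 2
    lh = (ee + eo - oe - oo) / 2
    hl = (ee - eo + oe - oo) / 2
    hh = (ee - eo - oe + oo) / 2
    return torch.cat([ll, lh, hl, hh], dim=1)


class EncoderLevel(nn.Module):
    """Hypothetical single encoder level: Haar DWT -> projection -> RDB."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(4 * in_channels, out_channels, 3, padding=1)
        self.rdb = ResidualDenseBlock(out_channels)

    def forward(self, x):
        return self.rdb(self.proj(haar_dwt(x)))


if __name__ == "__main__":
    level = EncoderLevel(3, 64)
    y = level(torch.randn(1, 3, 64, 64))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

A full network would stack several such levels and mirror them on the decoder side with inverse-Haar upsampling and further RDBs, but those details are beyond what the abstract specifies.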
Related papers
- MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table [62.164549651134465]
We propose MF-NeRF, a memory-efficient NeRF framework that employs a Mixed-Feature hash table to improve memory efficiency and reduce training time while maintaining reconstruction quality.
Our experiments with the state-of-the-art Instant-NGP, TensoRF, and DVGO indicate that MF-NeRF achieves the fastest training time on the same GPU hardware with similar or even higher reconstruction quality.
arXiv Detail & Related papers (2023-04-25T05:44:50Z) - Properties and Potential Applications of Random Functional-Linked Types
of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z) - RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z) - Image Superresolution using Scale-Recurrent Dense Network [30.75380029218373]
Recent advances in convolutional neural network (CNN) design have yielded significant improvements in the performance of image super-resolution (SR).
We propose a scale recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks (RDBs)).
Our scale recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient as compared to current state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-28T09:18:43Z) - Lightweight Image Super-Resolution with Multi-scale Feature Interaction
Network [15.846394239848959]
We present a lightweight multi-scale feature interaction network (MSFIN).
For lightweight SISR, MSFIN expands the receptive field and adequately exploits the informative features of the low-resolution observed images.
Our proposed MSFIN can achieve performance comparable to state-of-the-art methods with a more lightweight model.
arXiv Detail & Related papers (2021-03-24T07:25:21Z) - Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations while the lower-frequency part is assigned cheap operations to relieve the computational burden (a minimal sketch of this splitting appears after the related-papers list below).
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z) - Residual Feature Distillation Network for Lightweight Image
Super-Resolution [40.52635571871426]
We propose a lightweight and accurate SISR model called the residual feature distillation network (RFDN).
RFDN uses multiple feature distillation connections to learn more discriminative feature representations.
We also propose a shallow residual block (SRB) as the main building block of RFDN so that the network can benefit most from residual learning.
arXiv Detail & Related papers (2020-09-24T08:46:40Z) - Implicit Euler ODE Networks for Single-Image Dehazing [33.34490764631837]
We propose an efficient end-to-end multi-level implicit network (MI-Net) for the single image dehazing problem.
Our method outperforms existing methods and achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-07-13T15:27:33Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging the adaptive inference networks for deep SISR (AdaDSR)
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
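As a rough illustration of the DCT-domain splitting described in the Learning Frequency-aware Dynamic Network entry above, the sketch below separates an image into low- and high-frequency parts that could then be routed to cheap and expensive branches respectively. The 25% cut-off, the rectangular mask, and the branch names are assumptions for illustration, not details taken from that paper.

```python
# Illustrative sketch only: DCT-domain splitting of an image into low- and
# high-frequency parts (cut-off fraction and routing are assumptions).
import numpy as np
from scipy.fft import dctn, idctn


def split_by_frequency(image: np.ndarray, low_fraction: float = 0.25):
    """Return (low_freq_part, high_freq_part) of a 2D image via a DCT mask."""
    coeffs = dctn(image, norm="ortho")
    h, w = coeffs.shape
    mask = np.zeros_like(coeffs, dtype=bool)
    mask[: max(1, int(h * low_fraction)), : max(1, int(w * low_fraction))] = True
    low = idctn(np.where(mask, coeffs, 0.0), norm="ortho")
    high = idctn(np.where(mask, 0.0, coeffs), norm="ortho")
    return low, high


# In a frequency-aware dynamic network, the low band would go through a cheap
# branch and the high band through an expensive one (both hypothetical here):
#     restored = cheap_branch(low) + expensive_branch(high)
```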
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.