Meta-Optimization of Deep CNN for Image Denoising Using LSTM
- URL: http://arxiv.org/abs/2107.06845v1
- Date: Wed, 14 Jul 2021 16:59:44 GMT
- Title: Meta-Optimization of Deep CNN for Image Denoising Using LSTM
- Authors: Basit O. Alawode, Motaz Alfarraj
- Abstract summary: We investigate the application of the meta-optimization training approach to the DnCNN denoising algorithm to enhance its denoising capability.
Our preliminary experiments on simpler algorithms reveal the prospects of utilizing the meta-optimization training approach towards the enhancement of the DnCNN denoising capability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent application of deep learning (DL) to various tasks has seen the
performance of classical techniques surpassed by their DL-based counterparts.
As a result, DL has equally seen application in the removal of noise from
images. In particular, the use of deep feed-forward convolutional neural
networks (DnCNNs) has been investigated for denoising. It utilizes advances in
DL techniques such as deep architecture, residual learning, and batch
normalization to achieve better denoising performance when compared with the
other classical state-of-the-art denoising algorithms. However, its deep
architecture results in a huge number of trainable parameters. Meta-optimization
is a training approach in which an algorithm learns to train itself. Algorithms
trained with meta-optimizers have been shown to achieve better performance than
those trained with the classical gradient descent-based approach. In this work,
we investigate the
application of the meta-optimization training approach to the DnCNN denoising
algorithm to enhance its denoising capability. Our preliminary experiments on
simpler algorithms reveal the prospects of utilizing the meta-optimization
training approach towards the enhancement of the DnCNN denoising capability.
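The abstract combines two ideas: a residual denoising CNN (DnCNN) and a learned LSTM optimizer that replaces hand-designed gradient descent. The sketch below, assuming a PyTorch setting, pairs a tiny residual denoiser with a coordinate-wise LSTM optimizer in the spirit of learning-to-learn; the names `TinyDnCNN` and `LSTMOptimizer` are illustrative, not the paper's code, and the outer meta-training loop that would fit the LSTM itself is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for DnCNN: a shallow convolutional network with residual
# learning, i.e. the network predicts the noise and subtracts it from the
# noisy input. (The real DnCNN is much deeper and uses batch normalization.)
class TinyDnCNN(nn.Module):
    def __init__(self, channels=1, features=8, depth=3):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)  # residual learning: subtract predicted noise

# Coordinate-wise LSTM meta-optimizer: one small LSTM, shared across all
# parameter coordinates, maps each coordinate's gradient to an update
# (in the spirit of Andrychowicz et al., 2016, "learning to learn").
class LSTMOptimizer(nn.Module):
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, flat_grad, state=None):
        h, c = self.cell(flat_grad.reshape(-1, 1), state)
        return self.head(h).reshape(flat_grad.shape), (h, c)

net, meta_opt = TinyDnCNN(), LSTMOptimizer()
clean = torch.rand(4, 1, 16, 16)
noisy = clean + 0.1 * torch.randn_like(clean)

state = None
for _ in range(5):  # inner loop: the LSTM proposes the parameter updates
    loss = nn.functional.mse_loss(net(noisy), clean)
    grads = torch.autograd.grad(loss, list(net.parameters()))
    flat = torch.cat([g.reshape(-1) for g in grads]).detach()
    update, state = meta_opt(flat, state)
    state = tuple(s.detach() for s in state)  # truncate graph between steps
    with torch.no_grad():  # apply the learned update to each parameter
        i = 0
        for p in net.parameters():
            n = p.numel()
            p.add_(0.01 * update[i:i + n].reshape(p.shape))
            i += n

denoised = net(noisy)
```

In the full method, the LSTM's own weights would be trained by unrolling this inner loop and backpropagating the accumulated denoising loss; here the LSTM is left at random initialization purely to show the data flow.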
Related papers
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for estimating the noise level in low light images in a quick and accurate way.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs are trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ implicit gradient descent (ISGD) method to train PINNs for improving the stability of training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Multi-stage image denoising with the wavelet transform [125.2251438120701]
Deep convolutional neural networks (CNNs) are used for image denoising via automatically mining accurate structure information.
We propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN) built from three stages: a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB).
arXiv Detail & Related papers (2022-09-26T03:28:23Z)
- CDLNet: Noise-Adaptive Convolutional Dictionary Learning Network for Blind Denoising and Demosaicing [4.975707665155918]
Unrolled optimization networks present an interpretable alternative to constructing deep neural networks.
We propose an unrolled convolutional dictionary learning network (CDLNet) and demonstrate its competitive denoising and joint denoising-and-demosaicing (JDD) performance.
Specifically, we show that the proposed model outperforms state-of-the-art fully convolutional denoising and JDD models when scaled to a similar parameter count.
arXiv Detail & Related papers (2021-12-02T01:23:21Z)
- NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
- Dense-Sparse Deep Convolutional Neural Networks Training for Image Denoising [0.6215404942415159]
Deep learning methods such as the convolutional neural networks have gained prominence in the area of image denoising.
Deep denoising convolutional neural networks use many feed-forward convolution layers with added regularization methods of batch normalization and residual learning to speed up training and improve denoising performance significantly.
In this paper, we show that by employing an enhanced dense-sparse-dense network training procedure to the deep denoising convolutional neural networks, comparable denoising performance level can be achieved at a significantly reduced number of trainable parameters.
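The dense-sparse-dense training procedure this entry refers to can be sketched as follows: train the network dense, magnitude-prune the smallest weights, retrain under the sparsity masks, then drop the masks and retrain dense. This is a minimal, hypothetical illustration of one such cycle on a tiny denoiser, not the paper's implementation; helper names like `make_masks` are invented.

```python
import torch
import torch.nn as nn

torch.manual_seed(1)

def make_masks(net, sparsity=0.5):
    """Magnitude-based masks: zero out the smallest |w| per weight tensor."""
    masks = {}
    for name, p in net.named_parameters():
        if p.dim() > 1:  # prune conv/linear weights, keep biases dense
            k = int(p.numel() * sparsity)
            thresh = p.abs().reshape(-1).kthvalue(k).values
            masks[name] = (p.abs() > thresh).float()
    return masks

def train(net, steps, masks=None, lr=1e-3):
    """Train on synthetic noisy/clean pairs; re-apply masks if given."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    clean = torch.rand(8, 1, 16, 16)
    noisy = clean + 0.1 * torch.randn_like(clean)
    for _ in range(steps):
        loss = nn.functional.mse_loss(net(noisy), clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if masks:  # sparse phase: keep pruned weights at zero
            with torch.no_grad():
                for name, p in net.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])
    return loss.item()

net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))
train(net, 20)                 # 1) dense: train normally
masks = make_masks(net, 0.5)   # 2) prune smallest-magnitude weights
train(net, 20, masks)          # 3) sparse: retrain under the masks
final = train(net, 20)         # 4) dense again: drop masks, retrain all
```

The sparse phase acts as a regularizer, and the final dense phase lets the freed capacity be re-learned, which is how such procedures can cut the effective parameter count without losing denoising performance.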
arXiv Detail & Related papers (2021-07-10T15:14:19Z)
- CDLNet: Robust and Interpretable Denoising Through Deep Convolutional Dictionary Learning [6.6234935958112295]
Unrolled optimization networks present an interpretable alternative to constructing deep neural networks.
We show that the proposed model outperforms the state-of-the-art denoising models when scaled to similar parameter count.
arXiv Detail & Related papers (2021-03-05T01:15:59Z) - Exploring ensembles and uncertainty minimization in denoising networks [0.522145960878624]
We propose a fusion model consisting of two attention modules, which focus on assigning the proper weights to pixels and channels.
The experimental results show that our model achieves better performance on top of the baseline of regular pre-trained denoising networks.
arXiv Detail & Related papers (2021-01-24T20:48:18Z) - Evolving Deep Convolutional Neural Networks for Hyperspectral Image
Denoising [6.869192200282213]
We propose a novel algorithm to automatically build an optimal Convolutional Neural Network (CNN) to effectively denoise HSIs.
The experiments of the proposed algorithm have been well-designed and compared against the state-of-the-art peer competitors.
arXiv Detail & Related papers (2020-08-15T03:04:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.