CDLNet: Noise-Adaptive Convolutional Dictionary Learning Network for
Blind Denoising and Demosaicing
- URL: http://arxiv.org/abs/2112.00913v1
- Date: Thu, 2 Dec 2021 01:23:21 GMT
- Title: CDLNet: Noise-Adaptive Convolutional Dictionary Learning Network for
Blind Denoising and Demosaicing
- Authors: Nikola Janjušević, Amirhossein Khalilian-Gourtani, and Yao Wang
- Abstract summary: Unrolled optimization networks present an interpretable alternative to constructing deep neural networks.
We propose an unrolled convolutional dictionary learning network (CDLNet) and demonstrate its competitive denoising and joint denoising and demosaicing (JDD) performance.
Specifically, we show that the proposed model outperforms state-of-the-art fully convolutional denoising and JDD models when scaled to a similar parameter count.
- Score: 4.975707665155918
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep learning based methods hold state-of-the-art results in low-level image
processing tasks, but remain difficult to interpret due to their black-box
construction. Unrolled optimization networks present an interpretable
alternative to constructing deep neural networks by deriving their architecture
from classical iterative optimization methods without the use of tricks from the
standard deep learning toolbox. So far, such methods have demonstrated
performance close to that of state-of-the-art models while using their
interpretable construction to achieve a comparably low learned parameter count.
In this work, we propose an unrolled convolutional dictionary learning network
(CDLNet) and demonstrate its competitive denoising and joint denoising and
demosaicing (JDD) performance in both low and high parameter count regimes.
Specifically, we show that the proposed model outperforms state-of-the-art
fully convolutional denoising and JDD models when scaled to a similar parameter
count. In addition, we leverage the model's interpretable construction to
propose a noise-adaptive parameterization of thresholds in the network that
enables state-of-the-art blind denoising performance and near-perfect
generalization to noise levels unseen during training. Furthermore, we show
that such performance extends to the JDD task and unsupervised learning.
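As a concrete illustration of the construction described above, the sketch below unrolls ISTA-style iterations with per-iteration analysis/synthesis convolutions and soft-thresholds that scale affinely with an estimated noise level. This is a minimal PyTorch sketch under stated assumptions: the names (UnrolledCDL, tau0, tau1), sizes, and initialization are illustrative, not the authors' reference implementation.

```python
# Minimal sketch of an unrolled convolutional dictionary learning denoiser
# in the spirit of CDLNet. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_threshold(z, tau):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return torch.sign(z) * F.relu(z.abs() - tau)

class UnrolledCDL(nn.Module):
    def __init__(self, channels=32, kernel=7, n_iters=10):
        super().__init__()
        pad = kernel // 2
        # Untied analysis (A) and synthesis (B) convolutions per iteration.
        self.A = nn.ModuleList(
            nn.Conv2d(1, channels, kernel, padding=pad, bias=False)
            for _ in range(n_iters))
        self.B = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel, padding=pad, bias=False)
            for _ in range(n_iters))
        # Synthesis dictionary D mapping the final sparse code to the image.
        self.D = nn.Conv2d(channels, 1, kernel, padding=pad, bias=False)
        # Noise-adaptive thresholds: tau_k = tau0_k + tau1_k * sigma_hat.
        self.tau0 = nn.Parameter(1e-2 * torch.ones(n_iters, channels, 1, 1))
        self.tau1 = nn.Parameter(1e-2 * torch.ones(n_iters, channels, 1, 1))

    def forward(self, y, sigma_hat):
        # y: noisy image (N,1,H,W); sigma_hat: estimated noise level.
        z = torch.zeros_like(self.A[0](y))
        for k, (A, B) in enumerate(zip(self.A, self.B)):
            tau = F.relu(self.tau0[k] + self.tau1[k] * sigma_hat)
            # One ISTA step: gradient step on ||y - Bz||^2, then shrinkage.
            z = soft_threshold(z + A(y - B(z)), tau)
        return self.D(z)  # reconstructed (denoised) image
```

At test time, sigma_hat can come from any noise-level estimator; because the thresholds scale with it, the same learned weights apply across noise levels, which is the mechanism behind the blind denoising and generalization claims above.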
Related papers
- Pivotal Auto-Encoder via Self-Normalizing ReLU [20.76999663290342]
We formalize single hidden layer sparse auto-encoders as a transform learning problem.
We propose an optimization problem that leads to a predictive model invariant to the noise level at test time.
Our experimental results demonstrate that the trained models yield a significant improvement in stability against varying types of noise.
arXiv Detail & Related papers (2024-06-23T09:06:52Z)
- Adv-KD: Adversarial Knowledge Distillation for Faster Diffusion Sampling [2.91204440475204]
Diffusion Probabilistic Models (DPMs) have emerged as a powerful class of deep generative models.
They rely on sequential denoising steps during sample generation.
We propose a novel method that integrates denoising phases directly into the model's architecture.
arXiv Detail & Related papers (2024-05-31T08:19:44Z)
- Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization [94.4409074435894]
We propose a novel and effective fine-tuning framework named Layerwise Noise Stability Regularization (LNSR).
Specifically, we propose to inject standard Gaussian noise and regularize the hidden representations of the fine-tuned model.
We demonstrate the advantages of the proposed method over other state-of-the-art algorithms including L2-SP, Mixout and SMART.
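As a rough illustration of the idea, the sketch below perturbs one hidden layer with Gaussian noise and penalizes the drift in the model's predictions. The layer choice, noise scale, and Hugging Face-style attribute paths (model.base_model.encoder.layer, .logits) are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a noise stability penalty, assuming a Hugging Face-style
# transformer classifier. The specific layer and noise scale are illustrative.
import torch

def noise_stability_penalty(model, inputs, noise_std=0.1):
    """Penalize output drift when Gaussian noise perturbs a hidden layer."""
    clean_logits = model(**inputs).logits
    # Perturb the output of one encoder layer via a forward hook.
    layer = model.base_model.encoder.layer[0]  # illustrative layer choice
    def add_noise(module, inp, out):
        hidden = out[0]
        return (hidden + noise_std * torch.randn_like(hidden),) + out[1:]
    handle = layer.register_forward_hook(add_noise)
    noisy_logits = model(**inputs).logits
    handle.remove()
    # L2 distance between clean and perturbed predictions.
    return (clean_logits - noisy_logits).pow(2).mean()
```

In training, this penalty would be added to the task loss with a weighting coefficient.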
arXiv Detail & Related papers (2022-06-12T04:42:49Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images from training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
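The per-image kernel adaptation can be sketched with a small hypernetwork that predicts convolution weights from input statistics and applies them via a grouped convolution. Everything below (module names, pooling choice, sizes) is an illustrative assumption rather than ACDNet's actual design.

```python
# Hedged sketch of input-adaptive convolution kernels: a tiny hypernetwork
# predicts per-image dictionary kernels from pooled image statistics.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveKernels(nn.Module):
    def __init__(self, n_atoms=16, kernel=5):
        super().__init__()
        self.n_atoms, self.kernel = n_atoms, kernel
        # Predict kernel weights from a global pooling of the input.
        self.hyper = nn.Linear(1, n_atoms * kernel * kernel)

    def forward(self, x):  # x: (N, 1, H, W)
        stats = x.mean(dim=(2, 3))                    # (N, 1) global pooling
        w = self.hyper(stats).view(-1, 1, self.kernel, self.kernel)
        # Apply each image's own kernels with a grouped convolution.
        n = x.shape[0]
        out = F.conv2d(x.view(1, n, *x.shape[2:]), w,
                       padding=self.kernel // 2, groups=n)
        return out.view(n, self.n_atoms, *x.shape[2:])
```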
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Meta-Optimization of Deep CNN for Image Denoising Using LSTM [0.0]
We investigate the application of the meta-optimization training approach to the DnCNN denoising algorithm to enhance its denoising capability.
Our preliminary experiments on simpler algorithms reveal the prospects of this approach for enhancing DnCNN's performance.
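A common realization of such meta-optimization is a learned coordinate-wise update rule, where an LSTM maps gradients to parameter updates (in the style of learning-to-learn). This toy sketch is an assumption about the general technique, not the paper's exact setup.

```python
# Hedged sketch of a learned optimizer: an LSTM maps each gradient
# coordinate to a parameter update.
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Coordinate-wise learned update rule: gradient -> parameter update."""
    def __init__(self, hidden_size=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden_size)
        self.head = nn.Linear(hidden_size, 1)

    def step(self, grad, state=None):
        # Treat every parameter coordinate as an independent batch element.
        g = grad.detach().reshape(-1, 1)
        h, c = self.cell(g, state)
        update = self.head(h).reshape(grad.shape)
        return update, (h, c)
```

In a full meta-training loop, the optimizee (e.g., a small denoiser) is unrolled for several steps using these updates, and the optimizee's loss is backpropagated into the LSTM's weights.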
arXiv Detail & Related papers (2021-07-14T16:59:44Z)
- Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
The introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements of state-of-the-art models on semi-supervised image classification.
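A compact form of a denoising score matching objective conditioned on a learned latent code might look like the following; encoder and score_net are hypothetical modules, and the Gaussian corruption with a fixed sigma is a simplification of the paper's formulation.

```python
# Hedged sketch of denoising score matching with a learned latent code.
import torch

def dsm_representation_loss(encoder, score_net, x, sigma=0.1):
    z = encoder(x)                      # latent code from the clean image
    eps = torch.randn_like(x)           # standard Gaussian perturbation
    x_noisy = x + sigma * eps
    # The score of the Gaussian-perturbed density at x_noisy is -eps / sigma;
    # train the conditional score network to match it.
    score = score_net(x_noisy, z, sigma)
    target = -eps / sigma
    return ((score - target) ** 2).mean()
```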
arXiv Detail & Related papers (2021-05-29T09:26:02Z)
- CDLNet: Robust and Interpretable Denoising Through Deep Convolutional Dictionary Learning [6.6234935958112295]
Unrolled optimization networks present an interpretable alternative to constructing deep neural networks.
We show that the proposed model outperforms state-of-the-art denoising models when scaled to a similar parameter count.
arXiv Detail & Related papers (2021-03-05T01:15:59Z)
- Simultaneous Denoising and Dereverberation Using Deep Embedding Features [64.58693911070228]
We propose a joint training method for simultaneous speech denoising and dereverberation using deep embedding features.
At the denoising stage, the DC network is leveraged to extract noise-free deep embedding features.
At the dereverberation stage, instead of using the unsupervised K-means clustering algorithm, another neural network is utilized to estimate the anechoic speech.
arXiv Detail & Related papers (2020-04-06T06:34:01Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
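To illustrate what a differentiable max-product component involves, here is a minimal min-sum (log-domain max-product) forward sweep along a 1D chain in PyTorch. The truncated linear pairwise cost, single sweep, and function name are illustrative simplifications of a full BP-Layer, which would combine forward and backward passes over a 2D grid.

```python
# Hedged sketch of one min-sum sweep along a chain of n nodes with L labels.
import torch

def chain_bp_sweep(unary, lam=1.0, trunc=2.0):
    # unary: (n, L) costs in the negative log domain, so min-sum here
    # corresponds to max-product on probabilities.
    n, L = unary.shape
    labels = torch.arange(L, dtype=unary.dtype)
    # Truncated linear pairwise cost: min(lam * |l - l'|, trunc).
    pairwise = (lam * (labels[:, None] - labels[None, :]).abs()).clamp(max=trunc)
    msg = torch.zeros(L, dtype=unary.dtype)
    beliefs = torch.empty_like(unary)
    for i in range(n):
        beliefs[i] = unary[i] + msg           # accumulate incoming message
        # Message to node i+1: minimize over the label at node i.
        msg = (beliefs[i][:, None] + pairwise).min(dim=0).values
    return beliefs.argmin(dim=1)              # labels after the forward sweep
```

A trainable BP-Layer would typically return the beliefs themselves (or a soft argmin over them) so gradients can flow back into the unary and pairwise costs.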
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- Self-Supervised Fast Adaptation for Denoising via Meta-Learning [28.057705167363327]
We propose a new denoising approach that can greatly outperform state-of-the-art supervised denoising methods.
We show that the proposed method can be easily employed with state-of-the-art denoising networks without additional parameters.
arXiv Detail & Related papers (2020-01-09T09:40:53Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.