CDLNet: Robust and Interpretable Denoising Through Deep Convolutional
Dictionary Learning
- URL: http://arxiv.org/abs/2103.04779v1
- Date: Fri, 5 Mar 2021 01:15:59 GMT
- Authors: Nikola Janjušević, Amirhossein Khalilian-Gourtani, Yao Wang
- Abstract summary: Unrolled optimization networks offer an interpretable alternative to constructing deep neural networks.
We show that the proposed model outperforms state-of-the-art denoising models when scaled to a similar parameter count.
- Score: 6.6234935958112295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based methods hold state-of-the-art results in image denoising,
but remain difficult to interpret due to their construction from poorly
understood building blocks such as batch-normalization, residual learning, and
feature domain processing. Unrolled optimization networks offer an
interpretable alternative to constructing deep neural networks by deriving
their architecture from classical iterative optimization methods, without the
use of tricks from the standard deep learning toolbox. So far, such methods have
demonstrated performance close to that of state-of-the-art models while using
their interpretable construction to achieve a comparably low learned parameter
count. In this work, we propose an unrolled convolutional dictionary learning
network (CDLNet) and demonstrate its competitive denoising performance in both
low and high parameter count regimes. Specifically, we show that the proposed
model outperforms the state-of-the-art denoising models when scaled to similar
parameter count. In addition, we leverage the model's interpretable
construction to propose an augmentation of the network's thresholds that
enables state-of-the-art blind denoising performance and near-perfect
generalization to noise levels unseen during training.
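To make the unrolled construction concrete, the following is a minimal sketch of one unrolled convolutional-dictionary ISTA layer with a noise-adaptive soft-threshold, in the spirit of the abstract. It is not the authors' code: the single-channel setting, the layer shapes, and the affine threshold form tau = tau0 + tau1 * sigma are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledCDLLayer(nn.Module):
    """One ISTA-style iteration of convolutional dictionary learning,
    unrolled as a network layer (illustrative sketch, not the paper's code).

    Update: z <- soft_threshold(z - A(B(z) - y), tau(sigma)), where B is a
    learned synthesis convolution, A a learned adjoint/analysis convolution,
    and tau an affine function of the noise level sigma (an assumed form of
    the abstract's threshold augmentation).
    """

    def __init__(self, channels=32, kernel_size=7):
        super().__init__()
        pad = kernel_size // 2
        self.analysis = nn.Conv2d(1, channels, kernel_size, padding=pad, bias=False)
        self.synthesis = nn.Conv2d(channels, 1, kernel_size, padding=pad, bias=False)
        self.tau0 = nn.Parameter(torch.full((1, channels, 1, 1), 1e-2))
        self.tau1 = nn.Parameter(torch.full((1, channels, 1, 1), 1e-2))

    def forward(self, z, y, sigma):
        z = z - self.analysis(self.synthesis(z) - y)   # gradient step on the data term
        tau = self.tau0 + self.tau1 * sigma            # noise-adaptive threshold
        return torch.sign(z) * F.relu(z.abs() - tau)   # soft thresholding
```

Stacking several such layers (typically with untied per-layer weights) and reconstructing the image as x_hat = synthesis(z_K) yields the unrolled network; feeding an estimated sigma at test time is one way an adaptive threshold can support blind denoising across unseen noise levels.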
Related papers
- Pivotal Auto-Encoder via Self-Normalizing ReLU [20.76999663290342]
We formalize single hidden layer sparse auto-encoders as a transform learning problem.
We propose an optimization problem that leads to a predictive model invariant to the noise level at test time.
Our experimental results demonstrate that the trained models yield a significant improvement in stability against varying types of noise.
arXiv Detail & Related papers (2024-06-23T09:06:52Z)
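As a point of reference for the transform-learning view in the entry above, here is a generic single-hidden-layer sparse auto-encoder sketch; the soft-threshold activation, untied weights, and dimensions are assumptions, and the paper's self-normalizing ReLU is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseTransformAE(nn.Module):
    """Single-hidden-layer sparse auto-encoder viewed as transform learning
    (generic sketch; the paper's self-normalizing ReLU is not reproduced).
    """

    def __init__(self, dim=256, code_dim=512):
        super().__init__()
        self.W = nn.Linear(dim, code_dim, bias=False)          # analysis transform
        self.D = nn.Linear(code_dim, dim, bias=False)          # synthesis dictionary
        self.tau = nn.Parameter(torch.full((code_dim,), 0.1))  # sparsity threshold

    def forward(self, x):
        z = self.W(x)
        z = torch.sign(z) * F.relu(z.abs() - self.tau)  # sparse code via soft threshold
        return self.D(z)                                # reconstruction
```

In the paper, invariance to the test-time noise level comes from the proposed optimization problem; the fixed learned threshold here is only a stand-in.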
- Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z)
- Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization [94.4409074435894]
We propose a novel and effective fine-tuning framework, named Layerwise Noise Stability Regularization (LNSR).
Specifically, we propose to inject standard Gaussian noise and regularize the hidden representations of the fine-tuned model.
We demonstrate the advantages of the proposed method over other state-of-the-art algorithms including L2-SP, Mixout and SMART.
arXiv Detail & Related papers (2022-06-12T04:42:49Z)
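The noise-stability idea in the entry above can be sketched in a few lines; the injection point, the squared-error penalty, and all names here are assumptions rather than the paper's exact objective.

```python
import torch

def noise_stability_penalty(upper_layers, h, sigma=1e-3):
    """Layerwise noise-stability penalty in the spirit of LNSR (sketch).

    Perturbs a hidden representation h with standard Gaussian noise and
    penalizes how far the outputs of the layers above drift; the squared
    error and the injection point are assumptions, not the exact objective.
    """
    clean, noisy = h, h + sigma * torch.randn_like(h)
    for layer in upper_layers:   # modules above the injection point
        clean, noisy = layer(clean), layer(noisy)
    return (clean - noisy).pow(2).mean()

# Assumed usage during fine-tuning:
#   total_loss = task_loss + lam * noise_stability_penalty(layers, h)
```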
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as a structural prior and reveal the underlying signal interdependencies.
Deep unrolling and deep equilibrium-based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
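For the deep-equilibrium side of the entry above, here is a minimal fixed-point solver sketch; in a block sparse coding setting, f would be one sparse-coding update conditioned on the signal matrix, and the plain iteration and tolerances are illustrative assumptions.

```python
import torch

def deq_fixed_point(f, x, z0, max_iter=50, tol=1e-4):
    """Naive fixed-point solver for a deep equilibrium model (sketch).

    Solves z* = f(z*, x); in a block sparse coding setting, f would be one
    sparse-coding update conditioned on the signal matrix x.
    """
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        # Stop once the relative change falls below the tolerance.
        if (z_next - z).norm() <= tol * z.norm().clamp_min(1e-8):
            return z_next
        z = z_next
    return z
```

In practice, deep equilibrium models backpropagate through the fixed point implicitly rather than through the iterations, which keeps memory cost constant in effective depth.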
- CDLNet: Noise-Adaptive Convolutional Dictionary Learning Network for Blind Denoising and Demosaicing [4.975707665155918]
Unrolled optimization networks present an interpretable alternative to constructing deep neural networks.
We propose an unrolled convolutional dictionary learning network (CDLNet) and demonstrate its competitive denoising and joint denoising and demosaicing (JDD) performance.
Specifically, we show that the proposed model outperforms state-of-the-art fully convolutional denoising and JDD models when scaled to a similar parameter count.
arXiv Detail & Related papers (2021-12-02T01:23:21Z)
- Improved Model based Deep Learning using Monotone Operator Learning (MOL) [25.077510176642807]
Model-based deep learning (MoDL) algorithms that rely on unrolling are emerging as powerful tools for image recovery.
We introduce a novel monotone operator learning framework to overcome some of the challenges associated with current unrolled frameworks.
We demonstrate the utility of the proposed scheme in the context of parallel MRI.
arXiv Detail & Related papers (2021-11-22T17:42:27Z)
- Meta-Optimization of Deep CNN for Image Denoising Using LSTM [0.0]
We investigate the application of the meta-optimization training approach to the DnCNN denoising algorithm to enhance its denoising capability.
Our preliminary experiments on simpler algorithms suggest that the meta-optimization training approach can likewise enhance DnCNN's denoising capability.
arXiv Detail & Related papers (2021-07-14T16:59:44Z)
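One common reading of "meta-optimization using LSTM", as in the entry above, is the learning-to-learn setup, where an LSTM emits per-parameter updates from gradients; the sketch below follows that generic recipe and is not taken from the paper.

```python
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Coordinate-wise learned optimizer (generic learning-to-learn sketch,
    not taken from the paper): an LSTM maps each parameter's gradient to a
    proposed update step.
    """

    def __init__(self, hidden=20):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        g = grad.view(1, -1, 1)           # (seq=1, batch=num_params, feat=1)
        h, state = self.lstm(g, state)
        step = self.head(h).view(-1)      # per-parameter update
        return step, state

# Assumed usage: theta = theta + step, with the LSTM itself trained so that
# the denoiser's loss after a few such updates is minimized.
```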
- Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
In contrast to prior approaches, the introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements over state-of-the-art models on semi-supervised image classification.
arXiv Detail & Related papers (2021-05-29T09:26:02Z)
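Since the entry above centers on the denoising score matching objective, here is the standard (unconditioned) form of that loss; the sigma**2 weighting and the absence of the paper's learned latent code are deliberate simplifications.

```python
import torch

def denoising_score_matching_loss(score_net, x, sigma=0.1):
    """Standard denoising score matching objective (sketch).

    Perturbs data with Gaussian noise and regresses the model's score
    estimate onto the score of the perturbation kernel; with the common
    lambda(sigma) = sigma**2 weighting the loss reduces to the form below.
    The paper's representation-learning variant additionally conditions
    the score network on a learned latent code, omitted here.
    """
    noise = torch.randn_like(x)
    x_noisy = x + sigma * noise
    # Kernel score: -(x_noisy - x) / sigma**2 = -noise / sigma.
    return (sigma * score_net(x_noisy) + noise).pow(2).mean()
```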
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- Self-Supervised Fast Adaptation for Denoising via Meta-Learning [28.057705167363327]
We propose a new denoising approach that can greatly outperform the state-of-the-art supervised denoising methods.
We show that the proposed method can be easily employed with state-of-the-art denoising networks without additional parameters.
arXiv Detail & Related papers (2020-01-09T09:40:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.