Gabor is Enough: Interpretable Deep Denoising with a Gabor Synthesis Dictionary Prior
- URL: http://arxiv.org/abs/2204.11146v1
- Date: Sat, 23 Apr 2022 22:21:54 GMT
- Title: Gabor is Enough: Interpretable Deep Denoising with a Gabor Synthesis Dictionary Prior
- Authors: Nikola Janjušević, Amirhossein Khalilian-Gourtani, and Yao Wang
- Abstract summary: Gabor-like filters have been observed in the early layers of CNN classifiers and throughout low-level image processing networks.
In this work, we take this observation to the extreme and explicitly constrain the filters of a natural-image denoising CNN to be learned 2D real Gabor filters.
We find that the proposed network (GDLNet) can achieve near state-of-the-art denoising performance amongst popular fully convolutional neural networks.
- Score: 6.297103076360578
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Image processing neural networks, natural and artificial, have a long history
with orientation-selectivity, often described mathematically as Gabor filters.
Gabor-like filters have been observed in the early layers of CNN classifiers
and even throughout low-level image processing networks. In this work, we take
this observation to the extreme and explicitly constrain the filters of a
natural-image denoising CNN to be learned 2D real Gabor filters. Surprisingly,
we find that the proposed network (GDLNet) can achieve near state-of-the-art
denoising performance amongst popular fully convolutional neural networks, with
only a fraction of the learned parameters. We further verify that this
parameterization maintains the noise-level generalization (training vs.
inference mismatch) characteristics of the base network, and investigate the
contribution of individual Gabor filter parameters to the performance of the
denoiser. We present positive findings for the interpretation of dictionary
learning networks as performing accelerated sparse-coding via the importance of
untied learned scale parameters between network layers. Our network's success
suggests that representations used by low-level image processing CNNs can be as
simple and interpretable as Gabor filterbanks.
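To make the abstract's core idea concrete, below is a minimal PyTorch sketch (not the authors' released GDLNet code; the class and parameter names GaborConv2d, sigma, theta, freq, psi, and gamma are illustrative assumptions). It shows a convolutional layer whose kernels are synthesized on the fly from five learned Gabor parameters each, instead of being stored as free weights:

```python
# Minimal sketch of a Gabor-parameterized conv layer (illustrative, not the
# authors' released code). Each 2D kernel is generated from 5 learned scalars.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, ksize=7):
        super().__init__()
        n = out_ch * in_ch  # one Gabor parameter set per 2D kernel slice
        self.sigma = nn.Parameter(1.0 + 2.0 * torch.rand(n))  # envelope width
        self.theta = nn.Parameter(math.pi * torch.rand(n))    # orientation
        self.freq  = nn.Parameter(0.5 * torch.rand(n))        # spatial frequency
        self.psi   = nn.Parameter(math.pi * torch.rand(n))    # phase offset
        self.gamma = nn.Parameter(torch.ones(n))               # aspect ratio
        self.shape = (out_ch, in_ch, ksize, ksize)
        r = ksize // 2
        ys, xs = torch.meshgrid(torch.arange(-r, r + 1.0),
                                torch.arange(-r, r + 1.0), indexing="ij")
        self.register_buffer("xs", xs.flatten())
        self.register_buffer("ys", ys.flatten())

    def kernels(self):
        # Rotate coordinates by theta, then modulate a Gaussian envelope
        # with a cosine carrier: the standard 2D real Gabor function.
        x, y = self.xs[None, :], self.ys[None, :]
        c, s = torch.cos(self.theta)[:, None], torch.sin(self.theta)[:, None]
        xr, yr = x * c + y * s, -x * s + y * c
        env = torch.exp(-(xr**2 + (self.gamma[:, None] * yr)**2)
                        / (2 * self.sigma[:, None]**2))
        carrier = torch.cos(2 * math.pi * self.freq[:, None] * xr
                            + self.psi[:, None])
        return (env * carrier).view(self.shape)

    def forward(self, x):
        k = self.kernels()
        return F.conv2d(x, k, padding=k.shape[-1] // 2)

# Usage: a 32-filter Gabor layer on a batch of grayscale images.
layer = GaborConv2d(1, 32, ksize=7)
out = layer(torch.randn(4, 1, 64, 64))  # -> (4, 32, 64, 64)
```

For a 7x7 kernel this replaces 49 free weights with 5 scalars per filter, consistent with the abstract's claim of near state-of-the-art denoising with a fraction of the learned parameters. The abstract's sparse-coding interpretation can be sketched in the same spirit: one unrolled ISTA iteration in which each layer k owns its own (untied) step size and threshold, here named eta_k and tau_k (again assumed names, not from the paper):

```python
# Sketch of one unrolled ISTA iteration with *untied* per-layer scales.
# Reuses the imports above; W has shape (C, 1, k, k), so conv2d and
# conv_transpose2d form an adjoint analysis/synthesis pair for a
# 1-channel image y and a C-channel code map z.
def ista_step(z, y, W, eta_k, tau_k, pad=3):
    r = F.conv_transpose2d(z, W, padding=pad) - y        # residual of y ≈ Dz
    z = z - eta_k * F.conv2d(r, W, padding=pad)          # gradient (adjoint) step
    return torch.sign(z) * torch.relu(z.abs() - tau_k)  # soft-threshold
```

Letting eta_k and tau_k differ from layer to layer is the "untied learned scale" behavior that the abstract credits with accelerating sparse coding.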
Related papers
- PICNN: A Pathway towards Interpretable Convolutional Neural Networks [12.31424771480963]
We introduce a novel pathway to alleviate the entanglement between filters and image classes.
We use Bernoulli sampling to generate the filter-cluster assignment matrix from a learnable filter-class correspondence matrix.
We evaluate the effectiveness of our method on ten widely used network architectures.
arXiv Detail & Related papers (2023-12-19T11:36:03Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
- New SAR target recognition based on YOLO and very deep multi-canonical correlation analysis [0.1503974529275767]
This paper proposes a robust feature extraction method for SAR image target classification by adaptively fusing effective features from different CNN layers.
Experiments on the MSTAR dataset demonstrate that the proposed method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-10-28T18:10:26Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop AdaGNN, a graph neural network framework with an adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Unrolling of Deep Graph Total Variation for Image Denoising [106.93258903150702]
In this paper, we combine classical graph signal filtering with deep feature learning into a competitive hybrid design.
We employ interpretable analytical low-pass graph filters and use 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN.
arXiv Detail & Related papers (2020-10-21T20:04:22Z)
- Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters [64.46270549587004]
Convolutional neural networks (CNNs) have been successfully used in a range of tasks.
CNNs are often viewed as "black boxes" that lack interpretability.
We propose a novel strategy to train interpretable CNNs by encouraging class-specific filters.
arXiv Detail & Related papers (2020-07-16T09:12:26Z)
- The Neural Tangent Link Between CNN Denoisers and Non-Local Filters [4.254099382808598]
Convolutional Neural Networks (CNNs) are now a well-established tool for solving computational imaging problems.
We introduce a formal link, via the neural tangent kernel (NTK), between such networks and well-known non-local filtering techniques.
We evaluate our findings via extensive image denoising experiments.
arXiv Detail & Related papers (2020-06-03T16:50:54Z)
- Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation [86.91324735966766]
Filters are the key components of modern convolutional neural networks (CNNs).
In this paper, we introduce filter grafting, which re-activates unimportant (invalid) filters to improve the network's representation capability.
We develop a novel criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks.
arXiv Detail & Related papers (2020-04-26T08:36:26Z)
- Gabor Convolutional Networks [103.87356592690669]
We propose a new deep model, termed Gabor Convolutional Networks (GCNs), which incorporates Gabor filters into deep convolutional neural networks (DCNNs).
GCNs can be easily implemented and are compatible with any popular deep learning architecture.
Experimental results demonstrate the strong capability of our algorithm in recognizing objects whose scale and rotation change frequently.
arXiv Detail & Related papers (2017-05-03T14:37:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.