Unsharp Mask Guided Filtering
- URL: http://arxiv.org/abs/2106.01428v1
- Date: Wed, 2 Jun 2021 19:15:34 GMT
- Title: Unsharp Mask Guided Filtering
- Authors: Zenglin Shi, Yunlu Chen, Efstratios Gavves, Pascal Mettes, and Cees G.M. Snoek
- Abstract summary: The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
- Score: 53.14430987860308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of this paper is guided image filtering, which emphasizes the
importance of structure transfer during filtering by means of an additional
guidance image. Where classical guided filters transfer structures using
hand-designed functions, recent guided filters have been considerably advanced
through parametric learning of deep networks. The state-of-the-art leverages
deep networks to estimate the two core coefficients of the guided filter. In
this work, we posit that simultaneously estimating both coefficients is
suboptimal, resulting in halo artifacts and structure inconsistencies. Inspired
by unsharp masking, a classical technique for edge enhancement that requires
only a single coefficient, we propose a new and simplified formulation of the
guided filter. Our formulation enjoys a filtering prior from a low-pass filter
and enables explicit structure transfer by estimating a single coefficient.
Based on our proposed formulation, we introduce a successive guided filtering
network, which provides multiple filtering results from a single network,
allowing for a trade-off between accuracy and efficiency. Extensive ablations,
comparisons and analysis show the effectiveness and efficiency of our
formulation and network, resulting in state-of-the-art results across filtering
tasks like upsampling, denoising, and cross-modality filtering. Code is
available at https://github.com/shizenglin/Unsharp-Mask-Guided-Filtering.
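To make the formulation concrete, below is a minimal NumPy/SciPy sketch of the single-coefficient idea from the abstract: the output inherits a low-pass prior from the target image and receives the guidance's high-frequency structure scaled by one coefficient. The box filter, the function and variable names, and the hand-set scalar `gamma` are illustrative assumptions; in the paper the coefficient is estimated by a deep network (via the successive guided filtering network), which is not modeled here.

```python
# Minimal sketch of the single-coefficient guided filtering idea described in
# the abstract. Assumptions: a box filter as the low-pass prior and a hand-set
# scalar `gamma`; the paper estimates the coefficient with a deep network.
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_mask_guided_filter(target, guidance, radius=4, gamma=0.5):
    """target, guidance: 2-D float arrays in [0, 1]."""
    size = 2 * radius + 1
    target_lp = uniform_filter(target, size=size)    # low-pass filtering prior
    guidance_lp = uniform_filter(guidance, size=size)
    high_freq = guidance - guidance_lp               # structure to transfer
    return target_lp + gamma * high_freq             # single-coefficient transfer

# Classical unsharp masking is the self-guided special case:
# sharpened = x + gamma * (x - lp(x)) = lp(x) + (1 + gamma) * (x - lp(x)).

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.random((64, 64))    # e.g. a noisy low-resolution depth map
    guidance = rng.random((64, 64))  # e.g. a registered intensity image
    print(unsharp_mask_guided_filter(target, guidance).shape)
```

With gamma = 0 the output is simply the smoothed target; larger values transfer more guidance structure, which is the explicit structure-transfer control the abstract describes.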
Related papers
- Structured Network Pruning by Measuring Filter-wise Interactions [6.037167142826297]
We propose SNPFI (Structured Network Pruning by measuring Filter-wise Interaction), a structured network pruning approach.
During pruning, SNPFI automatically assigns the proper sparsity based on filter utilization strength.
We empirically demonstrate the effectiveness of SNPFI with several commonly used CNN models.
arXiv Detail & Related papers (2023-07-03T05:26:05Z)
- Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with a Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z)
- Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning methods introduce structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning.
Our method achieves results competitive with many state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-16T13:58:58Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
- Learning Versatile Convolution Filters for Efficient Visual Recognition [125.34595948003745]
This paper introduces versatile filters to construct efficient convolutional neural networks.
We conduct a theoretical analysis of network complexity and introduce an efficient convolution scheme.
Experimental results on benchmark datasets and neural networks demonstrate that our versatile filters achieve accuracy comparable to that of the original filters.
arXiv Detail & Related papers (2021-09-20T06:07:14Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
- Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation [86.91324735966766]
Filters are the key components of modern convolutional neural networks (CNNs).
In this paper, we introduce filter grafting (Method) to achieve this goal.
We develop a novel criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks.
arXiv Detail & Related papers (2020-04-26T08:36:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.