Learning Versatile Convolution Filters for Efficient Visual Recognition
- URL: http://arxiv.org/abs/2109.09310v1
- Date: Mon, 20 Sep 2021 06:07:14 GMT
- Title: Learning Versatile Convolution Filters for Efficient Visual Recognition
- Authors: Kai Han, Yunhe Wang, Chang Xu, Chunjing Xu, Enhua Wu, Dacheng Tao
- Abstract summary: This paper introduces versatile filters to construct efficient convolutional neural networks.
We conduct a theoretical analysis of network complexity and introduce an efficient convolution scheme.
Experimental results on benchmark datasets and neural networks demonstrate that our versatile filters achieve accuracy comparable to that of the original filters.
- Score: 125.34595948003745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces versatile filters to construct efficient convolutional
neural networks that are widely used in various visual recognition tasks.
Considering the demands of efficient deep learning techniques running on
cost-effective hardware, a number of methods have been developed to learn
compact neural networks. Most of these works aim to slim down filters in
different ways, e.g., investigating small, sparse, or quantized filters. In
contrast, we treat filters from an additive perspective. A series of secondary
filters can be derived from a primary filter with the help of binary masks.
These secondary filters are all inherited from the primary filter without occupying
more storage, but once unfolded during computation they can significantly
enhance the filter's capability by integrating information extracted from
different receptive fields. Besides spatial versatile filters, we additionally
investigate versatile filters from the channel perspective. Binary masks can be
further customized for different primary filters under orthogonal constraints.
We conduct a theoretical analysis of network complexity and introduce an
efficient convolution scheme. Experimental results on benchmark datasets
and neural networks demonstrate that our versatile filters achieve accuracy
comparable to that of the original filters, while requiring less memory and
computation.
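To make the additive idea concrete, below is a minimal sketch of a spatial versatile convolution, assuming PyTorch. The class name, the nested-square mask layout, and the summation of secondary responses are illustrative assumptions rather than the authors' exact construction; the sketch only shows how several secondary filters can be read out of a single stored primary filter through binary masks.

```python
# Minimal sketch of a spatial versatile convolution (illustrative, not the
# authors' exact scheme): one stored primary filter, reused under fixed
# binary masks that select nested receptive fields.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialVersatileConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=7, num_secondary=3):
        super().__init__()
        assert kernel_size >= 2 * num_secondary - 1
        # Only the primary filters are stored; secondary filters add no parameters.
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01
        )
        # Fixed binary masks select nested central regions of the primary filter,
        # e.g. the 7x7, 5x5, and 3x3 centers of a 7x7 kernel.
        masks = []
        for s in range(num_secondary):
            m = torch.zeros(kernel_size, kernel_size)
            m[s:kernel_size - s, s:kernel_size - s] = 1.0
            masks.append(m)
        self.register_buffer("masks", torch.stack(masks))
        self.padding = kernel_size // 2

    def forward(self, x):
        # Each masked copy of the primary filter acts as a secondary filter with a
        # smaller receptive field; their responses are accumulated here (summation
        # is an illustrative choice for combining them).
        out = 0.0
        for m in self.masks:
            out = out + F.conv2d(x, self.weight * m, padding=self.padding)
        return out


layer = SpatialVersatileConv2d(16, 32, kernel_size=7, num_secondary=3)
y = layer(torch.randn(1, 16, 56, 56))  # -> torch.Size([1, 32, 56, 56])
```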
Related papers
- As large as it gets: Learning infinitely large Filters via Neural Implicit Functions in the Fourier Domain [22.512062422338914]
Recent work in neural networks for image classification has seen a strong tendency towards increasing the spatial context.
We propose a module for studying the effective filter size of convolutional neural networks.
Our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters are well localized and relatively small in practice.
arXiv Detail & Related papers (2023-07-19T14:21:11Z) - Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z) - Efficient CNNs via Passive Filter Pruning [23.661189257759535]
Convolutional neural networks (CNNs) have shown state-of-the-art performance in various applications.
CNNs are resource-hungry due to their high computational complexity and memory requirements.
Recent efforts toward achieving computational efficiency in CNNs involve filter pruning methods.
arXiv Detail & Related papers (2023-04-05T09:19:19Z) - Improve Convolutional Neural Network Pruning by Maximizing Filter Variety [0.0]
Neural network pruning is a widely used strategy for reducing model storage and computing requirements.
Common pruning criteria, such as the l1-norm or movement, usually do not consider the individual utility of filters (a minimal sketch of the l1-norm criterion follows this list).
We present a technique solving those two issues, and which can be appended to any pruning criteria.
arXiv Detail & Related papers (2022-03-11T09:00:59Z) - Direct design of biquad filter cascades with deep learning by sampling random polynomials [5.1118282767275005]
In this work, we learn a direct mapping from the target magnitude response to the filter coefficient space with a neural network trained on millions of random filters.
We demonstrate that our approach enables both fast and accurate estimation of filter coefficients given a desired response.
We compare our method against existing methods including modified Yule-Walker and gradient descent and show IIRNet is, on average, both faster and more accurate.
arXiv Detail & Related papers (2021-10-07T17:58:08Z) - Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior to a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
arXiv Detail & Related papers (2021-06-02T19:15:34Z) - Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z) - Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation [86.91324735966766]
Filters are the key components of modern convolutional neural networks (CNNs).
In this paper, we introduce filter grafting to improve their representation capability.
We develop a novel criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks.
arXiv Detail & Related papers (2020-04-26T08:36:26Z) - Filter Grafting for Deep Neural Networks [71.39169475500324]
Filter grafting aims to improve the representation capability of Deep Neural Networks (DNNs).
We develop an entropy-based criterion to measure the information of filters and an adaptive weighting strategy for balancing the grafted information among networks.
For example, the grafted MobileNetV2 outperforms the non-grafted MobileNetV2 by about 7 percent on the CIFAR-100 dataset.
arXiv Detail & Related papers (2020-01-15T03:18:57Z)
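Several of the pruning entries above (e.g., "Improve Convolutional Neural Network Pruning by Maximizing Filter Variety" and "Dependency Aware Filter Pruning") take weight-norm criteria such as the l1-norm as their starting point. The sketch below shows only that common baseline criterion in PyTorch; the pruning ratio and in-place zero-masking are illustrative assumptions and do not reproduce any of the listed methods.

```python
# Generic l1-norm filter-importance criterion referenced by several entries
# above. This is the common baseline, not any specific paper's method.
import torch
import torch.nn as nn


def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    # One importance score per output filter: sum of absolute kernel weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


def prune_filters_by_l1(conv: nn.Conv2d, ratio: float = 0.5) -> torch.Tensor:
    # Zero out the lowest-scoring filters and return the boolean keep-mask.
    scores = l1_filter_scores(conv)
    num_keep = max(1, int(round(conv.out_channels * (1 - ratio))))
    keep = torch.zeros(conv.out_channels, dtype=torch.bool)
    keep[scores.topk(num_keep).indices] = True
    with torch.no_grad():
        conv.weight[~keep] = 0.0
        if conv.bias is not None:
            conv.bias[~keep] = 0.0
    return keep


conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
kept = prune_filters_by_l1(conv, ratio=0.5)
print(int(kept.sum()), "of", conv.out_channels, "filters kept")
```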