Computational optimization of convolutional neural networks using separated filters architecture
- URL: http://arxiv.org/abs/2002.07754v1
- Date: Tue, 18 Feb 2020 17:42:13 GMT
- Title: Computational optimization of convolutional neural networks using separated filters architecture
- Authors: Elena Limonova and Alexander Sheshkus and Dmitry Nikolaev
- Abstract summary: We consider a convolutional neural network transformation that reduces computation complexity and thus speeds up neural network processing. The use of convolutional neural networks (CNNs) is the standard approach to image recognition, despite the fact that they can be too computationally demanding.
- Score: 69.73393478582027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers a convolutional neural network transformation that
reduces computation complexity and thus speeds up neural network processing.
The use of convolutional neural networks (CNNs) is the standard approach to image
recognition, despite the fact that they can be too computationally demanding, for
example for recognition on mobile platforms or in embedded systems. In this
paper we propose a CNN structure transformation that expresses 2D convolution
filters as a linear combination of separable filters. It allows separated
convolutional filters to be obtained with standard training algorithms. We study
the computational efficiency of this structure transformation and suggest a fast
implementation that is easily handled by a CPU or GPU. We demonstrate that CNNs
of the proposed structure designed for letter and digit recognition show a 15%
speedup without accuracy loss in an industrial image recognition system. In
conclusion, we discuss the question of a possible accuracy decrease and the
application of the proposed transformation to different recognition problems.
Keywords: convolutional neural networks, computational optimization, separable filters, complexity reduction.
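
The core idea admits a compact illustration. Below is a minimal NumPy sketch, not the authors' implementation: it obtains the separable terms via SVD rather than the paper's direct training of separated filters, and all sizes and variable names are illustrative assumptions.

```python
# Minimal sketch: a 2D convolution filter expressed as a linear
# combination of separable (rank-1) filters. Here SVD supplies the
# decomposition; the paper instead trains the separated filters
# directly with standard algorithms.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
K = 5
W = rng.standard_normal((K, K))      # original 2D filter (illustrative)
x = rng.standard_normal((32, 32))    # input feature map (illustrative)

# SVD: W = sum_r s_r * outer(u_r, v_r), i.e. a linear combination of
# separable filters (outer products of 1D column/row filters).
U, s, Vt = np.linalg.svd(W)
R = 2                                # keep the top-R separable terms

# Full 2D convolution costs O(K^2) multiply-adds per output pixel.
y_full = convolve2d(x, W, mode='same')

# Separated form: R pairs of 1D convolutions, O(2*R*K) per pixel,
# which is cheaper whenever 2*R*K < K^2 (here 20 < 25).
y_sep = np.zeros_like(x)
for r in range(R):
    col = (s[r] * U[:, r]).reshape(K, 1)   # vertical 1D filter
    row = Vt[r, :].reshape(1, K)           # horizontal 1D filter
    y_sep += convolve2d(convolve2d(x, col, mode='same'), row, mode='same')

# The error shrinks as R grows and vanishes (to float precision) at R = K.
print('approximation error:', np.abs(y_full - y_sep).max())
```

With R = 2 and K = 5 the per-pixel cost drops from 25 to 20 multiply-adds; larger kernels benefit more.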
Related papers
- Deep Convolutional Tables: Deep Learning without Convolutions [12.069186324544347]
We propose a novel formulation of deep networks that do not use dot-product neurons and rely on a hierarchy of voting tables instead (sketched after this list).
Deep CT networks have been experimentally shown to have accuracy comparable to that of CNNs of similar architectures.
arXiv Detail & Related papers (2023-04-23T17:49:21Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x (sketched after this list).
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Convolutional Analysis Operator Learning by End-To-End Training of Iterative Neural Networks [3.6280929178575994]
We show how convolutional sparsifying filters can be efficiently learned by end-to-end training of iterative neural networks.
We evaluated our approach on a non-Cartesian 2D cardiac cine MRI example and show that the obtained filters are better suited to the corresponding reconstruction algorithm than those obtained by decoupled pre-training.
arXiv Detail & Related papers (2022-03-04T07:32:16Z)
- Content-Aware Convolutional Neural Networks [98.97634685964819]
Convolutional Neural Networks (CNNs) have achieved great success due to the powerful feature learning ability of convolution layers.
We propose a Content-aware Convolution (CAC) that automatically detects smooth windows and applies a 1x1 convolutional kernel to replace the original large kernel (sketched after this list).
arXiv Detail & Related papers (2021-06-30T03:54:35Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks (sketched after this list).
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- Improved CNN-based Learning of Interpolation Filters for Low-Complexity Inter Prediction in Video Coding [5.46121027847413]
This paper introduces a novel explainable neural network-based inter-prediction scheme.
A novel training framework enables each network branch to resemble a specific fractional shift.
When implemented in the context of the Versatile Video Coding (VVC) test model, 0.77%, 1.27% and 2.25% BD-rate savings can be achieved.
arXiv Detail & Related papers (2021-06-16T16:48:01Z)
- Implementing a foveal-pit inspired filter in a Spiking Convolutional Neural Network: a preliminary study [0.0]
We have presented a Spiking Convolutional Neural Network (SCNN) that incorporates retinal foveal-pit inspired Difference of Gaussian filters and rank-order encoding (sketched after this list).
The model is trained using a variant of the backpropagation algorithm adapted to work with spiking neurons, as implemented in the Nengo library.
The network has achieved up to 90% accuracy, where loss is calculated using the cross-entropy function.
arXiv Detail & Related papers (2021-05-29T15:28:30Z)
- FILTRA: Rethinking Steerable CNN by Filter Transform [59.412570807426135]
The problem of steerable CNNs has been studied from the perspective of group representation theory.
We show that kernels constructed by filter transform can also be interpreted within group representation theory.
This interpretation helps complete the puzzle of steerable CNN theory and provides a novel and simple approach to implementing steerable convolution operators (sketched after this list).
arXiv Detail & Related papers (2021-05-25T03:32:34Z)
- Knowledge Distillation Circumvents Nonlinearity for Optical Convolutional Neural Networks [4.683612295430957]
We propose a Spectral CNN Linear Counterpart (SCLC) network architecture and develop a Knowledge Distillation (KD) approach to circumvent the need for a nonlinearity (sketched after this list).
We show that the KD approach can achieve performance that easily surpasses that of the standard linear version of a CNN and approaches the performance of the nonlinear network.
arXiv Detail & Related papers (2021-02-26T06:35:34Z)
- Unrolling of Deep Graph Total Variation for Image Denoising [106.93258903150702]
In this paper, we combine classical graph signal filtering with deep feature learning into a competitive hybrid design.
We employ interpretable analytical low-pass graph filters and use 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN (sketched after this list).
arXiv Detail & Related papers (2020-10-21T20:04:22Z)
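
A toy sketch of the voting-table idea from "Deep Convolutional Tables" above: a dot-product neuron is replaced by a lookup into a table indexed by a quantized input pattern. The sign-pattern hash and table sizes here are my assumptions, not the paper's construction.

```python
# Toy voting-table "neuron": the output is fetched from a table indexed
# by a 9-bit sign pattern of a 3x3 patch, so no multiplications are
# needed at inference time. Hashing scheme and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(4)
table = rng.standard_normal((2 ** 9, 8))   # 512 rows -> 8 output channels

def table_neuron(patch3x3):
    bits = (patch3x3.ravel() > 0).astype(int)    # quantize the patch
    idx = int(bits @ (2 ** np.arange(9)))        # 9-bit pattern -> table row
    return table[idx]                            # the table "votes"

print(table_neuron(rng.standard_normal((3, 3))))
```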
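A hedged sketch of the feature-grid compression in "Variable Bitrate Neural Fields" above: store one small integer index per grid cell plus a shared codebook. The paper learns the indices end-to-end through a vector-quantized auto-decoder, which this nearest-neighbour stand-in omits; shapes and sizes are illustrative.

```python
# Compress a dense feature grid into (indices, codebook): each cell
# stores one uint8 index instead of a 16-dim float vector. Nearest-
# neighbour assignment stands in for the paper's learned indices.
import numpy as np

rng = np.random.default_rng(2)
grid = rng.standard_normal((64, 64, 16)).astype(np.float32)   # dense grid
codebook = rng.standard_normal((64, 16)).astype(np.float32)   # 64 entries

d = ((grid[..., None, :] - codebook) ** 2).sum(-1)   # distance to each entry
indices = d.argmin(-1).astype(np.uint8)              # one byte per cell

decoded = codebook[indices]                # a lookup reconstructs the grid
print(grid.nbytes / (indices.nbytes + codebook.nbytes))   # 32x smaller here
```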
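A hedged sketch of the Content-aware Convolution idea above: detect smooth windows by local variance and answer them with a 1x1 surrogate of the full kernel. The detection rule, threshold, and surrogate are illustrative assumptions, not the paper's method.

```python
# Content-aware convolution sketch: smooth windows (low local variance)
# get a cheap 1x1 response, the rest get the full KxK convolution.
# A real implementation would evaluate the full kernel only on the
# non-smooth windows; here both are computed densely for clarity.
import numpy as np
from scipy.signal import convolve2d

def cac_like(x, w_full, threshold=1e-2):
    K = w_full.shape[0]
    box = np.ones((K, K)) / K ** 2
    mean = convolve2d(x, box, mode='same')
    var = convolve2d(x ** 2, box, mode='same') - mean ** 2
    smooth = var < threshold                 # smooth-window mask
    w_1x1 = w_full.sum()   # on a constant window, KxK conv == sum(w) * pixel
    y_full = convolve2d(x, w_full, mode='same')
    return np.where(smooth, w_1x1 * x, y_full)
```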
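A hedged sketch of the {-1, +1} decomposition from "Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration" above: weights quantized to 2^M odd integer levels split exactly into M binary branches. The exact encoding in the paper may differ; bit width and shapes are illustrative.

```python
# Decompose uniformly quantized weights q into M binary branches:
# q = sum_i 2^i * b_i with every b_i in {-1, +1}, so each branch can
# run as a binary network (XNOR/popcount arithmetic).
import numpy as np

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, size=(4, 4))

M = 3
levels = 2 ** M    # odd levels {-(2^M-1), ..., -1, +1, ..., 2^M-1}
u = np.clip(np.round((w + 1) / 2 * levels - 0.5), 0, levels - 1).astype(int)
q = 2 * u - (levels - 1)

branches = [2 * ((u >> i) & 1) - 1 for i in range(M)]   # each in {-1, +1}
recon = sum((2 ** i) * b for i, b in enumerate(branches))

assert np.array_equal(recon, q)   # exact: 2u - (2^M-1) = sum 2^i (2*bit_i - 1)
```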
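A minimal sketch of the foveal-pit inspired Difference-of-Gaussians filter used in the SCNN paper above: a narrow center Gaussian minus a wider surround Gaussian. Sigma values and kernel size are illustrative assumptions.

```python
# Center-surround DoG kernel: responds to local contrast, not to
# uniform brightness (the kernel is normalized to zero mean).
import numpy as np

def dog_kernel(size=7, sigma_c=1.0, sigma_s=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    k = center - surround
    return k - k.mean()

print(dog_kernel().round(3))
```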
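A hedged sketch of the filter-transform view behind FILTRA, reduced to the simplest case: a kernel bank built from the four 90-degree rotations of one base filter is equivariant under the rotation group C4. FILTRA's group-representation construction is far more general; this only illustrates the transform idea.

```python
# Filter-transform sketch: stack rotated copies of one base filter.
# Rotating the input permutes (and rotates) the channel responses,
# which is the equivariance property steerable CNNs exploit.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(3)
base = rng.standard_normal((5, 5))
bank = [np.rot90(base, k) for k in range(4)]   # C4 orbit of the filter

x = rng.standard_normal((16, 16))
resp = [convolve2d(x, f, mode='same') for f in bank]
resp_rot = [convolve2d(np.rot90(x), f, mode='same') for f in bank]

# conv(rot(x), rot(f)) == rot(conv(x, f)) for 90-degree rotations:
print(np.allclose(resp_rot[1], np.rot90(resp[0])))   # True
```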
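A minimal sketch of a standard knowledge-distillation loss of the kind the SCLC paper above uses to train a linear student against a nonlinear teacher. The temperature, weighting, and PyTorch formulation are common KD practice, assumed here rather than taken from the paper.

```python
# Classic distillation objective: match the teacher's tempered output
# distribution (soft targets) plus the usual cross-entropy (hard targets).
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction='batchmean',
    ) * (T * T)                    # T^2 keeps the gradient scale constant
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```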
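A hedged sketch of an analytical low-pass graph filter of the kind the unrolled graph-TV denoiser above builds on. The specific filter form (I + mu*L)^{-1}, a common graph Tikhonov smoother, is my assumption, not necessarily the paper's.

```python
# Low-pass filtering on a graph: attenuate the high graph frequencies
# of a signal x by solving (I + mu * L) y = x, with L the symmetric
# normalized Laplacian of adjacency matrix A (assumed symmetric).
import numpy as np

def lowpass_graph_filter(A, x, mu=1.0):
    d = A.sum(axis=1)
    D_is = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))  # D^{-1/2}
    L = np.eye(len(A)) - D_is @ A @ D_is                 # normalized Laplacian
    return np.linalg.solve(np.eye(len(A)) + mu * L, x)
```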
This list is automatically generated from the titles and abstracts of the papers on this site.