Efficient CNN Architecture Design Guided by Visualization
- URL: http://arxiv.org/abs/2207.10318v1
- Date: Thu, 21 Jul 2022 06:22:15 GMT
- Title: Efficient CNN Architecture Design Guided by Visualization
- Authors: Liangqi Zhang, Haibo Shen, Yihao Luo, Xiang Cao, Leixilan Pan,
Tianjiang Wang, Qi Feng
- Abstract summary: VGNetG-1.0MP achieves 67.7% top-1 accuracy with 0.99M parameters and 69.2% top-1 accuracy with 1.14M parameters on the ImageNet classification dataset.
Our VGNetF-1.5MP achieves 64.4% (-3.2%) top-1 accuracy, and 66.2% (-1.4%) top-1 accuracy with additional Gaussian kernels.
- Score: 13.074652653088584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern efficient Convolutional Neural Networks (CNNs) always use Depthwise
Separable Convolutions (DSCs) and Neural Architecture Search (NAS) to reduce the
number of parameters and the computational complexity, but some inherent
characteristics of networks are overlooked. Inspired by visualizing feature
maps and N$\times$N(N$>$1) convolution kernels, several guidelines are
introduced in this paper to further improve parameter efficiency and inference
speed. Based on these guidelines, our parameter-efficient CNN architecture,
called \textit{VGNetG}, achieves better accuracy and lower latency than
previous networks with about 30%$\thicksim$50% fewer parameters. Our
VGNetG-1.0MP achieves 67.7% top-1 accuracy with 0.99M parameters and 69.2%
top-1 accuracy with 1.14M parameters on the ImageNet classification dataset.
Furthermore, we demonstrate that edge detectors can replace learnable
depthwise convolution layers to mix features by replacing the N$\times$N
kernels with fixed edge detection kernels. Our VGNetF-1.5MP achieves
64.4% (-3.2%) top-1 accuracy, and 66.2% (-1.4%) top-1 accuracy with additional
Gaussian kernels.
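To make the idea of replacing learnable depthwise kernels with fixed filters concrete, here is a minimal PyTorch sketch of a depthwise layer whose 3$\times$3 kernels are frozen edge-detection filters (plus an optional Gaussian kernel). The particular filter bank and layer shape are illustrative assumptions, not the authors' released VGNetF code.

```python
# Minimal sketch (not the authors' code): a depthwise "mixing" layer whose
# 3x3 kernels are fixed edge-detection / Gaussian filters instead of being learned.
import torch
import torch.nn as nn

def fixed_depthwise(channels: int, use_gaussian: bool = True) -> nn.Conv2d:
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    sobel_y = sobel_x.t()
    laplacian = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
    gaussian = torch.tensor([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
    bank = [sobel_x, sobel_y, laplacian] + ([gaussian] if use_gaussian else [])

    conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                     groups=channels, bias=False)  # depthwise: one kernel per channel
    with torch.no_grad():
        for c in range(channels):
            conv.weight[c, 0] = bank[c % len(bank)]  # cycle through the fixed filters
    conv.weight.requires_grad_(False)                # kernels stay fixed during training
    return conv

# Usage: drop it in where a learnable depthwise conv would mix spatial features.
x = torch.randn(1, 32, 56, 56)
y = fixed_depthwise(32)(x)
print(y.shape)  # torch.Size([1, 32, 56, 56])
```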
Related papers
- KernelWarehouse: Rethinking the Design of Dynamic Convolution [16.101179962553385]
KernelWarehouse redefines the basic concepts of "kernels", "assembling kernels" and "attention function".
We validate the effectiveness of KernelWarehouse on the ImageNet and MS-COCO datasets using various ConvNet architectures.
arXiv Detail & Related papers (2024-06-12T05:16:26Z)
- Towards Generalized Entropic Sparsification for Convolutional Neural Networks [0.0]
Convolutional neural networks (CNNs) are reported to be overparametrized.
Here, we introduce a layer-by-layer, data-driven pruning method based on a computationally scalable entropic relaxation of the pruning problem.
The sparse subnetwork is found from the pre-trained (full) CNN using the network entropy minimization as a sparsity constraint.
arXiv Detail & Related papers (2024-04-06T21:33:39Z)
- EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications [68.35683849098105]
We introduce split depth-wise transpose attention (SDTA) encoder that splits input tensors into multiple channel groups.
Our EdgeNeXt model with 1.3M parameters achieves 71.2% top-1 accuracy on ImageNet-1K.
Our EdgeNeXt model with 5.6M parameters achieves 79.4% top-1 accuracy on ImageNet-1K.
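As a rough illustration of the channel-splitting idea in the SDTA encoder mentioned above, the sketch below splits the input into channel groups and mixes each group with its own depthwise convolution. The group count, kernel size and the hierarchical feeding of one group into the next are assumptions, and the transposed-attention part is omitted entirely; this is not EdgeNeXt's implementation.

```python
# Hedged sketch: split the input into channel groups and mix each group with its
# own 3x3 depthwise conv, feeding each group's output into the next group.
import torch
import torch.nn as nn

class SplitDepthwise(nn.Module):
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        g = channels // groups
        self.dw = nn.ModuleList(
            nn.Conv2d(g, g, kernel_size=3, padding=1, groups=g) for _ in range(groups)
        )

    def forward(self, x):
        splits = torch.chunk(x, len(self.dw), dim=1)
        outs, prev = [], None
        for s, conv in zip(splits, self.dw):
            prev = conv(s if prev is None else s + prev)  # pass each group's output onward
            outs.append(prev)
        return torch.cat(outs, dim=1)

y = SplitDepthwise(64)(torch.randn(1, 64, 32, 32))  # shape preserved: (1, 64, 32, 32)
```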
arXiv Detail & Related papers (2022-06-21T17:59:56Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions while achieving comparable error bounds, both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Non-Parametric Adaptive Network Pruning [125.4414216272874]
We introduce non-parametric modeling to simplify the algorithm design.
Inspired by the face recognition community, we use a message passing algorithm to obtain an adaptive number of exemplars.
EPruner breaks the dependency on the training data in determining the "important" filters.
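The "message passing" step above can be approximated with off-the-shelf affinity propagation, which chooses the number of exemplars adaptively. The sketch below uses scikit-learn rather than the authors' implementation, and the helper name is invented for illustration.

```python
# Hedged sketch: pick "exemplar" filters of one conv layer via affinity
# propagation; the algorithm decides how many exemplars to keep on its own.
import numpy as np
from sklearn.cluster import AffinityPropagation

def exemplar_filter_indices(weight: np.ndarray) -> np.ndarray:
    """weight: conv weights with shape (out_channels, in_channels, k, k)."""
    flat = weight.reshape(weight.shape[0], -1)          # one vector per filter
    ap = AffinityPropagation(random_state=0).fit(flat)  # message-passing clustering
    return ap.cluster_centers_indices_                  # indices of filters to keep

keep = exemplar_filter_indices(np.random.randn(64, 32, 3, 3))
print(len(keep), "filters kept out of 64")
```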
arXiv Detail & Related papers (2021-01-20T06:18:38Z)
- Kernel Based Progressive Distillation for Adder Neural Networks [71.731127378807]
Adder Neural Networks (ANNs), which only contain additions, offer a new way of developing deep neural networks with low energy consumption.
There is an accuracy drop when all convolution filters are replaced by adder filters.
We present a novel method for further improving the performance of ANNs without increasing the trainable parameters.
arXiv Detail & Related papers (2020-09-28T03:29:19Z)
- Precision Gating: Improving Neural Network Efficiency with Dynamic Dual-Precision Activations [22.71924873981158]
Precision gating (PG) is an end-to-end trainable dynamic dual-precision quantization technique for deep neural networks.
PG achieves excellent results on CNNs, including statically compressed mobile-friendly networks such as ShuffleNet.
Compared to 8-bit uniform quantization, PG obtains a 1.2% improvement in perplexity per word with a 2.7$\times$ computational cost reduction on an LSTM.
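A rough, assumption-heavy sketch of the dual-precision gating idea described above is given below. The uniform activation quantization and the single learnable threshold are my simplifications; the real saving comes from refining only the gated outputs, which this dense illustration does not exploit.

```python
# Hedged sketch of precision gating: compute with low-precision activations first,
# then update only the "important" outputs with high-precision activations.
import torch
import torch.nn as nn

def quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    # naive uniform quantization of activations assumed to lie in [0, 1]
    levels = 2 ** bits - 1
    return torch.round(x.clamp(0, 1) * levels) / levels

class PGLinear(nn.Module):
    def __init__(self, in_f: int, out_f: int, lo_bits: int = 4, hi_bits: int = 8):
        super().__init__()
        self.fc = nn.Linear(in_f, out_f)
        self.lo_bits, self.hi_bits = lo_bits, hi_bits
        self.threshold = nn.Parameter(torch.tensor(0.5))  # learnable gate threshold

    def forward(self, x):
        y_lo = self.fc(quantize(x, self.lo_bits))     # cheap low-precision pass everywhere
        gate = (y_lo.abs() > self.threshold).float()  # which outputs deserve more precision
        y_hi = self.fc(quantize(x, self.hi_bits))     # refined pass (sparse in a real kernel)
        return gate * y_hi + (1.0 - gate) * y_lo

out = PGLinear(128, 64)(torch.rand(8, 128))  # shape (8, 64)
```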
arXiv Detail & Related papers (2020-02-17T18:54:37Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
- Pre-defined Sparsity for Low-Complexity Convolutional Neural Networks [9.409651543514615]
This work introduces convolutional layers with pre-defined sparse 2D kernels whose support sets repeat periodically within and across filters.
Due to the efficient storage of our periodic sparse kernels, the parameter savings can translate into considerable improvements in energy efficiency.
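One way to read "pre-defined periodic sparsity" is a fixed binary mask on the kernel support that repeats across output filters; the sketch below follows that reading. The specific mask patterns and period are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: a conv layer whose 3x3 kernels use a fixed, periodically
# repeating support mask, so only the unmasked weights carry information.
import torch
import torch.nn as nn

class PeriodicSparseConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, period: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        # two alternating 3x3 support patterns, repeated across output filters
        cross = torch.tensor([[0., 1., 0.], [1., 1., 1.], [0., 1., 0.]])
        diag  = torch.tensor([[1., 0., 1.], [0., 1., 0.], [1., 0., 1.]])
        patterns = torch.stack([cross, diag])
        mask = patterns[torch.arange(out_ch) % period]     # (out_ch, 3, 3)
        self.register_buffer("mask", mask[:, None, :, :])  # broadcast over input channels

    def forward(self, x):
        return nn.functional.conv2d(x, self.conv.weight * self.mask, padding=1)

y = PeriodicSparseConv(16, 32)(torch.randn(1, 16, 28, 28))  # (1, 32, 28, 28)
```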
arXiv Detail & Related papers (2020-01-29T07:10:56Z)
- AdderNet: Do We Really Need Multiplications in Deep Learning? [159.174891462064]
We present adder networks (AdderNets) to trade massive multiplications in deep neural networks for much cheaper additions to reduce computation costs.
We develop a special back-propagation approach for AdderNets by investigating the full-precision gradient.
As a result, the proposed AdderNets achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy with ResNet-50 on the ImageNet dataset.
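For intuition, an adder layer replaces the multiply-accumulate of a convolution with a (negative) L1 distance between each filter and each input patch. The sketch below builds this with `unfold`; it is not the authors' CUDA implementation and omits their special full-precision gradient.

```python
# Hedged sketch of an AdderNet-style layer: the response of each filter is the
# negative L1 distance to the input patch, so the core operation is addition
# (conceptually; this dense PyTorch version is for illustration only).
import torch
import torch.nn as nn

class Adder2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3, padding: int = 1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch * k * k) * 0.1)
        self.k, self.padding = k, padding

    def forward(self, x):
        b, _, h, w = x.shape
        patches = nn.functional.unfold(x, self.k, padding=self.padding)  # (b, in*k*k, L)
        # negative L1 distance between every filter and every patch
        diff = self.weight[None, :, :, None] - patches[:, None, :, :]    # (b, out, in*k*k, L)
        out = -diff.abs().sum(dim=2)                                     # (b, out, L)
        return out.view(b, -1, h, w)  # same spatial size since padding = k // 2

y = Adder2d(8, 16)(torch.randn(2, 8, 32, 32))  # (2, 16, 32, 32)
```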
arXiv Detail & Related papers (2019-12-31T06:56:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.