Pruning of Convolutional Neural Networks Using Ising Energy Model
- URL: http://arxiv.org/abs/2102.05437v1
- Date: Wed, 10 Feb 2021 14:00:39 GMT
- Title: Pruning of Convolutional Neural Networks Using Ising Energy Model
- Authors: Hojjat Salehinejad and Shahrokh Valaee
- Abstract summary: We propose an Ising energy model within an optimization framework for pruning convolutional kernels and hidden units.
Our experiments using ResNets, AlexNet, and SqueezeNet on CIFAR-10 and CIFAR-100 datasets show that the proposed method on average can achieve a pruning rate of more than $50\%$ of the trainable parameters.
- Score: 45.4796383952516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pruning is one of the major methods to compress deep neural networks. In this paper, we propose an Ising energy model within an optimization framework for pruning convolutional kernels and hidden units. This model is designed to reduce redundancy between weight kernels and detect inactive kernels/hidden units. Our experiments using ResNets, AlexNet, and SqueezeNet on CIFAR-10 and CIFAR-100 datasets show that the proposed method on average can achieve a pruning rate of more than $50\%$ of the trainable parameters with approximately $<10\%$ and $<5\%$ drop of Top-1 and Top-5 classification accuracy, respectively.
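The abstract does not spell out the energy function, but the idea of pairing a redundancy term between similar kernels with an activity term per kernel can be illustrated with a small sketch. The cosine-similarity couplings, L1-norm fields, and greedy spin-flip search below are assumptions made for illustration, not the paper's actual formulation or optimizer:

```python
import numpy as np

def build_ising_terms(kernels):
    """Pairwise couplings and per-kernel fields from flattened kernels.

    kernels : (N, D) array, each row a flattened convolutional kernel.
    J[i, j] : cosine-similarity magnitude (redundancy between kernels i and j).
    h[i]    : normalized L1 magnitude (a rough proxy for kernel activity).
    """
    h = np.abs(kernels).sum(axis=1)
    h = h / (h.max() + 1e-12)
    unit = kernels / (np.linalg.norm(kernels, axis=1, keepdims=True) + 1e-12)
    J = np.abs(unit @ unit.T)
    np.fill_diagonal(J, 0.0)
    return J, h

def energy(spins, J, h, lam=1.0):
    """Ising-style pruning energy; spins are +1 (keep) or -1 (prune)."""
    keep = (spins + 1) / 2.0            # map spins {-1, +1} to keep indicators {0, 1}
    redundancy = lam * keep @ J @ keep  # penalize keeping similar kernels together
    activity = -h @ keep                # reward keeping high-magnitude kernels
    return redundancy + activity

def greedy_prune(kernels, n_iters=500, seed=0):
    """Greedy single-spin-flip search for a low-energy keep/prune mask."""
    rng = np.random.default_rng(seed)
    J, h = build_ising_terms(kernels)
    spins = rng.choice([-1, 1], size=kernels.shape[0])
    e = energy(spins, J, h)
    for _ in range(n_iters):
        i = rng.integers(len(spins))
        spins[i] *= -1                  # propose flipping one kernel's state
        e_new = energy(spins, J, h)
        if e_new < e:
            e = e_new                   # accept the improving flip
        else:
            spins[i] *= -1              # otherwise revert
    return spins == 1, e

# Example: 32 random 3x3x16 kernels
keep_mask, e = greedy_prune(np.random.randn(32, 3 * 3 * 16))
print(f"kept {keep_mask.sum()}/32 kernels, final energy {e:.3f}")
```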
Related papers
- Towards Generalized Entropic Sparsification for Convolutional Neural Networks [0.0]
Convolutional neural networks (CNNs) are reported to be overparametrized.
Here, we introduce a layer-by-layer data-driven pruning method based on a mathematical idea that aims at a computationally scalable entropic relaxation of the pruning problem.
The sparse subnetwork is found from the pre-trained (full) CNN using the network entropy minimization as a sparsity constraint.
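As a loose illustration only, the sketch below scores each unit in a layer by the Shannon entropy of its activation histogram on a calibration batch and keeps the highest-entropy units; the histogram entropy and the fixed keep-ratio are assumptions, not the paper's entropic relaxation:

```python
import numpy as np

def activation_entropy(acts, bins=32):
    """Shannon entropy (in nats) of one unit's activation histogram."""
    hist, _ = np.histogram(acts, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropic_layer_prune(layer_acts, keep_ratio=0.5):
    """Keep the units whose activations carry the most entropy.

    layer_acts : (num_samples, num_units) activations from a calibration set.
    Returns a boolean keep-mask over the units of this layer.
    """
    scores = np.array([activation_entropy(layer_acts[:, j])
                       for j in range(layer_acts.shape[1])])
    k = max(1, int(keep_ratio * layer_acts.shape[1]))
    keep = np.zeros(layer_acts.shape[1], dtype=bool)
    keep[np.argsort(scores)[-k:]] = True
    return keep

# Example: 1000 calibration samples, 64 hidden units
mask = entropic_layer_prune(np.random.randn(1000, 64), keep_ratio=0.5)
print(mask.sum(), "of 64 units kept")
```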
arXiv Detail & Related papers (2024-04-06T21:33:39Z)
- Rewarded meta-pruning: Meta Learning with Rewards for Channel Pruning [19.978542231976636]
This paper proposes a novel method to reduce the number of parameters and FLOPs of deep learning models for computational efficiency.
We introduce accuracy and efficiency coefficients to control the trade-off between the accuracy of the network and its computing efficiency.
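The abstract does not give the exact reward, but the role of the two coefficients can be sketched as a simple scalarized objective; the linear form and the names alpha and beta below are assumptions for illustration:

```python
def pruning_reward(accuracy, flops, baseline_flops, alpha=1.0, beta=0.5):
    """Toy reward trading off accuracy against compute.

    alpha scales the accuracy term and beta the efficiency term, so raising
    beta pushes the search toward cheaper (more heavily pruned) networks.
    """
    efficiency = 1.0 - flops / baseline_flops      # fraction of FLOPs removed
    return alpha * accuracy + beta * efficiency

# Example: two candidate pruned networks against a 1.2 GFLOP baseline
print(pruning_reward(accuracy=0.92, flops=6.0e8, baseline_flops=1.2e9))  # heavily pruned
print(pruning_reward(accuracy=0.94, flops=1.0e9, baseline_flops=1.2e9))  # lightly pruned
```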
arXiv Detail & Related papers (2023-01-26T12:32:01Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
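As a rough sketch of what a random feature approximation of a one-hidden-layer ReLU NNGP kernel looks like (the distillation loop and RFAD's exact construction are omitted; the Monte Carlo feature map below is a standard approximation, not necessarily the one used in the paper):

```python
import numpy as np

def relu_random_features(X, n_features=2048, seed=0):
    """Monte Carlo random features for a one-hidden-layer ReLU NNGP kernel.

    K(x, x') = E_{w ~ N(0, I)}[relu(w.x) relu(w.x')] is approximated by
    phi(X) @ phi(X').T with phi(x) = relu(W x) / sqrt(n_features).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_features))
    return np.maximum(X @ W, 0.0) / np.sqrt(n_features)

X = np.random.randn(256, 32)            # 256 inputs of dimension 32
phi = relu_random_features(X)
K_approx = phi @ phi.T                  # (256, 256) approximate kernel matrix
print(K_approx.shape)
```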
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs, since models must be trained before their performance can be predicted.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Pruning Neural Networks with Interpolative Decompositions [5.377278489623063]
We introduce a principled approach to neural network pruning that casts the problem as a structured low-rank matrix approximation.
We demonstrate how to prune a neural network by first building a set of primitives to prune a single fully connected or convolution layer.
We achieve an accuracy of 93.62 $\pm$ 0.36% using VGG-16 on CIFAR-10, with a 51% FLOPs reduction.
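A minimal sketch of this kind of primitive for one fully connected layer, using pivoted QR for column selection and least squares for the interpolation coefficients; the paper's exact interpolative decomposition routine and its handling of convolution layers are not reproduced here:

```python
import numpy as np
from scipy.linalg import qr, lstsq

def id_prune_fc(W1, W2, k):
    """Prune a hidden layer (W1: in->hidden, W2: hidden->out) to k units.

    Picks k "skeleton" hidden units via pivoted QR, expresses the dropped
    units as linear combinations of the kept ones, and folds those
    coefficients into the next layer so the composite map is approximately
    preserved: x @ W1 @ W2 ~= x @ W1[:, keep] @ (T @ W2).
    """
    _, _, piv = qr(W1, mode='economic', pivoting=True)  # rank columns by novelty
    keep = np.sort(piv[:k])
    T, *_ = lstsq(W1[:, keep], W1)                      # W1 ~= W1[:, keep] @ T
    return W1[:, keep], T @ W2, keep

# Example: prune a 128-unit hidden layer down to 64 units
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((32, 128)), rng.standard_normal((128, 10))
W1p, W2p, kept = id_prune_fc(W1, W2, k=64)
x = rng.standard_normal((5, 32))
print(np.max(np.abs(x @ W1 @ W2 - x @ W1p @ W2p)))     # approximation error
```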
arXiv Detail & Related papers (2021-07-30T20:13:49Z)
- Toward Compact Deep Neural Networks via Energy-Aware Pruning [2.578242050187029]
We propose a novel energy-aware pruning method that quantifies the importance of each filter in the network using the nuclear norm (NN).
We achieve competitive results with ResNet-56/110 on CIFAR-10: 40.4%/49.8% FLOPs reduction and 45.9%/52.9% parameter reduction at 94.13%/94.61% Top-1 accuracy.
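A minimal sketch of a nuclear-norm filter score, assuming the norm is taken over each filter's output feature maps on a calibration batch; the paper's precise construction may differ:

```python
import numpy as np

def nuclear_norm_filter_scores(feature_maps):
    """Score each filter by the nuclear norm of its output feature maps.

    feature_maps : (batch, channels, H, W) activations from a calibration set.
    Returns one score per channel; low scores suggest low-"energy" filters
    that are candidates for pruning.
    """
    b, c, h, w = feature_maps.shape
    scores = np.empty(c)
    for j in range(c):
        mat = feature_maps[:, j].reshape(b, h * w)               # (batch, H*W)
        scores[j] = np.linalg.svd(mat, compute_uv=False).sum()   # sum of singular values
    return scores

scores = nuclear_norm_filter_scores(np.random.randn(16, 64, 8, 8))
prune = np.argsort(scores)[: int(0.4 * len(scores))]             # drop the lowest 40%
print("pruning filters:", prune[:10], "...")
```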
arXiv Detail & Related papers (2021-03-19T15:33:16Z)
- A Framework For Pruning Deep Neural Networks Using Energy-Based Models [45.4796383952516]
A typical deep neural network (DNN) has a large number of trainable parameters.
We propose a framework for pruning DNNs based on a population-based global optimization method.
Experiments on ResNets, AlexNet, and SqueezeNet show a pruning rate of more than $50\%$ of the trainable parameters.
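As a generic illustration of a population-based search over binary pruning masks (a cross-entropy-method style update, not the paper's energy-based candidate generation):

```python
import numpy as np

def population_prune(fitness_fn, n_units, pop_size=32, elite=8, n_gens=20, seed=0):
    """Population search over binary keep/prune masks.

    fitness_fn : callable(mask) -> float, higher is better
                 (e.g. validation accuracy minus a sparsity penalty).
    Only a generic sketch; the paper's candidate generation is not reproduced.
    """
    rng = np.random.default_rng(seed)
    p_keep = np.full(n_units, 0.5)                    # per-unit keep probability
    for _ in range(n_gens):
        masks = rng.random((pop_size, n_units)) < p_keep
        fits = np.array([fitness_fn(m) for m in masks])
        best = masks[np.argsort(fits)[-elite:]]       # elite candidates
        p_keep = 0.7 * p_keep + 0.3 * best.mean(axis=0)
    return p_keep > 0.5

# Toy fitness: pretend only the first 20 of 64 units matter, reward sparsity a bit
toy_fitness = lambda m: m[:20].sum() - 0.1 * m.sum()
mask = population_prune(toy_fitness, n_units=64)
print("kept units:", mask.sum())
```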
arXiv Detail & Related papers (2021-02-25T21:44:19Z)
- Highly Efficient Salient Object Detection with 100K Parameters [137.74898755102387]
We propose a flexible convolutional module, namely generalized OctConv (gOctConv), to efficiently utilize both in-stage and cross-stages multi-scale features.
We build an extremely lightweight model, namely CSNet, which achieves comparable performance with about 0.2% of the parameters (100k) of large models on popular salient object detection benchmarks.
arXiv Detail & Related papers (2020-03-12T07:00:46Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
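A single-layer sketch of OT-based neuron alignment, using an entropy-regularized Sinkhorn plan over weight-vector distances; the paper's layer-wise propagation of the alignment and its exact ground costs are not shown:

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Entropy-regularized OT plan between two uniform distributions."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.ones(n) / n, np.ones(m) / m
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def fuse_layers(Wa, Wb):
    """Align model B's neurons to model A's with an OT plan, then average.

    Wa, Wb : (n_neurons, in_dim) weight matrices of the same layer in two models.
    Only a single-layer sketch; the full method fuses networks layer by layer.
    """
    cost = ((Wa[:, None, :] - Wb[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    cost = cost / cost.max()                                  # scale-free regularization
    P = sinkhorn(cost)
    Wb_aligned = (P / P.sum(axis=1, keepdims=True)) @ Wb      # barycentric projection
    return 0.5 * (Wa + Wb_aligned)

rng = np.random.default_rng(0)
Wa, Wb = rng.standard_normal((16, 8)), rng.standard_normal((16, 8))
print(fuse_layers(Wa, Wb).shape)
```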
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.