Shapley Value as Principled Metric for Structured Network Pruning
- URL: http://arxiv.org/abs/2006.01795v1
- Date: Tue, 2 Jun 2020 17:26:49 GMT
- Title: Shapley Value as Principled Metric for Structured Network Pruning
- Authors: Marco Ancona, Cengiz Öztireli, Markus Gross
- Abstract summary: Structured pruning is a technique to reduce the storage size and inference cost of neural networks.
We show that reducing the harm caused by pruning becomes crucial to retain the performance of the network.
We propose Shapley values as a principled ranking metric for this task.
- Score: 10.96182578337852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structured pruning is a well-known technique to reduce the storage size and
inference cost of neural networks. The usual pruning pipeline consists of
ranking the network internal filters and activations with respect to their
contributions to the network performance, removing the units with the lowest
contribution, and fine-tuning the network to reduce the harm induced by
pruning. Recent results showed that random pruning performs on par with other
metrics, given enough fine-tuning resources. In this work, we show that this is
not true in a low-data regime, where fine-tuning is either not possible or not
effective. In this case, reducing the harm caused by pruning becomes crucial to
retain the performance of the network. First, we analyze the problem of
estimating the contribution of hidden units with tools suggested by cooperative
game theory and propose Shapley values as a principled ranking metric for this
task. We compare with several alternatives proposed in the literature and
discuss how Shapley values are theoretically preferable. Finally, we compare
all ranking metrics on the challenging scenario of low-data pruning, where we
demonstrate how Shapley values outperform other heuristics.
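As a rough illustration of the ranking step described in the abstract, the sketch below estimates per-filter Shapley values by Monte Carlo permutation sampling. The `evaluate` callable, the toy metric, and all names are hypothetical stand-ins; the paper's exact estimator and layer-wise setup may differ.

```python
# Minimal sketch (assumptions noted above): approximate each filter's Shapley
# value as its average marginal contribution to a performance metric over
# random filter orderings, then prune the lowest-ranked filters.
import random

def shapley_filter_scores(evaluate, num_filters, num_permutations=200, seed=0):
    """Monte Carlo estimate of per-filter Shapley values."""
    rng = random.Random(seed)
    scores = [0.0] * num_filters
    for _ in range(num_permutations):
        order = rng.sample(range(num_filters), num_filters)  # random permutation
        active = set()
        prev = evaluate(active)            # metric with no filters enabled
        for f in order:
            active.add(f)
            cur = evaluate(active)         # metric after enabling filter f
            scores[f] += cur - prev        # marginal contribution of f
            prev = cur
    return [s / num_permutations for s in scores]

# Toy usage: a synthetic metric in which only filters 0-2 contribute.
toy_metric = lambda active: sum(1.0 for f in active if f < 3)
scores = shapley_filter_scores(toy_metric, num_filters=8)
to_prune = sorted(range(8), key=lambda f: scores[f])[:4]   # lowest-ranked half
print(to_prune)
```

In the low-data regime studied in the paper, the pruned network receives little or no fine-tuning, which is why the quality of this ranking matters.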
Related papers
- Shapley Pruning for Neural Network Compression [63.60286036508473]
This work presents Shapley value approximations and performs a comparative analysis in terms of cost-benefit utility for neural network compression.
The proposed normative ranking and its approximations yield practical results, achieving state-of-the-art network compression.
arXiv Detail & Related papers (2024-07-19T11:42:54Z)
- How Sparse Can We Prune A Deep Network: A Fundamental Limit Viewpoint [3.7575861326462845]
Network pruning is an effective measure to alleviate the storage and computational burden of deep neural networks.
We take a first-principles approach, i.e., we impose the sparsity constraint on the original loss function.
We identify two key factors that determine the pruning ratio limit.
arXiv Detail & Related papers (2023-06-09T12:39:41Z)
- Network Pruning Spaces [12.692532576302426]
Network pruning techniques, including weight pruning and filter pruning, reveal that most state-of-the-art neural networks can be accelerated without a significant performance drop.
This work focuses on filter pruning which enables accelerated inference with any off-the-shelf deep learning library and hardware.
arXiv Detail & Related papers (2023-04-19T06:52:05Z)
- The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training [111.15069968583042]
Random pruning is arguably the most naive way to attain sparsity in neural networks, but it has long been deemed uncompetitive compared with either post-training pruning or sparse training.
We empirically demonstrate that sparsely training a randomly pruned network from scratch can match the performance of its dense equivalent.
Our results strongly suggest there is larger-than-expected room for sparse training at scale, and that the benefits of sparsity may extend beyond carefully designed pruning.
arXiv Detail & Related papers (2022-02-05T21:19:41Z)
- Connectivity Matters: Neural Network Pruning Through the Lens of Effective Sparsity [0.0]
Neural network pruning is a fruitful area of research with surging interest in high sparsity regimes.
We show that effective compression of a randomly pruned LeNet-300-100 can be orders of magnitude larger than its direct counterpart.
We develop a low-cost extension to most pruning algorithms to aim for effective, rather than direct, sparsity.
arXiv Detail & Related papers (2021-07-05T22:36:57Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy [42.15969584135412]
Neural network pruning is a popular technique used to reduce the inference costs of modern networks.
We evaluate whether the use of test accuracy alone in the terminating condition is sufficient to ensure that the resulting model performs well.
We find that pruned networks effectively approximate the unpruned model; however, the prune ratio at which pruned networks achieve commensurate performance varies significantly across tasks.
arXiv Detail & Related papers (2021-03-04T13:22:16Z)
- Neural Pruning via Growing Regularization [82.9322109208353]
We extend regularization to tackle two central problems of pruning: pruning schedule and weight importance scoring.
Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains.
The proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning.
arXiv Detail & Related papers (2020-12-16T20:16:28Z)
- Progressive Skeletonization: Trimming more fat from a network at initialization [76.11947969140608]
We propose an objective to find a skeletonized network with maximum connection sensitivity.
We then propose two approximate procedures to maximize our objective.
Our approach provides remarkably improved performance on higher pruning levels.
arXiv Detail & Related papers (2020-06-16T11:32:47Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors (see the norm-based ranking sketch after this list).
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
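For contrast with the Shapley-based ranking sketched above, the snippet below illustrates the norm-based heuristics mentioned in the Dependency Aware Filter Pruning summary (filter weight norms and batch-norm scaling factors). The layer sizes and function names are illustrative only, not taken from any of the cited papers.

```python
# Baseline ranking heuristics (sketch, assumptions noted above):
# score filters by L1 weight norm or by the batch-norm scale (gamma) magnitude.
import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Score each output filter by the L1 norm of its weights."""
    # conv.weight has shape (out_channels, in_channels, kH, kW)
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def bn_scale_scores(bn: nn.BatchNorm2d) -> torch.Tensor:
    """Score each channel by the magnitude of its batch-norm scale factor."""
    return bn.weight.detach().abs()

# Example: rank the 16 filters of a toy layer and keep the top half.
conv = nn.Conv2d(3, 16, kernel_size=3)
keep = torch.topk(l1_filter_scores(conv), k=8).indices  # filters to retain
print(sorted(keep.tolist()))
```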