Exploring the Performance of Pruning Methods in Neural Networks: An
Empirical Study of the Lottery Ticket Hypothesis
- URL: http://arxiv.org/abs/2303.15479v1
- Date: Sun, 26 Mar 2023 21:46:34 GMT
- Title: Exploring the Performance of Pruning Methods in Neural Networks: An
Empirical Study of the Lottery Ticket Hypothesis
- Authors: Eirik Fladmark, Muhammad Hamza Sajjad, Laura Brinkholm Justesen
- Abstract summary: We compare L1 unstructured pruning, Fisher pruning, and random pruning on different network architectures and pruning scenarios.
We propose and evaluate a new method for efficient computation of Fisher pruning, known as batched Fisher pruning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we explore the performance of different pruning methods in the
context of the lottery ticket hypothesis. We compare the performance of L1
unstructured pruning, Fisher pruning, and random pruning on different network
architectures and pruning scenarios. The experiments include an evaluation of
one-shot and iterative pruning, an examination of weight movement in the
network during pruning, a comparison of the pruning methods on networks of
varying widths, and an analysis of the performance of the methods when the
network becomes very sparse. Additionally, we propose and evaluate a new method
for efficient computation of Fisher pruning, known as batched Fisher pruning.
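As a rough illustration of the pruning scenarios compared here (a minimal sketch, not the authors' code), the snippet below applies one-shot L1 unstructured pruning and random pruning with PyTorch's torch.nn.utils.prune. Fisher pruning and the proposed batched Fisher pruning are not standard library routines and are omitted; the architecture and sparsity level are placeholders.

```python
# Minimal sketch: one-shot L1 unstructured vs. random pruning with PyTorch's
# built-in pruning utilities. Fisher pruning is not shown here.
import torch.nn as nn
import torch.nn.utils.prune as prune

def build_model():
    # Stand-in architecture; the paper evaluates several widths and networks.
    return nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

def one_shot_prune(model, sparsity=0.8, method="l1"):
    """Prune `sparsity` fraction of the weights in every Linear layer at once."""
    for module in model.modules():
        if isinstance(module, nn.Linear):
            if method == "l1":
                prune.l1_unstructured(module, name="weight", amount=sparsity)
            else:  # random pruning baseline
                prune.random_unstructured(module, name="weight", amount=sparsity)
            # The binary mask is stored as `weight_mask`; `weight` is now the
            # product of `weight_orig` and that mask.
    return model

model = one_shot_prune(build_model(), sparsity=0.8, method="l1")
total = sum(m.weight_mask.numel() for m in model.modules() if hasattr(m, "weight_mask"))
kept = sum(int(m.weight_mask.sum()) for m in model.modules() if hasattr(m, "weight_mask"))
print(f"kept {kept}/{total} weights ({kept / total:.1%})")
```

Because both calls share the same interface, swapping the selection criterion isolates the effect of the pruning method, which is the comparison the paper is concerned with.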
Related papers
- Sampling and active learning methods for network reliability estimation using K-terminal spanning tree [16.985964958558586]
Network reliability analysis remains a challenge due to the increasing size and complexity of networks.
This paper presents a novel sampling method and an active learning method for efficient and accurate network reliability estimation.
arXiv Detail & Related papers (2024-07-09T08:51:53Z)
Network Pruning Spaces [12.692532576302426]
Network pruning techniques, including weight pruning and filter pruning, reveal that most state-of-the-art neural networks can be accelerated without a significant performance drop.
This work focuses on filter pruning which enables accelerated inference with any off-the-shelf deep learning library and hardware.
arXiv Detail & Related papers (2023-04-19T06:52:05Z)
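As a rough illustration of the filter pruning described in that entry (not the paper's method), the sketch below uses PyTorch's structured pruning to zero out whole convolutional filters by their L1 norm; realizing an actual speedup would additionally require rebuilding the layer without the zeroed filters. The layer sizes are placeholders.

```python
# Illustrative only: structured filter pruning via the L1 norm of each output
# filter, using torch.nn.utils.prune. This zeroes whole filters; off-the-shelf
# acceleration requires physically removing them afterwards.
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
# Remove 50% of the 128 output filters, ranked by their L1 norm (n=1, dim=0).
prune.ln_structured(conv, name="weight", amount=0.5, n=1, dim=0)

# Count filters whose mask is entirely zero.
mask = conv.weight_mask  # shape: (out_channels, in_channels, kH, kW)
pruned_filters = int((mask.flatten(1).sum(dim=1) == 0).sum())
print(f"{pruned_filters}/{conv.out_channels} filters pruned")
```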
Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
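SSC itself is not a standard library layer; as a point of reference only, this sketch shows the depthwise, groupwise, and pointwise convolutions that SSC is said to generalize, expressed with PyTorch's `groups` argument. The channel counts and input are illustrative.

```python
# Reference sketch (not SSC itself): the standard convolutions that SSC is
# described as generalizing, expressed via nn.Conv2d's `groups` argument.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)

depthwise = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64)  # one filter per input channel
groupwise = nn.Conv2d(64, 128, kernel_size=3, padding=1, groups=4)  # channels split into 4 groups
pointwise = nn.Conv2d(64, 128, kernel_size=1)                       # 1x1 cross-channel mixing

for name, layer in [("depthwise", depthwise), ("groupwise", groupwise), ("pointwise", pointwise)]:
    params = sum(p.numel() for p in layer.parameters())
    print(f"{name}: output {tuple(layer(x).shape)}, params {params}")
```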
Revisiting Random Channel Pruning for Neural Network Compression [159.99002793644163]
Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of neural networks.
In this paper, we try to determine the channel configuration of the pruned models by random search.
We show that this simple strategy works quite well compared with other channel pruning methods.
arXiv Detail & Related papers (2022-05-11T17:59:04Z)
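The sketch below illustrates only the general idea of random search over channel configurations under a budget; the helper names, the budget, and the scoring are hypothetical placeholders, and the paper's actual protocol of building and training each pruned model is not reproduced.

```python
# Sketch of the general idea only (hypothetical helpers, not the paper's code):
# sample random per-layer channel configurations under a budget and keep the
# one that scores best after evaluation.
import random

BASE_CHANNELS = [64, 128, 256, 512]   # channels of the unpruned network (illustrative)
BUDGET = 0.6                           # keep at most 60% of the original channels overall

def sample_config():
    # Keep a random fraction of channels in each layer.
    return [max(1, int(c * random.uniform(0.2, 1.0))) for c in BASE_CHANNELS]

def within_budget(config):
    return sum(config) <= BUDGET * sum(BASE_CHANNELS)

def evaluate(config):
    # Placeholder: in practice, build the pruned network with these channel
    # counts, fine-tune briefly, and return validation accuracy.
    return random.random()

best_config, best_score = None, float("-inf")
for _ in range(100):
    config = sample_config()
    if not within_budget(config):
        continue
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print("best channel configuration:", best_config)
```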
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration [79.78184026678659]
We study the effect of pruning throughout training from the perspective of pruning plasticity.
We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (GraNet), and its dynamic sparse training (DST) variant (GraNet-ST).
Perhaps most impressively, the latter for the first time boosts the sparse-to-sparse training performance over various dense-to-sparse methods by a large margin with ResNet-50 on ImageNet.
arXiv Detail & Related papers (2021-06-19T02:09:25Z)
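As context for gradual magnitude pruning, here is a minimal sketch of a cubic sparsity schedule (in the style of Zhu & Gupta) applied with PyTorch's pruning utilities; GraNet's zero-cost neuroregeneration step is not shown, and the model and schedule are placeholders.

```python
# Sketch of a standard gradual magnitude pruning schedule; GraNet's
# neuroregeneration step is not reproduced here.
import torch.nn as nn
import torch.nn.utils.prune as prune

def sparsity_at(step, total_steps, final_sparsity=0.9):
    """Cubic ramp from 0 to `final_sparsity` over `total_steps`."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1 - (1 - frac) ** 3)

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
total_steps, current = 10, 0.0
for step in range(1, total_steps + 1):
    target = sparsity_at(step, total_steps)
    # PyTorch prunes a fraction of the *remaining* weights on each call,
    # so convert the overall target into an incremental amount.
    increment = (target - current) / (1.0 - current)
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=increment)
    current = target
    # ... training steps would run here between pruning events ...
    print(f"step {step}: overall sparsity ~{current:.2f}")
```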
Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot [55.37967301483917]
Conventional wisdom of pruning algorithms suggests that pruning methods exploit information from training data to find good subnetworks.
In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods.
We propose a series of simple data-independent prune ratios for each layer, and randomly prune each layer accordingly to get a subnetwork.
arXiv Detail & Related papers (2020-09-22T17:36:17Z)
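A minimal sketch of that idea, layer-wise random pruning at prescribed, data-independent ratios; the model and the ratios below are placeholders rather than the schedules proposed in the paper.

```python
# Sketch of layer-wise random pruning at prescribed, data-independent ratios.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(784, 300), nn.ReLU(),
    nn.Linear(300, 100), nn.ReLU(),
    nn.Linear(100, 10),
)
# One prune ratio per Linear layer, chosen without looking at any data.
layer_ratios = [0.8, 0.7, 0.5]

linear_layers = [m for m in model.modules() if isinstance(m, nn.Linear)]
for layer, ratio in zip(linear_layers, layer_ratios):
    prune.random_unstructured(layer, name="weight", amount=ratio)

for i, layer in enumerate(linear_layers):
    kept = int(layer.weight_mask.sum())
    print(f"layer {i}: kept {kept}/{layer.weight_mask.numel()} weights")
```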
Data-dependent Pruning to find the Winning Lottery Ticket [0.0]
The Lottery Ticket Hypothesis postulates that a freshly initialized neural network contains a small subnetwork that can be trained to achieve performance similar to that of the full network.
We conclude that incorporating a data-dependent component into the pruning criterion consistently improves the performance of existing pruning algorithms.
arXiv Detail & Related papers (2020-06-25T12:48:34Z)
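For reference, a simplified sketch of the standard lottery-ticket procedure, iterative magnitude pruning with a rewind of the surviving weights to their original initialization. The data-dependent criterion proposed in the paper is not shown, `train` is a placeholder, and only Linear weights are rewound.

```python
# Simplified sketch of iterative magnitude pruning with weight rewinding
# (the standard lottery-ticket procedure); not the paper's method.
import torch.nn as nn
import torch.nn.utils.prune as prune

def train(model):
    # Placeholder for a full training run on the task of interest.
    pass

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
# Keep a copy of the initial weights (theta_0) for rewinding.
init_weights = {name: m.weight.detach().clone()
                for name, m in model.named_modules() if isinstance(m, nn.Linear)}

rounds, prune_per_round = 3, 0.2  # prune 20% of the remaining weights per round
for r in range(rounds):
    train(model)
    for name, m in model.named_modules():
        if isinstance(m, nn.Linear):
            prune.l1_unstructured(m, name="weight", amount=prune_per_round)
            # Rewind surviving weights to their initial values; the mask in
            # `weight_mask` keeps pruned entries at zero.
            m.weight_orig.data.copy_(init_weights[name])
```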
Shapley Value as Principled Metric for Structured Network Pruning [10.96182578337852]
Structured pruning is a technique to reduce the storage size and inference cost of neural networks.
We show that reducing the harm caused by pruning becomes crucial to retain the performance of the network.
We propose Shapley values as a principled ranking metric for this task.
arXiv Detail & Related papers (2020-06-02T17:26:49Z)
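A generic sketch of one way to estimate per-channel Shapley values by permutation sampling; `evaluate` is a hypothetical payoff function (e.g. accuracy with only the given channels active), and the paper's exact formulation and approximations are not reproduced.

```python
# Permutation-sampling (Monte Carlo) estimate of per-channel Shapley values.
# `evaluate(active)` is a hypothetical helper returning model quality when
# only the channels in `active` are kept.
import random

def shapley_channel_scores(num_channels, evaluate, num_permutations=20):
    scores = [0.0] * num_channels
    for _ in range(num_permutations):
        order = list(range(num_channels))
        random.shuffle(order)
        active, prev = set(), evaluate(set())
        for ch in order:
            active.add(ch)
            value = evaluate(active)
            scores[ch] += value - prev   # marginal contribution of `ch`
            prev = value
    return [s / num_permutations for s in scores]

# Toy payoff: channels 0-2 matter, the rest do not.
toy_eval = lambda active: sum(1.0 for ch in active if ch < 3)
scores = shapley_channel_scores(8, toy_eval)
prune_order = sorted(range(8), key=lambda ch: scores[ch])  # prune low scores first
print("Shapley scores:", [round(s, 2) for s in scores])
print("prune first:", prune_order[:4])
```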
Lookahead: A Far-Sighted Alternative of Magnitude-based Pruning [83.99191569112682]
Magnitude-based pruning is one of the simplest methods for pruning neural networks.
We develop a simple pruning method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization.
Our experimental results demonstrate that the proposed method consistently outperforms magnitude-based pruning on various networks.
arXiv Detail & Related papers (2020-02-12T05:38:42Z)
On Iterative Neural Network Pruning, Reinitialization, and the Similarity of Masks [0.913755431537592]
We analyze differences in the connectivity structure and learning dynamics of pruned models found through a set of common iterative pruning techniques.
We show empirical evidence that weight stability can be automatically achieved through apposite pruning techniques.
arXiv Detail & Related papers (2020-01-14T21:11:19Z)