FlipOut: Uncovering Redundant Weights via Sign Flipping
- URL: http://arxiv.org/abs/2009.02594v1
- Date: Sat, 5 Sep 2020 20:27:32 GMT
- Title: FlipOut: Uncovering Redundant Weights via Sign Flipping
- Authors: Andrei Apostol, Maarten Stol, Patrick Forré
- Abstract summary: We propose a novel pruning method which uses the oscillations around $0$ that a weight has undergone during training in order to determine its saliency.
Our method can perform pruning before the network has converged, requires little tuning effort, and can directly target the level of sparsity desired by the user.
Our experiments, performed on a variety of object classification architectures, show that it is competitive with existing methods and achieves state-of-the-art performance for levels of sparsity of $99.6\%$ and above.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern neural networks, although achieving state-of-the-art results on many
tasks, tend to have a large number of parameters, which increases training time
and resource usage. This problem can be alleviated by pruning. Existing
methods, however, often require extensive parameter tuning or multiple cycles
of pruning and retraining to convergence in order to obtain a favorable
accuracy-sparsity trade-off. To address these issues, we propose a novel
pruning method which uses the oscillations around $0$ (i.e. sign flips) that a
weight has undergone during training in order to determine its saliency. Our
method can perform pruning before the network has converged, requires little
tuning effort due to having good default values for its hyperparameters, and
can directly target the level of sparsity desired by the user. Our experiments,
performed on a variety of object classification architectures, show that it is
competitive with existing methods and achieves state-of-the-art performance for
levels of sparsity of $99.6\%$ and above for most of the architectures tested.
For reproducibility, we release our code publicly at
https://github.com/AndreiXYZ/flipout.
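The pruning criterion described in the abstract can be illustrated with a short sketch. The snippet below only captures the core idea as stated above: count how often each weight flips sign during training and prune the weights that oscillate around $0$ the most. It is not the authors' exact saliency measure or pruning schedule (those are in the repository linked above); the class name `SignFlipTracker` and the single global flip-count threshold are assumptions made for brevity.

```python
# Sketch only: counts per-weight sign flips and prunes the most-flipping weights.
# Not the FlipOut reference implementation (see https://github.com/AndreiXYZ/flipout).
import torch


class SignFlipTracker:
    def __init__(self, model):
        self.prev_sign = {n: p.detach().sign() for n, p in model.named_parameters()}
        self.flips = {n: torch.zeros_like(p) for n, p in model.named_parameters()}

    @torch.no_grad()
    def update(self, model):
        # Call once after every optimiser step.
        for name, param in model.named_parameters():
            sign = param.sign()
            self.flips[name] += ((sign * self.prev_sign[name]) < 0).float()
            self.prev_sign[name] = sign

    @torch.no_grad()
    def prune(self, model, sparsity):
        # Zero out the `sparsity` fraction of weights with the most sign flips,
        # i.e. those that kept oscillating around 0. Every parameter tensor is
        # treated the same here for simplicity.
        flat = torch.cat([f.flatten() for f in self.flips.values()])
        k = int(sparsity * flat.numel())
        if k == 0:
            return
        threshold = flat.topk(k).values[-1]  # k-th largest flip count
        for name, param in model.named_parameters():
            # Ties at the threshold can push the realised sparsity past the target.
            param.mul_((self.flips[name] < threshold).float())
```

In a training loop one would call `update` after each optimiser step and `prune` once pruning is triggered, then keep the resulting zeros fixed with a mask (not shown here).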
Related papers
- Cyclic Sparse Training: Is it Enough? [17.02564269659367]
We propose SCULPT-ing, i.e., repeated cyclic training of any sparse mask followed by a single pruning step to couple the parameters and the mask.
This is able to match the performance of state-of-the-art iterative pruning methods in the high sparsity regime at reduced computational cost.
arXiv Detail & Related papers (2024-06-04T20:40:27Z)
- DRIVE: Dual Gradient-Based Rapid Iterative Pruning [2.209921757303168]
Modern deep neural networks (DNNs) consist of millions of parameters, necessitating high-performance computing during training and inference.
Traditional pruning methods that are applied post-training focus on streamlining inference, but there are recent efforts to leverage sparsity early on by pruning before training.
We present Dual Gradient-Based Rapid Iterative Pruning (DRIVE), which leverages dense training for the initial epochs to counteract the randomness inherent in the initialization.
arXiv Detail & Related papers (2024-04-01T20:44:28Z)
- Parameter-efficient Tuning of Large-scale Multimodal Foundation Model [68.24510810095802]
We propose a graceful prompt framework for cross-modal transfer (Aurora) to overcome these challenges.
Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal prompt tuning.
A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach.
arXiv Detail & Related papers (2023-05-15T06:40:56Z)
- Learning a Consensus Sub-Network with Polarization Regularization and One Pass Training [3.2214522506924093]
Pruning schemes create extra overhead either by iterative training and fine-tuning for static pruning or repeated computation of a dynamic pruning graph.
We propose a new parameter pruning strategy for learning a lighter-weight sub-network that minimizes the energy cost while maintaining comparable performance to the fully parameterised network on given downstream tasks.
Our results on CIFAR-10 and CIFAR-100 suggest that our scheme can remove 50% of connections in deep networks with less than 1% reduction in classification accuracy.
arXiv Detail & Related papers (2023-02-17T09:37:17Z)
- Powerpropagation: A sparsity inducing weight reparameterisation [65.85142037667065]
We introduce Powerpropagation, a new weight parameterisation for neural networks that leads to inherently sparse models.
Models trained in this manner perform comparably to conventionally trained ones, but their weight distribution has markedly higher density at zero, allowing more parameters to be pruned safely.
Here, we combine Powerpropagation with a traditional weight-pruning technique as well as recent state-of-the-art sparse-to-sparse algorithms, showing superior performance on the ImageNet benchmark. A sketch of the reparameterisation appears after this list.
arXiv Detail & Related papers (2021-10-01T10:03:57Z)
- Sparse Training via Boosting Pruning Plasticity with Neuroregeneration [79.78184026678659]
We study the effect of pruning throughout training from the perspective of pruning plasticity.
We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (GraNet), and its dynamic sparse training (DST) variant (GraNet-ST).
Perhaps most impressively, the latter for the first time boosts the sparse-to-sparse training performance over various dense-to-sparse methods by a large margin with ResNet-50 on ImageNet. A sketch of the underlying GMP schedule appears after this list.
arXiv Detail & Related papers (2021-06-19T02:09:25Z)
- Towards Optimal Filter Pruning with Balanced Performance and Pruning Speed [17.115185960327665]
We propose a balanced filter pruning method for both performance and pruning speed.
Our method prunes each layer at an approximately optimal layer-wise pruning rate for a preset loss variation.
The proposed pruning method is widely applicable to common architectures and does not involve any additional training except the final fine-tuning.
arXiv Detail & Related papers (2020-10-14T06:17:09Z)
- Progressive Skeletonization: Trimming more fat from a network at initialization [76.11947969140608]
We propose an objective to find a skeletonized network with maximum connection sensitivity.
We then propose two approximate procedures to maximize our objective.
Our approach provides remarkably improved performance on higher pruning levels.
arXiv Detail & Related papers (2020-06-16T11:32:47Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
- Towards Practical Lottery Ticket Hypothesis for Adversarial Training [78.30684998080346]
We show that there exists a subset of lottery-ticket sub-networks that converge significantly faster during the training process.
As a practical application of our findings, we demonstrate that such sub-networks can help in cutting down the total time of adversarial training.
arXiv Detail & Related papers (2020-03-06T03:11:52Z)
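As referenced in the Powerpropagation entry above, the reparameterisation can be sketched as follows, assuming the effective weight is computed as w = phi * |phi|^(alpha - 1) with alpha > 1. The layer name `PowerpropLinear`, the default alpha, and the initialisation are illustrative choices, not the authors' implementation.

```python
# Sketch of a Powerpropagation-style linear layer (assumed form w = phi * |phi|**(alpha - 1)).
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class PowerpropLinear(nn.Module):
    def __init__(self, in_features, out_features, alpha=2.0):
        super().__init__()
        self.alpha = alpha
        self.phi = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.phi, a=math.sqrt(5))

    def forward(self, x):
        # dw/dphi = alpha * |phi|**(alpha - 1): weights already close to zero
        # receive ever smaller updates, so mass concentrates at zero and
        # magnitude pruning can later remove more parameters safely.
        weight = self.phi * self.phi.abs().pow(self.alpha - 1)
        return F.linear(x, weight, self.bias)
```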
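For the GraNet entry above, here is a minimal sketch of the gradual magnitude pruning (GMP) schedule it builds on, assuming the common cubic sparsity ramp of Zhu & Gupta (2017). The function names are placeholders, and GraNet's zero-cost neuroregeneration step (regrowing pruned connections) is not modelled.

```python
# Sketch of gradual magnitude pruning: ramp sparsity up with a cubic schedule
# and repeatedly remove the smallest-magnitude weights.
import torch


def gmp_sparsity(step, start_step, end_step, final_sparsity, initial_sparsity=0.0):
    """Cubic ramp: s_t = s_f + (s_i - s_f) * (1 - progress)**3."""
    if step <= start_step:
        return initial_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - start_step) / (end_step - start_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - progress) ** 3


@torch.no_grad()
def magnitude_prune(model, sparsity):
    # Zero out the globally smallest-magnitude weights until `sparsity` is reached.
    magnitudes = torch.cat([p.abs().flatten() for p in model.parameters()])
    k = int(sparsity * magnitudes.numel())
    if k == 0:
        return
    threshold = magnitudes.kthvalue(k).values  # k-th smallest magnitude
    for p in model.parameters():
        p.mul_((p.abs() > threshold).float())
```

Calling `magnitude_prune(model, gmp_sparsity(step, ...))` every few hundred steps yields the gradual schedule; GraNet additionally regrows connections after each pruning step.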
This list is automatically generated from the titles and abstracts of the papers on this site.