Paoding: Supervised Robustness-preserving Data-free Neural Network
Pruning
- URL: http://arxiv.org/abs/2204.00783v1
- Date: Sat, 2 Apr 2022 07:09:17 GMT
- Title: Paoding: Supervised Robustness-preserving Data-free Neural Network
Pruning
- Authors: Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Jin Song Dong
- Abstract summary: We study the neural network pruning in the emphdata-free context.
We replace the traditional aggressive one-shot strategy with a conservative one that treats the pruning as a progressive process.
Our method is implemented as a Python package named Paoding and evaluated with a series of experiments on diverse neural network models.
- Score: 3.6953655494795776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When deploying pre-trained neural network models in real-world applications,
model consumers often encounter resource-constrained platforms such as mobile
and smart devices. They typically use the pruning technique to reduce the size
and complexity of the model, generating a lighter one with less resource
consumption. Nonetheless, most existing pruning methods are proposed with a
premise that the model after being pruned has a chance to be fine-tuned or even
retrained based on the original training data. This may be unrealistic in
practice, as the data controllers are often reluctant to provide their model
consumers with the original data. In this work, we study neural network
pruning in the data-free context, aiming to yield lightweight models that are
not only accurate in prediction but also robust against undesired inputs in
open-world deployments. Because fine-tuning and retraining, which could repair
mis-pruned units, are unavailable in this setting, we replace the traditional
aggressive one-shot strategy with a conservative one that treats pruning as a
progressive process. We propose a pruning method based on stochastic
optimization that uses robustness-related metrics to guide the pruning process.
Our method is implemented as a Python package named Paoding and
evaluated with a series of experiments on diverse neural network models. The
experimental results show that it significantly outperforms existing one-shot
data-free pruning approaches in terms of robustness preservation and accuracy.
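
The abstract describes a loop that repeatedly scores units with a robustness-related metric and removes only a small batch of the least important ones per step, rather than pruning the whole budget at once. The sketch below is not the Paoding API or the paper's actual algorithm; the random-probe saliency proxy, the layer shape, and the pruning schedule are illustrative assumptions chosen only to show the progressive, data-free flavor of the approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical pretrained dense layer: y = relu(x @ W.T + b).
# (64 hidden units, 128 inputs; the weights here are random stand-ins.)
W = rng.normal(size=(64, 128))
b = rng.normal(size=64)

def layer_output(W, b, X):
    return np.maximum(0.0, X @ W.T + b)

# Data-free probes: random inputs stand in for the unavailable training data.
probes = rng.normal(size=(256, 128))
reference = layer_output(W, b, probes)

def saliency(W, b, alive):
    # Output perturbation caused by removing each still-alive unit
    # (a crude robustness-related proxy; smaller = safer to prune).
    scores = np.full(W.shape[0], np.inf)
    for i in np.flatnonzero(alive):
        W_try, b_try = W.copy(), b.copy()
        W_try[i], b_try[i] = 0.0, 0.0
        scores[i] = np.linalg.norm(layer_output(W_try, b_try, probes) - reference)
    return scores

# Conservative, progressive schedule: remove a few units per step
# instead of cutting the whole budget in one shot.
alive = np.ones(W.shape[0], dtype=bool)
target_sparsity, units_per_step = 0.5, 4
while alive.mean() > 1.0 - target_sparsity:
    victims = np.argsort(saliency(W, b, alive))[:units_per_step]
    W[victims], b[victims] = 0.0, 0.0
    alive[victims] = False

print(f"pruned {np.count_nonzero(~alive)} of {alive.size} units")
```

Pruning only a few units per iteration and re-scoring after each step mirrors the conservative, progressive strategy the abstract contrasts with aggressive one-shot pruning, where no later fine-tuning can undo a bad choice.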
Related papers
- Efficient Model Compression for Bayesian Neural Networks [4.179545514579061]
We demonstrate a novel strategy to emulate principles of Bayesian model selection in a deep learning setup.
We employ these probabilities for pruning and feature selection on a host of simulated and real-world benchmark data.
arXiv Detail & Related papers (2024-11-01T00:07:59Z) - Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z) - Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been considered a challenging property to encode in neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z) - PAODING: A High-fidelity Data-free Pruning Toolkit for Debloating Pre-trained Neural Networks [11.600305034972996]
PAODING is a toolkit to debloat pretrained neural network models through the lens of data-free pruning.
It can significantly reduce the model size and generalize on different datasets and models.
It can also preserve the model fidelity in terms of test accuracy and adversarial robustness.
arXiv Detail & Related papers (2024-04-30T07:24:41Z) - Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z) - Distributed Pruning Towards Tiny Neural Networks in Federated Learning [12.63559789381064]
FedTiny is a distributed pruning framework for federated learning.
It generates specialized tiny models for memory- and computing-constrained devices.
It achieves an accuracy improvement of 2.61% while reducing the computational cost by 95.91%.
arXiv Detail & Related papers (2022-12-05T01:58:45Z) - Interpretations Steered Network Pruning via Amortized Inferred Saliency
Maps [85.49020931411825]
Compression of Convolutional Neural Networks (CNNs) is crucial for deploying these models on edge devices with limited resources.
We propose to address the channel pruning problem from a novel perspective by leveraging the interpretations of a model to steer the pruning process.
We tackle this challenge by introducing a selector model that predicts real-time smooth saliency masks for pruned models.
arXiv Detail & Related papers (2022-09-07T01:12:11Z) - DeepBayes -- an estimator for parameter estimation in stochastic
nonlinear dynamical models [11.917949887615567]
We propose DeepBayes estimators that leverage the power of deep recurrent neural networks in learning an estimator.
The deep recurrent neural network architectures can be trained offline and ensure significant time savings during inference.
We demonstrate the applicability of our proposed method on different example models and perform detailed comparisons with state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-04T18:12:17Z) - LCS: Learning Compressible Subspaces for Adaptive Network Compression at
Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z) - Lost in Pruning: The Effects of Pruning Neural Networks beyond Test
Accuracy [42.15969584135412]
Neural network pruning is a popular technique used to reduce the inference costs of modern networks.
We evaluate whether the use of test accuracy alone in the terminating condition is sufficient to ensure that the resulting model performs well.
We find that pruned networks effectively approximate the unpruned model; however, the prune ratio at which pruned networks achieve commensurate performance varies significantly across tasks.
arXiv Detail & Related papers (2021-03-04T13:22:16Z) - Dynamic Model Pruning with Feedback [64.019079257231]
We propose a novel model compression method that generates a sparse trained model without additional overhead.
We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models.
arXiv Detail & Related papers (2020-06-12T15:07:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.