Towards Accurate Quantization and Pruning via Data-free Knowledge
Transfer
- URL: http://arxiv.org/abs/2010.07334v1
- Date: Wed, 14 Oct 2020 18:02:55 GMT
- Title: Towards Accurate Quantization and Pruning via Data-free Knowledge
Transfer
- Authors: Chen Zhu, Zheng Xu, Ali Shafahi, Manli Shu, Amin Ghiasi, Tom Goldstein
- Abstract summary: We study data-free quantization and pruning by transferring knowledge from trained large networks to compact networks.
Our data-free compact networks achieve accuracy competitive with networks trained and fine-tuned on training data.
- Score: 61.85316480370141
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When large-scale training data is available, one can obtain compact and
accurate networks to be deployed in resource-constrained environments
effectively through quantization and pruning. However, training data are often
protected due to privacy concerns and it is challenging to obtain compact
networks without data. We study data-free quantization and pruning by
transferring knowledge from trained large networks to compact networks.
Auxiliary generators are simultaneously and adversarially trained with the
targeted compact networks to generate synthetic inputs that maximize the
discrepancy between the given large network and its quantized or pruned
version. We show theoretically that the alternating optimization for the
underlying minimax problem converges under mild conditions for pruning and
quantization. Our data-free compact networks achieve accuracy competitive with
that of networks trained and fine-tuned on training data. Our quantized and pruned
networks achieve good performance while being more compact and lightweight.
Further, we demonstrate that the compact structure and corresponding
initialization from the Lottery Ticket Hypothesis can also help in data-free
training.
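The adversarial generator-student loop described in the abstract can be pictured with a short training-step sketch. This is a minimal illustration in PyTorch under stated assumptions, not the authors' released code: the `teacher`, `student` (quantized or pruned), `generator`, the optimizers, and the L1 discrepancy are placeholder choices.

```python
# Minimal sketch of one alternating step of adversarial data-free transfer.
# Assumptions: PyTorch, a pretrained `teacher`, a compact `student`, and a
# noise-to-image `generator`; hyperparameters are illustrative only.
import torch
import torch.nn.functional as F

def data_free_transfer_step(teacher, student, generator, opt_s, opt_g,
                            z_dim=100, batch=64, device="cpu"):
    teacher.eval()

    # 1) Generator step: maximize teacher/student discrepancy on synthetic inputs.
    z = torch.randn(batch, z_dim, device=device)
    fake = generator(z)
    with torch.no_grad():
        t_logits = teacher(fake)
    disc = F.l1_loss(student(fake), t_logits)   # discrepancy measure (an assumption)
    opt_g.zero_grad()
    (-disc).backward()                          # ascend on the discrepancy
    opt_g.step()

    # 2) Student step: minimize the same discrepancy (knowledge transfer).
    z = torch.randn(batch, z_dim, device=device)
    fake = generator(z).detach()
    with torch.no_grad():
        t_logits = teacher(fake)
    loss = F.l1_loss(student(fake), t_logits)
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()
```

Alternating these two steps is the minimax game the abstract refers to; the convergence result concerns exactly this kind of alternating optimization.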
Related papers
- Robustness to distribution shifts of compressed networks for edge devices [6.606005367624169]
It is important to investigate the robustness of compressed networks under two types of data distribution shift: domain shifts and adversarial perturbations.
In this study, we discover that compressed models are less robust to distribution shifts than their original networks.
We also find that compact networks obtained by knowledge distillation are much more robust to distribution shifts than pruned networks.
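A comparison of this kind reduces to measuring the accuracy drop between clean and shifted data. The sketch below is a hedged illustration assuming PyTorch models and pre-built clean and shifted data loaders; all names are placeholders.

```python
# Sketch of a robustness check under distribution shift.
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def robustness_gap(model, clean_loader, shifted_loader):
    # Drop in accuracy when moving from in-distribution to shifted data;
    # comparing this gap for pruned vs. distilled models mirrors the study above.
    return accuracy(model, clean_loader) - accuracy(model, shifted_loader)
```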
arXiv Detail & Related papers (2024-01-22T15:00:32Z)
- Optimal transfer protocol by incremental layer defrosting [66.76153955485584]
Transfer learning is a powerful tool enabling model training with limited amounts of data.
The simplest transfer learning protocol is based on "freezing" the feature-extractor layers of a network pre-trained on a data-rich source task.
We show that this protocol is often sub-optimal and the largest performance gain may be achieved when smaller portions of the pre-trained network are kept frozen.
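Freezing only part of a pre-trained backbone, rather than the whole feature extractor, is simple to express in code. The sketch below assumes a recent torchvision and its ResNet-18; the split point `n_frozen` is the tunable choice the paper argues matters.

```python
# Sketch: freeze only the earliest blocks of a pretrained backbone instead of
# the whole feature extractor (torchvision ResNet-18 assumed for illustration).
import torch.nn as nn
from torchvision.models import resnet18

def partially_frozen_resnet(n_frozen=1, num_classes=10):
    model = resnet18(weights="IMAGENET1K_V1")
    blocks = [model.conv1, model.bn1, model.layer1,
              model.layer2, model.layer3, model.layer4]
    for block in blocks[:n_frozen]:          # keep only the first blocks frozen
        for p in block.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task head
    return model
```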
arXiv Detail & Related papers (2023-03-02T17:32:11Z)
- Convolutional Network Fabric Pruning With Label Noise [0.0]
This paper presents an iterative pruning strategy for Convolutional Network Fabrics (CNF) in the presence of noisy training and testing data.
Because of their intrinsic structure and function, Convolutional Network Fabrics are ideal candidates for pruning.
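As a rough illustration of iterative pruning in general (generic global magnitude pruning, not the CNF-specific strategy of this paper), a loop like the following alternates pruning and fine-tuning; PyTorch's pruning utilities are real, while the `train_one_epoch` callback is an assumption.

```python
# Generic iterative magnitude-pruning loop (a sketch, not the CNF procedure).
import torch.nn as nn
import torch.nn.utils.prune as prune

def iterative_prune(model, rounds=5, amount=0.2, epochs_per_round=1, train_one_epoch=None):
    conv_params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Conv2d)]
    for _ in range(rounds):
        # Remove an additional fraction of the remaining weights, globally by magnitude.
        prune.global_unstructured(conv_params,
                                  pruning_method=prune.L1Unstructured,
                                  amount=amount)
        for _ in range(epochs_per_round):
            if train_one_epoch is not None:
                train_one_epoch(model)       # fine-tune with the masks applied
    for m, name in conv_params:
        prune.remove(m, name)                # make the pruning permanent
    return model
```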
arXiv Detail & Related papers (2022-02-15T09:24:08Z)
- Semi-supervised Network Embedding with Differentiable Deep Quantisation [81.49184987430333]
We develop d-SNEQ, a differentiable quantisation method for network embedding.
d-SNEQ incorporates a rank loss to equip the learned quantisation codes with rich high-order information.
It can substantially compress trained embeddings, reducing the storage footprint and speeding up retrieval.
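Differentiable quantisation is commonly implemented with a straight-through estimator; the sketch below pairs one with a margin-based rank loss as a generic illustration of the idea, not the d-SNEQ objective itself. All names are placeholders.

```python
# Straight-through quantisation of embeddings plus a simple rank loss (illustrative).
import torch
import torch.nn.functional as F

def quantise_ste(emb, levels=16):
    # Snap embeddings to a small set of levels; gradients pass straight through.
    scaled = emb.clamp(-1, 1)
    q = torch.round((scaled + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    return scaled + (q - scaled).detach()

def rank_loss(anchor, positive, negative, margin=0.5):
    # Encourage quantised codes to preserve relative (high-order) proximity.
    d_pos = (quantise_ste(anchor) - quantise_ste(positive)).pow(2).sum(dim=1)
    d_neg = (quantise_ste(anchor) - quantise_ste(negative)).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()
```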
arXiv Detail & Related papers (2021-08-20T11:53:05Z)
- Benchmarking quantum tomography completeness and fidelity with machine learning [0.0]
We train convolutional neural networks to predict whether or not a set of measurements is informationally complete to uniquely reconstruct any given quantum state with no prior information.
The networks are also trained to predict the fidelity, which serves as a reliable measure of informational completeness.
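A classifier of this kind can be sketched as a small convolutional network over some fixed-size encoding of the measurement set; the input shape and architecture below are purely illustrative assumptions, not the paper's model.

```python
# Sketch: a small CNN that classifies whether a measurement set is
# informationally complete (shapes and layers are assumptions).
import torch.nn as nn

class CompletenessClassifier(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # logit: complete vs. incomplete

    def forward(self, x):              # x: (batch, channels, n_measurements, n_outcomes)
        return self.head(self.features(x).flatten(1))
```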
arXiv Detail & Related papers (2021-03-02T07:30:32Z)
- Mixed-Privacy Forgetting in Deep Networks [114.3840147070712]
We show that the influence of a subset of the training samples can be removed from the weights of a network trained on large-scale image classification tasks.
Inspired by real-world applications of forgetting techniques, we introduce a novel notion of forgetting in mixed-privacy setting.
We show that our method allows forgetting without having to trade off the model accuracy.
arXiv Detail & Related papers (2020-12-24T19:34:56Z)
- Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning [83.59005356327103]
A common limitation of most existing pruning techniques is that they require pre-training of the network at least once before pruning.
We propose STAMP, which task-adaptively prunes a network pretrained on a large reference dataset by generating a pruning mask on it as a function of the target dataset.
We validate STAMP against recent advanced pruning methods on benchmark datasets.
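One way to picture a dataset-conditioned pruning mask (a hedged illustration of the general idea, not the STAMP algorithm) is to score channels by their activations on the target data and keep only the top fraction; all names below are hypothetical.

```python
# Derive channel-level keep/drop masks for a pretrained network from target-set statistics.
import torch
import torch.nn as nn

@torch.no_grad()
def dataset_conditioned_masks(model, target_loader, keep_ratio=0.5, device="cpu"):
    model.eval()
    scores, hooks = {}, []

    def make_hook(name):
        def hook(_, __, out):
            # Mean absolute activation per channel, accumulated over the target set.
            scores[name] = scores.get(name, 0) + out.abs().mean(dim=(0, 2, 3))
        return hook

    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(name)))
    for x, _ in target_loader:
        model(x.to(device))
    for h in hooks:
        h.remove()

    masks = {}
    for name, s in scores.items():
        k = max(1, int(keep_ratio * s.numel()))
        thresh = torch.topk(s, k).values.min()
        masks[name] = (s >= thresh).float()   # 1 = keep channel, 0 = prune
    return masks
```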
arXiv Detail & Related papers (2020-06-22T10:57:43Z)
- Fault Handling in Large Water Networks with Online Dictionary Learning [1.933681537640272]
Here we simplify the model by offering a data-driven alternative that takes the network topology into account when performing sensor placement.
Online learning is fast and allows tackling large networks as it processes small batches of signals at a time.
The algorithms show good performance when tested on both small and large-scale networks.
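The streaming aspect can be illustrated with scikit-learn's MiniBatchDictionaryLearning, which consumes small batches of signals via partial_fit; the residual-based fault flag below is a simplified stand-in for the paper's detection logic, not a reproduction of it.

```python
# Online dictionary learning over batches of network signals (illustrative).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_dictionary(signal_batches, n_atoms=32):
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=16, random_state=0)
    for batch in signal_batches:             # each batch: (n_samples, n_nodes)
        dico.partial_fit(batch)
    return dico

def reconstruction_residual(dico, signals):
    # Large residuals between signals and their sparse reconstruction can flag faults.
    codes = dico.transform(signals)
    return np.linalg.norm(signals - codes @ dico.components_, axis=1)
```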
arXiv Detail & Related papers (2020-03-18T21:46:14Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
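Local, layer-wise learning rules of this flavour can be sketched by giving each layer its own auxiliary head and optimizer, so that no global backward pass is required. The sketch below illustrates that general idea only; it is not the paper's recursive local representation alignment procedure, and all sizes are arbitrary.

```python
# Layer-local training: each layer updates from its own auxiliary loss on detached inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyTrainedMLP(nn.Module):
    def __init__(self, sizes=(784, 256, 128), num_classes=10, lr=1e-3):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(sizes, sizes[1:]))
        self.heads = nn.ModuleList(nn.Linear(b, num_classes) for b in sizes[1:])
        self.opts = [torch.optim.Adam(list(l.parameters()) + list(h.parameters()), lr=lr)
                     for l, h in zip(self.layers, self.heads)]

    def local_step(self, x, y):
        h, losses = x, []
        for layer, head, opt in zip(self.layers, self.heads, self.opts):
            h = torch.relu(layer(h.detach()))   # block gradient flow across layers
            loss = F.cross_entropy(head(h), y)  # per-layer local objective
            opt.zero_grad()
            loss.backward()
            opt.step()
            losses.append(loss.item())
        return losses
```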
arXiv Detail & Related papers (2020-02-10T16:20:02Z)