Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis
- URL: http://arxiv.org/abs/2104.04868v1
- Date: Sat, 10 Apr 2021 22:42:14 GMT
- Title: Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis
- Authors: Zi Wang
- Abstract summary: Knowledge distillation (KD) has proved to be an effective approach for deep neural network compression.
In traditional KD, the transferred knowledge is usually obtained by feeding training samples to the teacher network.
The original training dataset is not always available due to storage costs or privacy issues.
We propose a novel data-free KD approach by modeling the intermediate feature space of the teacher.
- Score: 8.87104231451079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge distillation (KD) has proved to be an effective approach to deep neural network compression, in which a compact network (the student) is learned by transferring knowledge from a pre-trained, over-parameterized network (the teacher). In traditional KD, the transferred knowledge is usually obtained by feeding training samples to the teacher network to obtain class probabilities. However, the original training dataset is not always available, due to storage costs or privacy issues. In this study, we propose a novel data-free KD approach that models the intermediate feature space of the teacher with a multivariate normal distribution and leverages the soft targeted labels generated from that distribution to synthesize pseudo samples as the transfer set. Student networks trained with these synthesized transfer sets achieve performance competitive with that of networks trained on the original training set and with other data-free KD approaches.
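The abstract describes the approach only at a high level. The sketch below is not the authors' code; it is a minimal PyTorch-style illustration of the idea under stated assumptions: fit a multivariate normal to the teacher's intermediate features, draw target features from it, optimize random inputs so the teacher reproduces those features, record the teacher's softened outputs as soft targets, and distill the student on the resulting transfer set. The `intermediate_features` hook, the temperature of 4, and all optimizer settings are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of data-free KD via a Gaussian model of the teacher's
# intermediate feature space (assumed API, not the paper's implementation).
import torch
import torch.nn.functional as F

def fit_feature_gaussian(features):
    """Fit a multivariate normal to an (N, D) matrix of teacher features
    collected beforehand (e.g., stored feature statistics)."""
    mu = features.mean(dim=0)
    centered = features - mu
    cov = centered.T @ centered / (features.shape[0] - 1)
    cov += 1e-4 * torch.eye(cov.shape[0])          # regularize for stability
    return torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)

def synthesize_transfer_set(teacher, feature_dist, num_samples, input_shape,
                            steps=200, lr=0.05, temperature=4.0):
    """Optimize random inputs so the teacher's intermediate features match
    samples drawn from the fitted Gaussian; the teacher's softened outputs on
    these inputs serve as soft targets."""
    teacher.eval()
    for p in teacher.parameters():                 # freeze teacher weights
        p.requires_grad_(False)
    target_feats = feature_dist.sample((num_samples,))
    x = torch.randn(num_samples, *input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = teacher.intermediate_features(x)   # assumed hook on the teacher
        loss = F.mse_loss(feats, target_feats)
        loss.backward()
        opt.step()
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / temperature, dim=1)
    return x.detach(), soft_targets

def distill_step(student, x, soft_targets, optimizer, temperature=4.0):
    """One KD step: match the student's softened predictions to the soft targets."""
    optimizer.zero_grad()
    log_p = F.log_softmax(student(x) / temperature, dim=1)
    loss = F.kl_div(log_p, soft_targets, reduction="batchmean") * temperature ** 2
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, the synthesized inputs and their soft targets would be cached (or periodically regenerated) and iterated over in mini-batches while training the student.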