Function-Consistent Feature Distillation
- URL: http://arxiv.org/abs/2304.11832v1
- Date: Mon, 24 Apr 2023 05:43:29 GMT
- Title: Function-Consistent Feature Distillation
- Authors: Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen
- Abstract summary: Feature distillation makes the student mimic the intermediate features of the teacher.
We propose Function-Consistent Feature Distillation (FCFD), which explicitly optimizes the functional similarity between teacher and student features.
- Score: 99.0460424124249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature distillation makes the student mimic the intermediate features of the
teacher. Nearly all existing feature-distillation methods use L2 distance or
its slight variants as the distance metric between teacher and student
features. However, while L2 distance is isotropic w.r.t. all dimensions, the
neural network's operation on different dimensions is usually anisotropic,
i.e., perturbations with the same 2-norm but along different dimensions of the
intermediate features lead to changes of largely different magnitudes in the
final output. Considering this, we argue that the similarity between
teacher and student features should not be measured merely based on their
appearance (i.e., L2 distance), but should, more importantly, be measured by
their difference in function, namely how later layers of the network will read,
decode, and process them. Therefore, we propose Function-Consistent Feature
Distillation (FCFD), which explicitly optimizes the functional similarity
between teacher and student features. The core idea of FCFD is to make teacher
and student features not only numerically similar, but more importantly produce
similar outputs when fed to the later part of the same network. With FCFD, the
student mimics the teacher more faithfully and learns more from the teacher.
Extensive experiments on image classification and object detection demonstrate
the superiority of FCFD to existing methods. Furthermore, we can combine FCFD
with many existing methods to obtain even higher accuracy. Our codes are
available at https://github.com/LiuDongyang6/FCFD.
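For concreteness, below is a minimal PyTorch-style sketch of the core idea as stated in the abstract: student features are aligned to the teacher's channel width and then decoded by the teacher's later layers, so the two feature sets are penalized for producing different outputs rather than only for differing in L2 distance. The projection module, the loss weights, and all names (`fcfd_style_loss`, `proj`, `teacher_tail`) are illustrative assumptions, not the authors' actual implementation; see the linked repository for that.

```python
# Minimal sketch (PyTorch) of the function-consistency idea stated in the
# abstract, NOT the authors' implementation: student features are projected
# to the teacher's channel width, decoded by the teacher's later layers, and
# the resulting predictions are matched against those produced from the
# teacher's own features, alongside a plain L2 "appearance" term.
import torch
import torch.nn.functional as F

def fcfd_style_loss(f_s, f_t, proj, teacher_tail, T=4.0, alpha=1.0, beta=1.0):
    """f_s, f_t: intermediate student/teacher features at a chosen stage.
    proj: assumed 1x1 conv aligning student channels to teacher channels.
    teacher_tail: the frozen later layers of the teacher, mapping features
    to logits. All names and weights here are illustrative."""
    f_s_aligned = proj(f_s)

    # Appearance term: plain L2 distance between aligned features.
    loss_appearance = F.mse_loss(f_s_aligned, f_t)

    # Function term: both features are decoded by the same later layers of
    # the teacher; their outputs should agree.
    with torch.no_grad():
        logits_from_t = teacher_tail(f_t)
    logits_from_s = teacher_tail(f_s_aligned)
    loss_function = F.kl_div(
        F.log_softmax(logits_from_s / T, dim=1),
        F.softmax(logits_from_t / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    return alpha * loss_appearance + beta * loss_function
```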
Related papers
- Improving Knowledge Distillation via Regularizing Feature Norm and
Direction [16.98806338782858]
Knowledge distillation (KD) exploits a large well-trained model (i.e., teacher) to train a small student model on the same dataset for the same task.
Treating teacher features as knowledge, prevailing methods of knowledge distillation train the student by aligning its features with the teacher's, e.g., by minimizing the KL-divergence between their logits or the L2 distance between their intermediate features.
While it is natural to believe that better alignment of student features to the teacher's better distills teacher knowledge, simply forcing this alignment does not directly contribute to the student's performance.
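As a reference point for the alignment objectives this summary mentions, here is a hedged sketch of the two standard terms (KL divergence between softened logits, and L2 distance between intermediate features); it is not the paper's proposed norm/direction regularizer, and all names are illustrative.

```python
# Hedged sketch of the two standard alignment objectives mentioned above
# (illustrative names; this is not the paper's norm/direction regularizer).
import torch.nn.functional as F

def logit_kd_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between temperature-softened class distributions.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def feature_l2_loss(student_feat, teacher_feat):
    # L2 distance between intermediate features; assumes matching shapes,
    # e.g. after a learned projection.
    return F.mse_loss(student_feat, teacher_feat)
```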
arXiv Detail & Related papers (2023-05-26T15:05:19Z) - NORM: Knowledge Distillation via N-to-One Representation Matching [18.973254404242507]
We present a new two-stage knowledge distillation method, which relies on a simple Feature Transform (FT) module consisting of two linear layers.
In view of preserving the intact information learnt by the teacher network, our FT module is merely inserted after the last convolutional layer of the student network.
By sequentially splitting the expanded student representation into N non-overlapping feature segments having the same number of feature channels as the teacher's, they can be readily forced to approximate the intact teacher representation simultaneously.
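A hedged sketch of the N-to-one matching described above: a two-layer transform expands the student representation to N times the teacher's width, the result is split into N segments, and every segment is pushed toward the teacher representation. The pooled-vector inputs and `nn.Linear` layers are assumptions made for brevity; the paper's FT module sits after the student's last convolutional layer.

```python
# Hedged sketch of N-to-one representation matching as summarized above.
# The pooled-vector inputs and nn.Linear layers are assumptions made for
# brevity; the paper's FT module sits after the student's last conv layer.
import torch.nn as nn
import torch.nn.functional as F

class FeatureTransform(nn.Module):
    def __init__(self, student_dim, teacher_dim, n_segments):
        super().__init__()
        self.n = n_segments
        self.teacher_dim = teacher_dim
        # Two linear layers, as the summary states, expanding the student
        # representation to N times the teacher's width.
        self.ft = nn.Sequential(
            nn.Linear(student_dim, teacher_dim * n_segments),
            nn.Linear(teacher_dim * n_segments, teacher_dim * n_segments),
        )

    def forward(self, f_s):                       # f_s: (B, student_dim)
        expanded = self.ft(f_s)                   # (B, N * teacher_dim)
        return expanded.view(-1, self.n, self.teacher_dim)

def norm_style_loss(segments, f_t):
    # Every one of the N segments approximates the same teacher vector f_t.
    return F.mse_loss(segments, f_t.unsqueeze(1).expand_as(segments))
```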
arXiv Detail & Related papers (2023-05-23T08:15:45Z) - Switchable Online Knowledge Distillation [68.2673580932132]
Online Knowledge Distillation (OKD) improves the involved models by reciprocally exploiting the difference between teacher and student.
We propose Switchable Online Knowledge Distillation (SwitOKD) to adaptively calibrate this teacher-student gap during training.
arXiv Detail & Related papers (2022-09-12T03:03:40Z) - PKD: General Distillation Framework for Object Detectors via Pearson
Correlation Coefficient [18.782520279344553]
This paper empirically finds that better FPN features from a heterogeneous teacher detector can help the student.
We propose to imitate features with Pearson Correlation Coefficient to focus on the relational information from the teacher.
Our method consistently outperforms the existing detection KD methods and works for both homogeneous and heterogeneous student-teacher pairs.
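A hedged sketch of feature imitation via the Pearson correlation coefficient as described above: standardizing each channel before an MSE makes the loss depend only on the correlation (relational) structure rather than on raw feature magnitudes. Shapes and names are illustrative and assume the student and teacher FPN features have already been aligned in channel count.

```python
# Hedged sketch of feature imitation via the Pearson correlation coefficient
# as summarized above. Standardizing each channel before an MSE makes the
# loss depend only on correlation structure, not raw feature magnitudes.
import torch.nn.functional as F

def pearson_feature_loss(f_s, f_t, eps=1e-6):
    """f_s, f_t: feature maps of shape (B, C, H, W) with matching shapes,
    e.g. FPN levels after channel alignment (an assumption of this sketch)."""
    def standardize(x):
        x = x.flatten(2)                      # (B, C, H*W)
        mean = x.mean(dim=-1, keepdim=True)
        std = x.std(dim=-1, keepdim=True)
        return (x - mean) / (std + eps)

    # MSE between standardized features equals, up to a constant, the mean
    # of (1 - Pearson correlation) over batch and channels.
    return F.mse_loss(standardize(f_s), standardize(f_t))
```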
arXiv Detail & Related papers (2022-07-05T13:37:34Z) - Exploring Inter-Channel Correlation for Diversity-preserved
Knowledge Distillation [91.56643684860062]
Inter-Channel Correlation for Knowledge Distillation (ICKD) is developed.
ICKD captures the intrinsic distribution of the feature space and sufficient diversity properties of features in the teacher network.
Ours is the first knowledge-distillation-based method to boost ResNet18 beyond 72% Top-1 accuracy on ImageNet classification.
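A hedged sketch of matching inter-channel correlation as described above: a C x C channel-similarity (Gram-style) matrix summarizes the diversity structure of the feature space, and the student's matrix is pushed toward the teacher's. Normalization choices and names are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of matching inter-channel correlation as summarized above:
# a C x C channel-similarity (Gram-style) matrix summarizes the diversity
# structure of the feature space, and the student's matrix is pushed toward
# the teacher's. Normalization choices are illustrative.
import torch
import torch.nn.functional as F

def inter_channel_correlation(feat):
    """feat: (B, C, H, W) -> (B, C, C) channel cosine-similarity matrix."""
    b, c, h, w = feat.shape
    x = feat.view(b, c, h * w)
    x = F.normalize(x, dim=-1)            # unit-norm each channel vector
    return torch.bmm(x, x.transpose(1, 2))

def ickd_style_loss(f_s, f_t):
    return F.mse_loss(inter_channel_correlation(f_s),
                      inter_channel_correlation(f_t))
```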
arXiv Detail & Related papers (2022-02-08T07:01:56Z) - MHFC: Multi-Head Feature Collaboration for Few-Shot Learning [17.699793591135904]
Few-shot learning aims to address the data-scarcity problem.
We propose Multi-Head Feature Collaboration (MHFC) algorithm, which attempts to project the multi-head features to a unified space.
We evaluate the proposed method on five benchmark datasets and achieve significant improvements of 2.1%-7.8% compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-09-16T08:09:35Z) - Distilling Knowledge via Knowledge Review [69.15050871776552]
We study connection paths across levels between teacher and student networks, and reveal their great importance.
For the first time in knowledge distillation, cross-stage connection paths are proposed.
Our final nested and compact framework requires negligible overhead and outperforms other methods on a variety of tasks.
arXiv Detail & Related papers (2021-04-19T04:36:24Z) - Differentiable Feature Aggregation Search for Knowledge Distillation [47.94874193183427]
We introduce feature aggregation to imitate multi-teacher distillation within a single-teacher distillation framework.
DFA is a two-stage Differentiable Feature Aggregation search method motivated by DARTS in neural architecture search.
Experimental results show that DFA outperforms existing methods on CIFAR-100 and CINIC-10 datasets.
arXiv Detail & Related papers (2020-08-02T15:42:29Z) - iffDetector: Inference-aware Feature Filtering for Object Detection [70.8678270164057]
We introduce a generic Inference-aware Feature Filtering (IFF) module that can easily be combined with modern detectors.
IFF performs closed-loop optimization by leveraging high-level semantics to enhance the convolutional features.
IFF can be fused with CNN-based object detectors in a plug-and-play manner with negligible computational cost overhead.
arXiv Detail & Related papers (2020-06-23T02:57:29Z)