NORM: Knowledge Distillation via N-to-One Representation Matching
- URL: http://arxiv.org/abs/2305.13803v1
- Date: Tue, 23 May 2023 08:15:45 GMT
- Title: NORM: Knowledge Distillation via N-to-One Representation Matching
- Authors: Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao
- Abstract summary: We present a new two-stage knowledge distillation method, which relies on a simple Feature Transform (FT) module consisting of two linear layers.
To preserve the intact information learnt by the teacher network, our FT module is inserted only after the last convolutional layer of the student network.
The expanded student representation is sequentially split into N non-overlapping feature segments, each having the same number of feature channels as the teacher's, so that all segments can be readily forced to approximate the intact teacher representation simultaneously.
- Score: 18.973254404242507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing feature distillation methods commonly adopt the One-to-one
Representation Matching between any pre-selected teacher-student layer pair. In
this paper, we present N-to-One Representation (NORM), a new two-stage
knowledge distillation method, which relies on a simple Feature Transform (FT)
module consisting of two linear layers. To preserve the intact information
learnt by the teacher network, during training our FT module is inserted only
after the last convolutional layer of the student network. The first linear
layer projects the student representation to a feature space with N times as
many feature channels as the teacher representation from the last
convolutional layer, and the second linear layer contracts the expanded output
back to the original feature space. The expanded student representation is
then sequentially split into N non-overlapping feature segments, each having
the same number of feature channels as the teacher's, so that all segments can
be readily forced to approximate the intact teacher representation
simultaneously, formulating a novel many-to-one representation matching
mechanism conditioned on a single teacher-student layer pair. After training,
such an FT module will be naturally
merged into the subsequent fully connected layer thanks to its linear property,
introducing no extra parameters or architectural modifications to the student
network at inference. Extensive experiments on different visual recognition
benchmarks demonstrate the leading performance of our method. For instance, the
ResNet18|MobileNet|ResNet50-1/4 model trained by NORM reaches
72.14%|74.26%|68.03% top-1 accuracy on the ImageNet dataset when using a
pre-trained ResNet34|ResNet50|ResNet50 model as the teacher, achieving an
absolute improvement of 2.01%|4.63%|3.03% against the individually trained
counterpart. Code is available at https://github.com/OSVAI/NORM
Related papers
- Visualising Feature Learning in Deep Neural Networks by Diagonalizing the Forward Feature Map [4.776836972093627]
We present a method for analysing feature learning by diagonalizing the forward feature map of deep neural networks (DNNs).
We find that DNNs converge to a minimal feature (MF) regime dominated by a number of eigenfunctions equal to the number of classes.
We recast the phenomenon of neural collapse into a kernel picture which can be extended to broader tasks such as regression.
arXiv Detail & Related papers (2024-10-05T18:53:48Z) - ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models [9.96121040675476]
This manuscript explores how properties of functions learned by neural networks of depth greater than two layers affect predictions.
Our framework considers a family of networks of varying depths that all have the same capacity but different representation costs.
arXiv Detail & Related papers (2023-05-24T22:10:12Z) - A Simple and Generic Framework for Feature Distillation via Channel-wise
Transformation [35.233203757760066]
We propose a learnable nonlinear channel-wise transformation to align the features of the student and the teacher model.
Our method achieves significant performance improvements in various computer vision tasks.
arXiv Detail & Related papers (2023-03-23T12:13:29Z) - Improved Convergence Guarantees for Shallow Neural Networks [91.3755431537592]
We prove convergence of depth 2 neural networks, trained via gradient descent, to a global minimum.
Our model has the following features: regression with quadratic loss function, fully connected feedforward architecture, ReLU activations, Gaussian data instances, adversarial labels.
These results strongly suggest that, at least in our model, the convergence phenomenon extends well beyond the NTK regime.
arXiv Detail & Related papers (2022-12-05T14:47:52Z) - Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z) - Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural
Networks [6.8080936803807734]
Existing knowledge distillation methods on graph neural networks (GNNs) are almost all offline.
We propose a novel online knowledge distillation framework to resolve this problem.
We develop a cross-layer distillation strategy by aligning one student layer with a layer at a different depth of another student model.
arXiv Detail & Related papers (2022-05-05T06:48:13Z) - Graph Consistency based Mean-Teaching for Unsupervised Domain Adaptive
Person Re-Identification [54.58165777717885]
This paper proposes a Graph Consistency based Mean-Teaching (GCMT) method with constructing the Graph Consistency Constraint (GCC) between teacher and student networks.
Experiments on three datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, show that the proposed GCMT outperforms state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2021-05-11T04:09:49Z) - Knowledge Distillation By Sparse Representation Matching [107.87219371697063]
We propose Sparse Representation Matching (SRM) to transfer intermediate knowledge from one Convolutional Network (CNN) to another by utilizing sparse representation.
We formulate SRM as a neural processing block, which can be efficiently optimized using gradient descent and integrated into any CNN in a plug-and-play manner.
Our experiments demonstrate that SRM is robust to architectural differences between the teacher and student networks, and outperforms other KD techniques across several datasets.
arXiv Detail & Related papers (2021-03-31T11:47:47Z) - Train your classifier first: Cascade Neural Networks Training from upper
layers to lower layers [54.47911829539919]
We develop a novel top-down training method which can be viewed as an algorithm for searching for high-quality classifiers.
We tested this method on automatic speech recognition (ASR) tasks and language modelling tasks.
The proposed method consistently improves recurrent neural network ASR models on Wall Street Journal, self-attention ASR models on Switchboard, and AWD-LSTM language models on WikiText-2.
arXiv Detail & Related papers (2021-02-09T08:19:49Z) - Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)