Multi-perspective Contrastive Logit Distillation
- URL: http://arxiv.org/abs/2411.10693v2
- Date: Sat, 08 Mar 2025 09:45:21 GMT
- Title: Multi-perspective Contrastive Logit Distillation
- Authors: Qi Wang, Jinjia Zhou
- Abstract summary: We introduce a novel and efficient logit distillation method, Multi-perspective Contrastive Logit Distillation (MCLD), which substantially improves the performance and efficacy of logit distillation. MCLD attains state-of-the-art performance in image classification and transfer learning tasks across multiple datasets, including CIFAR-100, ImageNet, Tiny-ImageNet, and STL-10.
- Score: 12.589031892370809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In previous studies on knowledge distillation, the significance of logit distillation has frequently been overlooked. To revitalize logit distillation, we present a novel perspective by reconsidering its computation based on the semantic properties of logits and exploring how to utilize it more efficiently. Logits often contain a substantial amount of high-level semantic information; however, the conventional approach of employing logits to compute Kullback-Leibler (KL) divergence does not account for their semantic properties. Furthermore, this direct KL divergence computation fails to fully exploit the potential of logits. To address these challenges, we introduce a novel and efficient logit distillation method, Multi-perspective Contrastive Logit Distillation (MCLD), which substantially improves the performance and efficacy of logit distillation. In comparison to existing logit distillation methods and complex feature distillation methods, MCLD attains state-of-the-art performance in image classification and transfer learning tasks across multiple datasets, including CIFAR-100, ImageNet, Tiny-ImageNet, and STL-10. Additionally, MCLD exhibits superior training efficiency and outstanding performance when distilling Vision Transformers, further emphasizing its notable advantages. This study unveils the vast potential of logits in knowledge distillation and seeks to offer valuable insights for future research.
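As a rough illustration of the two ways of using logits that the abstract contrasts, the PyTorch sketch below implements a conventional KL-divergence logit loss alongside an InfoNCE-style contrastive loss over logit vectors. This is a minimal sketch of the general idea, not the authors' released MCLD code; names such as kd_kl_loss, contrastive_logit_loss, and the temperature values are illustrative assumptions.

```python
# Minimal sketch: KL-based logit distillation vs. an InfoNCE-style
# contrastive loss over logits, assuming batched teacher/student logits.
# Illustrative only; not the official MCLD implementation.
import torch
import torch.nn.functional as F


def kd_kl_loss(student_logits, teacher_logits, T=4.0):
    """Conventional logit distillation: KL between softened distributions."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)


def contrastive_logit_loss(student_logits, teacher_logits, tau=0.1):
    """InfoNCE-style loss: each sample's student/teacher logit vectors form a
    positive pair; other samples in the batch serve as negatives."""
    s = F.normalize(student_logits, dim=1)   # (B, C)
    t = F.normalize(teacher_logits, dim=1)   # (B, C)
    sim = s @ t.t() / tau                    # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(sim, targets)     # diagonal entries are the positives


if __name__ == "__main__":
    student_logits = torch.randn(8, 100)  # e.g. CIFAR-100: batch of 8, 100 classes
    teacher_logits = torch.randn(8, 100)
    print(kd_kl_loss(student_logits, teacher_logits).item())
    print(contrastive_logit_loss(student_logits, teacher_logits).item())
```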
Related papers
- Knowledge Distillation with Refined Logits [31.205248790623703]
We introduce Refined Logit Distillation (RLD) to address the limitations of current logit distillation methods.
Our approach is motivated by the observation that even high-performing teacher models can make incorrect predictions.
Our method can effectively eliminate misleading information from the teacher while preserving crucial class correlations.
arXiv Detail & Related papers (2024-08-14T17:59:32Z) - One Step Diffusion-based Super-Resolution with Time-Aware Distillation [60.262651082672235]
Diffusion-based image super-resolution (SR) methods have shown promise in reconstructing high-resolution images with fine details from low-resolution counterparts.
Recent techniques have been devised to enhance the sampling efficiency of diffusion-based SR models via knowledge distillation.
We propose a time-aware diffusion distillation method, named TAD-SR, to accomplish effective and efficient image super-resolution.
arXiv Detail & Related papers (2024-08-14T11:47:22Z) - Don't Throw Away Data: Better Sequence Knowledge Distillation [60.60698363739434]
In this paper we seek to integrate minimum Bayes risk (MBR) decoding more tightly in knowledge distillation training.
Our experiments on English to German and English to Japanese translation show consistent improvements over strong baseline methods.
arXiv Detail & Related papers (2024-07-15T06:11:18Z) - Bit-mask Robust Contrastive Knowledge Distillation for Unsupervised Semantic Hashing [71.47723696190184]
We propose an innovative Bit-mask Robust Contrastive knowledge Distillation (BRCD) method for semantic hashing.
BRCD is specifically devised for the distillation of semantic hashing models.
arXiv Detail & Related papers (2024-03-10T03:33:59Z) - DistillCSE: Distilled Contrastive Learning for Sentence Embeddings [32.6620719893457]
This paper proposes the DistillCSE framework, which performs contrastive learning under the self-training paradigm with knowledge distillation.
The potential advantage of DistillCSE is its self-enhancing feature: by using a base model to provide additional supervision signals, a stronger model may be learned through knowledge distillation.
The paper proposes two simple yet effective solutions for knowledge distillation: a Group-P shuffling strategy as an implicit regularization, and averaging the logits from multiple teacher components.
arXiv Detail & Related papers (2023-10-20T13:45:59Z) - Distill Gold from Massive Ores: Bi-level Data Pruning towards Efficient Dataset Distillation [96.92250565207017]
We study the data efficiency and selection for the dataset distillation task.
By re-formulating the dynamics of distillation, we provide insight into the inherent redundancy in the real dataset.
We find the most contributing samples based on their causal effects on the distillation.
arXiv Detail & Related papers (2023-05-28T06:53:41Z) - A Survey on Recent Teacher-student Learning Studies [0.0]
Knowledge distillation is a method of transferring the knowledge from a complex deep neural network (DNN) to a smaller and faster DNN.
Recent variants of knowledge distillation include teaching assistant distillation, curriculum distillation, mask distillation, and decoupling distillation.
arXiv Detail & Related papers (2023-04-10T14:30:28Z) - Class-aware Information for Logit-based Knowledge Distillation [16.634819319915923]
We propose a Class-aware Logit Knowledge Distillation (CLKD) method, which extends logit distillation to both the instance level and the class level.
CLKD enables the student model to mimic higher-level semantic information from the teacher model, hence improving the distillation performance.
arXiv Detail & Related papers (2022-11-27T09:27:50Z) - A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition [6.554259611868312]
Knowledge distillation is an effective transfer of knowledge from a heavy network (teacher) to a small network (student) to boost the student's performance.
This paper introduces a novel Self-knowledge distillation approach via Siamese representation learning.
arXiv Detail & Related papers (2022-09-03T01:56:58Z) - Knowledge Distillation Meets Open-Set Semi-Supervised Learning [69.21139647218456]
We propose a novel method dedicated to distilling representational knowledge semantically from a pretrained teacher to a target student.
At the problem level, this establishes an interesting connection between knowledge distillation and open-set semi-supervised learning (SSL).
Our method significantly outperforms previous state-of-the-art knowledge distillation methods on both coarse object classification and fine face recognition tasks.
arXiv Detail & Related papers (2022-05-13T15:15:27Z) - Localization Distillation for Object Detection [134.12664548771534]
Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation instead of mimicking the classification logits.
We present a novel localization distillation (LD) method which can efficiently transfer the localization knowledge from the teacher to the student.
We show that logit mimicking can outperform feature imitation, and that the absence of localization distillation is a critical reason why logit mimicking has underperformed for years.
arXiv Detail & Related papers (2022-04-12T17:14:34Z) - Semi-Online Knowledge Distillation [2.373824287636486]
Conventional knowledge distillation (KD) is to transfer knowledge from a large and well pre-trained teacher network to a small student network.
Deep mutual learning (DML) has been proposed to help student networks learn collaboratively and simultaneously.
We propose a Semi-Online Knowledge Distillation (SOKD) method that effectively improves the performance of the student and the teacher.
arXiv Detail & Related papers (2021-11-23T09:44:58Z) - Pre-trained Summarization Distillation [121.14806854092672]
Recent work on distilling BERT for classification and regression tasks shows strong performance using direct knowledge distillation.
Alternatively, machine translation practitioners distill using pseudo-labeling, where a small model is trained on the translations of a larger model.
A third, simpler approach is to 'shrink and fine-tune' (SFT), which avoids any explicit distillation by copying parameters to a smaller student model and then fine-tuning.
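As a rough sketch of the "shrink" step in shrink-and-fine-tune, assuming a generic PyTorch layer stack rather than any particular pretrained summarization model, the student below is initialized by copying every other teacher layer and would then be fine-tuned on the task data. The function name shrink and the layer counts are illustrative.

```python
# Rough sketch of the "shrink" step in shrink-and-fine-tune (SFT):
# copy every other layer of a teacher into a half-depth student.
# Generic PyTorch stand-in, not tied to any specific pretrained model.
import copy
import torch.nn as nn


def shrink(teacher_layers: nn.ModuleList, keep_every: int = 2) -> nn.ModuleList:
    """Return a student layer stack initialized from a subset of teacher layers."""
    kept = [copy.deepcopy(layer) for i, layer in enumerate(teacher_layers)
            if i % keep_every == 0]
    return nn.ModuleList(kept)


# Example: a 12-layer Transformer encoder shrunk to 6 layers; the student
# would then be fine-tuned on the task data (the "fine-tune" step).
teacher_layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
    for _ in range(12)
)
student_layers = shrink(teacher_layers, keep_every=2)
assert len(student_layers) == 6
```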
arXiv Detail & Related papers (2020-10-24T23:15:43Z) - Online Knowledge Distillation via Multi-branch Diversity Enhancement [15.523646047674717]
We propose a new distillation method to enhance the diversity among multiple student models.
We use a Feature Fusion Module (FFM), which improves the performance of the attention mechanism in the network.
We also use a Diversification (CD) loss function to strengthen the differences between the student models.
arXiv Detail & Related papers (2020-10-02T05:52:12Z) - Contrastive Distillation on Intermediate Representations for Language Model Compression [89.31786191358802]
We propose Contrastive Distillation on Intermediate Representations (CoDIR) as a principled knowledge distillation framework.
By learning to distinguish a positive sample from a large set of negative samples, CoDIR facilitates the student's exploitation of rich information in the teacher's hidden layers.
CoDIR can be readily applied to compress large-scale language models in both pre-training and finetuning stages, and achieves superb performance on the GLUE benchmark.
arXiv Detail & Related papers (2020-09-29T17:31:43Z) - Knowledge Distillation Meets Self-Supervision [109.6400639148393]
Knowledge distillation involves extracting "dark knowledge" from a teacher network to guide the learning of a student network.
We show that the seemingly different self-supervision task can serve as a simple yet powerful solution.
By exploiting the similarity between those self-supervision signals as an auxiliary task, one can effectively transfer the hidden information from the teacher to the student.
arXiv Detail & Related papers (2020-06-12T12:18:52Z) - Why distillation helps: a statistical perspective [69.90148901064747]
Knowledge distillation is a technique for improving the performance of a simple "student" model.
While this simple approach has proven widely effective, a basic question remains unresolved: why does distillation help?
We show how distillation complements existing negative mining techniques for extreme multiclass retrieval.
arXiv Detail & Related papers (2020-05-21T01:49:51Z) - Residual Knowledge Distillation [96.18815134719975]
This work proposes Residual Knowledge Distillation (RKD), which further distills the knowledge by introducing an assistant (A).
In this way, the student (S) is trained to mimic the feature maps of the teacher (T), and A aids this process by learning the residual error between them.
Experiments show that our approach achieves appealing results on popular classification datasets, CIFAR-100 and ImageNet.
arXiv Detail & Related papers (2020-02-21T07:49:26Z)
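The residual scheme in the RKD entry above, where the student S mimics the teacher T's feature maps and the assistant A learns the residual error between them, can be written as a simple two-term loss. The sketch below is an assumption-based rendering with stand-in linear modules, not the authors' implementation.

```python
# Minimal sketch of the residual idea in Residual Knowledge Distillation:
# the student S mimics the teacher's features, while an assistant A is
# trained to predict the residual error between them. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim = 256
student_head = nn.Linear(512, feat_dim)   # stand-in for the student feature extractor
assistant = nn.Linear(512, feat_dim)      # stand-in for the assistant network A

x = torch.randn(8, 512)                   # shared input representation (assumed)
f_t = torch.randn(8, feat_dim)            # teacher feature maps (frozen, precomputed)

f_s = student_head(x)                     # student features
residual = assistant(x)                   # assistant predicts the teacher-student gap

loss_s = F.mse_loss(f_s, f_t)                          # S mimics T
loss_a = F.mse_loss(residual, (f_t - f_s).detach())    # A learns the residual error
loss = loss_s + loss_a
loss.backward()
```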