HARD: Hard Augmentations for Robust Distillation
- URL: http://arxiv.org/abs/2305.14890v2
- Date: Thu, 25 May 2023 10:57:46 GMT
- Title: HARD: Hard Augmentations for Robust Distillation
- Authors: Arne F. Nix, Max F. Burg, Fabian H. Sinz
- Abstract summary: We propose Hard Augmentations for Robust Distillation (HARD) to improve knowledge distillation.
HARD generates synthetic data points for which the teacher and the student disagree.
We find that our learned augmentations significantly improve KD performance on in-domain and out-of-domain evaluation.
- Score: 3.8397175894277225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge distillation (KD) is a simple and successful method to transfer
knowledge from a teacher to a student model solely based on functional
activity. However, current KD has a few shortcomings: it has recently been
shown that this method is unsuitable for transferring simple inductive biases like
shift equivariance, that it struggles to transfer out-of-domain generalization, and
that its optimization time is orders of magnitude longer than for default non-KD model
training. To improve these aspects of KD, we propose Hard Augmentations for
Robust Distillation (HARD), a generally applicable data augmentation framework,
that generates synthetic data points for which the teacher and the student
disagree. We show in a simple toy example that our augmentation framework
solves the problem of transferring simple equivariances with KD. We then apply
our framework in real-world tasks for a variety of augmentation models, ranging
from simple spatial transformations to unconstrained image manipulations with a
pretrained variational autoencoder. We find that our learned augmentations
significantly improve KD performance on in-domain and out-of-domain evaluation.
Moreover, our method outperforms even state-of-the-art data augmentations, and
since the augmented training inputs can be visualized, they offer qualitative
insight into the properties that are transferred from the teacher to the
student. Thus HARD represents a generally applicable, dynamically optimized
data augmentation technique tailored to improve the generalization and
convergence speed of models trained with KD.
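Read as an algorithm, the abstract describes an adversarial-style alternation between an augmentation model and the student. The PyTorch-flavored sketch below is one plausible instantiation under that reading; the `augmenter` module, loss weights, and temperature are assumptions, not the authors' implementation.

```python
# Minimal sketch of a HARD-style training step (illustrative, not the authors' code):
# an augmentation module is trained to maximize teacher-student disagreement,
# then the student is distilled on the resulting hard examples.
import torch
import torch.nn.functional as F

def disagreement(teacher_logits, student_logits, T=4.0):
    """KL divergence between softened teacher and student predictions."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

def hard_kd_step(x, y, teacher, student, augmenter, student_opt, aug_opt, alpha=0.5):
    for p in teacher.parameters():
        p.requires_grad_(False)          # teacher stays fixed throughout
    teacher.eval()

    # 1) Augmenter update: *ascend* on disagreement to produce hard examples.
    x_aug = augmenter(x)
    aug_loss = -disagreement(teacher(x_aug), student(x_aug))
    aug_opt.zero_grad()
    aug_loss.backward()
    aug_opt.step()

    # 2) Student update: standard KD + cross-entropy on the hard augmented inputs.
    x_aug = augmenter(x).detach()
    with torch.no_grad():
        t_logits = teacher(x_aug)
    s_logits = student(x_aug)
    loss = alpha * disagreement(t_logits, s_logits) + (1 - alpha) * F.cross_entropy(s_logits, y)
    student_opt.zero_grad()              # also clears stale gradients from step 1
    loss.backward()
    student_opt.step()
    return loss.item()
```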
Related papers
- Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models [62.5501109475725]
Knowledge distillation (KD) is a technique that compresses large teacher models by training smaller student models to mimic them.
This paper introduces Online Knowledge Distillation (OKD), where the teacher network integrates small online modules to concurrently train with the student model.
OKD achieves or exceeds the performance of leading methods in various model architectures and sizes, reducing training time by up to fourfold.
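For context, such methods build on the standard token-level distillation objective for autoregressive LMs, i.e., a per-position KL between the teacher's and the student's next-token distributions. A minimal sketch of that baseline objective is given below; shapes, masking, and temperature are assumptions, and OKD's online modules are not shown.

```python
import torch.nn.functional as F

def token_level_kd(teacher_logits, student_logits, mask, T=1.0):
    """Per-token KL(teacher || student), averaged over non-padding positions.

    teacher_logits, student_logits: [batch, seq_len, vocab]; mask: [batch, seq_len] in {0, 1}.
    """
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_t = F.log_softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    kl = (p_t * (log_p_t - log_p_s)).sum(dim=-1)   # [batch, seq_len]
    return (kl * mask).sum() / mask.sum() * (T * T)
```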
arXiv Detail & Related papers (2024-09-19T07:05:26Z) - Relational Representation Distillation [6.24302896438145]
We introduce Relational Representation Distillation (RRD) to explore and reinforce relationships between teacher and student models.
Inspired by self-supervised learning principles, it uses a relaxed contrastive loss that focuses on similarity rather than exact replication.
Our approach demonstrates superior performance on CIFAR-100 and ImageNet ILSVRC-2012 and sometimes even outperforms the teacher network when combined with KD.
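The "similarity rather than exact replication" idea can be pictured as matching each sample's similarity distribution over the batch between teacher and student features. The sketch below illustrates that general recipe under an assumed temperature and normalization; it is not necessarily RRD's exact loss.

```python
import torch.nn.functional as F

def relational_kd(f_teacher, f_student, T=0.1):
    """Match each sample's similarity distribution over the batch (teacher vs. student)."""
    zt = F.normalize(f_teacher, dim=1)                 # [batch, dim]
    zs = F.normalize(f_student, dim=1)
    p_t = F.softmax(zt @ zt.t() / T, dim=1)            # teacher's relational structure
    log_p_s = F.log_softmax(zs @ zs.t() / T, dim=1)    # student's relational structure
    # (Self-similarities could additionally be masked out; kept here for brevity.)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")
```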
arXiv Detail & Related papers (2024-07-16T14:56:13Z) - Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights and enabling enhanced robustness and generalization.
A self-regularization strategy is further exploited to maintain the stability of the VLMs' zero-shot generalization; the resulting method is dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the models in the few-shot image classification scenario.
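As a rough illustration of orthogonal fine-tuning in general (not necessarily OrthSR's parameterization), a pretrained linear layer can be adapted by a learned, Cayley-parameterized orthogonal rotation, which preserves the angular structure of the pretrained weights.

```python
import torch
import torch.nn as nn

class OrthogonalAdapter(nn.Module):
    """Rotate a frozen pretrained linear layer by a learned orthogonal matrix (sketch)."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.weight = linear.weight            # pretrained weight [out, in], kept frozen
        self.bias = linear.bias
        self.weight.requires_grad_(False)
        if self.bias is not None:
            self.bias.requires_grad_(False)
        self.skew = nn.Parameter(torch.zeros(linear.out_features, linear.out_features))

    def forward(self, x):
        S = self.skew - self.skew.t()                        # skew-symmetric
        I = torch.eye(S.size(0), device=S.device, dtype=S.dtype)
        R = torch.linalg.solve(I + S, I - S)                 # Cayley transform => orthogonal
        return nn.functional.linear(x, R @ self.weight, self.bias)
```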
arXiv Detail & Related papers (2024-07-11T10:35:53Z) - De-confounded Data-free Knowledge Distillation for Handling Distribution Shifts [32.1016787150064]
Data-Free Knowledge Distillation (DFKD) is a promising approach for training high-performance small models for practical deployment without relying on the original training data.
Existing methods commonly avoid relying on private data by utilizing synthetic or sampled data.
This paper proposes a novel perspective based on causal inference to disentangle the student models from the impact of the resulting distribution shifts.
arXiv Detail & Related papers (2024-03-28T16:13:22Z) - Revisiting Knowledge Distillation for Autoregressive Language Models [88.80146574509195]
We propose a simple yet effective adaptive teaching approach (ATKD) to improve knowledge distillation (KD).
The core of ATKD is to reduce rote learning and make teaching more diverse and flexible.
Experiments on 8 LM tasks show that, with the help of ATKD, various baseline KD methods can achieve consistent and significant performance gains.
arXiv Detail & Related papers (2024-02-19T07:01:10Z) - Robustness-Reinforced Knowledge Distillation with Correlation Distance
and Network Pruning [3.1423836318272773]
Knowledge distillation (KD) improves the performance of efficient and lightweight models.
Most existing KD techniques rely on Kullback-Leibler (KL) divergence.
We propose a Robustness-Reinforced Knowledge Distillation (R2KD) that leverages correlation distance and network pruning.
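The correlation-distance ingredient can be illustrated by replacing the usual KL term with one minus the Pearson correlation between softened teacher and student class probabilities, computed per sample. This is a hedged sketch of that idea only; the paper's exact loss and the pruning step are omitted.

```python
import torch.nn.functional as F

def correlation_distance_kd(teacher_logits, student_logits, T=4.0, eps=1e-8):
    """1 - Pearson correlation between softened class probabilities, per sample."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = p_t - p_t.mean(dim=1, keepdim=True)
    p_s = p_s - p_s.mean(dim=1, keepdim=True)
    corr = (p_t * p_s).sum(dim=1) / (p_t.norm(dim=1) * p_s.norm(dim=1) + eps)
    return (1.0 - corr).mean()
```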
arXiv Detail & Related papers (2023-11-23T11:34:48Z) - Comparative Knowledge Distillation [102.35425896967791]
Traditional Knowledge Distillation (KD) assumes readily available access to teacher models for frequent inference.
We propose Comparative Knowledge Distillation (CKD), which encourages student models to understand the nuanced differences in a teacher model's interpretations of samples.
CKD consistently outperforms state-of-the-art data augmentation and KD techniques.
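One way to picture a comparative objective: instead of matching the teacher's output on each sample in isolation, the student matches how the teacher's representation changes between pairs of samples. The pairwise-difference loss below is an illustrative guess at such an objective, not the paper's formulation.

```python
import torch.nn.functional as F

def comparative_kd(t_feat, s_feat):
    """Match teacher vs. student representation differences over consecutive sample pairs."""
    # t_feat, s_feat: [batch, dim] with an even batch size
    t_diff = t_feat[0::2] - t_feat[1::2]
    s_diff = s_feat[0::2] - s_feat[1::2]
    return F.mse_loss(s_diff, t_diff)
```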
arXiv Detail & Related papers (2023-11-03T21:55:33Z) - Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free
Continual Learning [14.379472108242235]
We investigate exemplar-free class incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy.
KD-based methods are successfully used in CIL, but they often struggle to regularize the model without access to exemplars of the training data from previous tasks.
Inspired by recent test-time adaptation methods, we introduce Teacher Adaptation (TA), a method that concurrently updates the teacher and the main models during incremental training.
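A minimal sketch of the teacher-adaptation idea as described above: the teacher's weights stay frozen, but the model is kept in training mode so that its normalization statistics follow the current task's data while the student is distilled against it. Temperature and loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def ta_kd_step(x, y, teacher, student, student_opt, T=2.0, alpha=1.0):
    for p in teacher.parameters():
        p.requires_grad_(False)
    teacher.train()   # weights frozen, but BatchNorm running stats adapt to new-task data

    with torch.no_grad():
        t_logits = teacher(x)      # this forward pass also updates the BN statistics
    s_logits = student(x)

    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    loss = F.cross_entropy(s_logits, y) + alpha * kd
    student_opt.zero_grad()
    loss.backward()
    student_opt.step()
    return loss.item()
```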
arXiv Detail & Related papers (2023-08-18T13:22:59Z) - Continual Learners are Incremental Model Generalizers [70.34479702177988]
This paper extensively studies the impact of Continual Learning (CL) models as pre-trainers.
We find that the transfer quality of the representation often increases gradually without noticeable degradation in fine-tuning performance.
We propose a new fine-tuning scheme, GLobal Attention Discretization (GLAD), that preserves rich task-generic representations while solving downstream tasks.
arXiv Detail & Related papers (2023-06-21T05:26:28Z) - CILDA: Contrastive Data Augmentation using Intermediate Layer Knowledge
Distillation [30.56389761245621]
Knowledge distillation (KD) is an efficient framework for compressing large-scale pre-trained language models.
Recent years have seen a surge of research aiming to improve KD by leveraging Contrastive Learning, Intermediate Layer Distillation, Data Augmentation, and Adversarial Training.
We propose a learning-based data augmentation technique tailored for knowledge distillation, called CILDA.
arXiv Detail & Related papers (2022-04-15T23:16:37Z) - MixKD: Towards Efficient Distillation of Large-scale Language Models [129.73786264834894]
We propose MixKD, a data-agnostic distillation framework, to endow the resulting model with stronger generalization ability.
We prove from a theoretical perspective that under reasonable conditions MixKD gives rise to a smaller gap between the generalization error and the empirical error.
Experiments under a limited-data setting and ablation studies further demonstrate the advantages of the proposed approach.
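The central trick can be sketched as interpolating pairs of training examples (MixKD mixes at the embedding level for language models) and distilling the student on the interpolated inputs. The target construction below, distilling against the teacher's prediction on the same mixed input, is one plausible variant and an assumption, not necessarily the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def mixkd_loss(emb, teacher, student, T=1.0, beta_alpha=0.4):
    """Distill on mixup-interpolated inputs; `emb` are embedded inputs of shape [batch, ...]."""
    lam = torch.distributions.Beta(beta_alpha, beta_alpha).sample().item()
    perm = torch.randperm(emb.size(0), device=emb.device)
    mixed = lam * emb + (1 - lam) * emb[perm]          # mixup in embedding space

    with torch.no_grad():
        t_logits = teacher(mixed)                      # teacher queried on the mixed input
    s_logits = student(mixed)
    return F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * T * T
```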
arXiv Detail & Related papers (2020-11-01T18:47:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.