Learn from Balance: Rectifying Knowledge Transfer for Long-Tailed Scenarios
- URL: http://arxiv.org/abs/2409.07694v1
- Date: Thu, 12 Sep 2024 01:58:06 GMT
- Title: Learn from Balance: Rectifying Knowledge Transfer for Long-Tailed Scenarios
- Authors: Xinlei Huang, Jialiang Tang, Xubin Zheng, Jinjia Zhou, Wenxin Yu, Ning Jiang
- Abstract summary: Knowledge Distillation (KD) transfers knowledge from a large pre-trained teacher network to a compact and efficient student network.
We propose a novel framework called Knowledge Rectification Distillation (KRDistill) to address the imbalanced knowledge inherited in the teacher network.
- Score: 8.804625474114948
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge Distillation (KD) transfers knowledge from a large pre-trained teacher network to a compact and efficient student network, making it suitable for deployment on resource-limited media terminals. However, traditional KD methods require balanced data to ensure robust training, which is often unavailable in practical applications. In such scenarios, a few head categories occupy a substantial proportion of examples. This imbalance biases the trained teacher network towards the head categories, resulting in severe performance degradation on the less represented tail categories for both the teacher and student networks. In this paper, we propose a novel framework called Knowledge Rectification Distillation (KRDistill) to address the imbalanced knowledge inherited in the teacher network through the incorporation of the balanced category priors. Furthermore, we rectify the biased predictions produced by the teacher network, particularly focusing on the tail categories. Consequently, the teacher network can provide balanced and accurate knowledge to train a reliable student network. Intensive experiments conducted on various long-tailed datasets demonstrate that our KRDistill can effectively train reliable student networks in realistic scenarios of data imbalance.
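The core rectification idea can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's implementation: it subtracts a log class-prior (in the style of logit adjustment) from the teacher's logits before the standard temperature-scaled distillation loss, so head-category bias is removed from the targets the student learns from. All function names are hypothetical.

```python
import math

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / tau for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def rectified_kd_loss(teacher_logits, student_logits, class_counts, tau=2.0):
    """Hypothetical sketch of knowledge rectification: subtract the log of each
    category's empirical frequency from the teacher's logits so that the
    resulting soft targets are no longer biased towards head categories, then
    apply the usual temperature-scaled KL distillation loss."""
    total = sum(class_counts)
    log_prior = [math.log(c / total) for c in class_counts]
    rectified = [z - lp for z, lp in zip(teacher_logits, log_prior)]
    p_teacher = softmax(rectified, tau)
    p_student = softmax(student_logits, tau)
    # KL(teacher || student), scaled by tau^2 as is conventional in KD.
    return tau**2 * sum(pt * math.log(pt / ps)
                        for pt, ps in zip(p_teacher, p_student))
```

With uniform class counts the prior is a constant shift, so the loss reduces to plain KD; with long-tailed counts, tail logits are boosted relative to head logits before distillation.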
Related papers
- Adaptive Teaching with Shared Classifier for Knowledge Distillation [6.03477652126575]
Knowledge distillation (KD) is a technique used to transfer knowledge from a teacher network to a student network.
We propose adaptive teaching with a shared classifier (ATSC)
Our approach achieves state-of-the-art results on the CIFAR-100 and ImageNet datasets in both single-teacher and multi-teacher scenarios.
arXiv Detail & Related papers (2024-06-12T08:51:08Z) - Distribution Shift Matters for Knowledge Distillation with Webly Collected Images [91.66661969598755]
We propose a novel method dubbed "Knowledge Distillation between Different Distributions" (KD$^3$).
We first dynamically select useful training instances from the webly collected data according to the combined predictions of the teacher and student networks.
We also build a new contrastive learning block called MixDistribution to generate perturbed data with a new distribution for instance alignment.
arXiv Detail & Related papers (2023-07-21T10:08:58Z) - BD-KD: Balancing the Divergences for Online Knowledge Distillation [12.27903419909491]
We propose BD-KD: Balancing of Divergences for online Knowledge Distillation.
We show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network.
We demonstrate that, by performing this balancing design at the level of the student distillation loss, we improve upon both performance accuracy and calibration of the compact student network.
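The balancing described above can be sketched as a convex mix of the forward divergence KL(teacher‖student) and the reverse divergence KL(student‖teacher). The weight `alpha` below is a fixed parameter for illustration only; in BD-KD it would be set adaptively, and the function names are assumptions rather than the paper's code.

```python
import math

def _softmax(logits):
    """Numerically stable softmax."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def _kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def balanced_divergence_loss(teacher_logits, student_logits, alpha=0.5):
    """Illustrative sketch (not the paper's exact weighting rule): blend the
    mode-covering forward KL with the mode-seeking reverse KL. Shifting alpha
    shifts the focus of training between matching the teacher broadly and
    concentrating the compact student's capacity."""
    p = _softmax(teacher_logits)
    q = _softmax(student_logits)
    return alpha * _kl(p, q) + (1 - alpha) * _kl(q, p)
```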
arXiv Detail & Related papers (2022-12-25T22:27:32Z) - Deep Reinforcement Learning for Multi-class Imbalanced Training [64.9100301614621]
We introduce an imbalanced classification framework, based on reinforcement learning, for training on extremely imbalanced datasets.
We formulate a custom reward function and episode-training procedure, specifically with the added capability of handling multi-class imbalanced training.
Using real-world clinical case studies, we demonstrate that our proposed framework outperforms current state-of-the-art imbalanced learning methods.
arXiv Detail & Related papers (2022-05-24T13:39:59Z) - Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis [8.87104231451079]
Knowledge distillation (KD) has proved to be an effective approach for deep neural network compression.
In traditional KD, the transferred knowledge is usually obtained by feeding training samples to the teacher network.
The original training dataset is not always available due to storage costs or privacy issues.
We propose a novel data-free KD approach by modeling the intermediate feature space of the teacher.
arXiv Detail & Related papers (2021-04-10T22:42:14Z) - Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher [40.74624021934218]
Knowledge distillation is a strategy for training a student network guided by the soft output of a teacher network.
Recent findings on the neural tangent kernel enable us to approximate a wide neural network with a linear model of the network's random features.
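The linearization idea can be shown in miniature. This is a generic random-features sketch under the NTK view, not the paper's analysis: with the hidden weights `W` held fixed at their random initialization, the network reduces to a linear model over the feature map phi(x) = ReLU(Wx). All names are illustrative.

```python
def relu_random_features(x, W):
    """phi(x) = ReLU(W x) for a fixed (random) weight matrix W. In the
    wide-network regime, training only a linear readout on phi(x)
    approximates training the full network."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W]

def linear_predict(phi, v):
    """Prediction of the linearized model: f(x) = v . phi(x)."""
    return sum(vi * pi for vi, pi in zip(v, phi))
```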
arXiv Detail & Related papers (2020-10-20T07:33:21Z) - Point Adversarial Self Mining: A Simple Method for Facial Expression Recognition [79.75964372862279]
We propose Point Adversarial Self Mining (PASM) to improve the recognition accuracy in facial expression recognition.
PASM uses a point adversarial attack method and a trained teacher network to locate the most informative position related to the target task.
The adaptive learning-materials generation and teacher/student updates can be repeated multiple times, iteratively improving the network's capability.
arXiv Detail & Related papers (2020-08-26T06:39:24Z) - Adversarial Training Reduces Information and Improves Transferability [81.59364510580738]
Recent results show that features of adversarially trained networks for classification, in addition to being robust, enable desirable properties such as invertibility.
We show that adversarial training can improve linear transferability to new tasks, revealing a trade-off between the transferability of representations and accuracy on the source task.
arXiv Detail & Related papers (2020-07-22T08:30:16Z) - Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z) - Student-Teacher Curriculum Learning via Reinforcement Learning: Predicting Hospital Inpatient Admission Location [4.359338565775979]
In this work, we propose a student-teacher network trained via reinforcement learning to predict hospital inpatient admission location.
A representation of the weights of the student network is treated as the state and is fed as an input to the teacher network.
The teacher network's action is to select the most appropriate batch of data to train the student network on from a training set sorted according to entropy.
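The entropy-sorted selection step can be sketched as follows. This is a simplified illustration with assumed names and interface, not the paper's code: the training set is ranked by the entropy of the current model's predictions, then sliced into batches for the teacher policy to choose from.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy of a predictive distribution (0 terms skipped)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_sorted_batches(samples, probs_per_sample, batch_size):
    """Illustrative sketch: sort samples from most to least confident
    (ascending entropy of the model's predictions), then slice the ranked
    set into candidate batches for the teacher to select among."""
    order = sorted(range(len(samples)),
                   key=lambda i: prediction_entropy(probs_per_sample[i]))
    ranked = [samples[i] for i in order]
    return [ranked[i:i + batch_size] for i in range(0, len(ranked), batch_size)]
```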
arXiv Detail & Related papers (2020-07-01T15:00:43Z) - Semantic Drift Compensation for Class-Incremental Learning [48.749630494026086]
Class-incremental learning of deep networks sequentially increases the number of classes to be classified.
We propose a new method to estimate the drift, called semantic drift, of features and compensate for it without the need of any exemplars.
arXiv Detail & Related papers (2020-04-01T13:31:19Z)
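The drift-compensation idea from the last entry above can be sketched in a few lines. This is a deliberately simplified, assumption-laden version: it estimates a single global mean drift of features between the old and new model on current-task data, whereas the paper's semantic drift compensation weights the drift by each sample's proximity to the stored class prototype.

```python
def estimate_semantic_drift(feats_old, feats_new):
    """Per-dimension mean shift of features between the previous and current
    model, measured on current-task data (simplified global-mean variant)."""
    n = len(feats_old)
    dim = len(feats_old[0])
    return [sum(feats_new[i][d] - feats_old[i][d] for i in range(n)) / n
            for d in range(dim)]

def compensate_prototype(prototype, drift):
    """Shift a stored class prototype by the estimated drift, so old-class
    prototypes stay aligned with the updated feature space without exemplars."""
    return [p + d for p, d in zip(prototype, drift)]
```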
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.