Review helps learn better: Temporal Supervised Knowledge Distillation
- URL: http://arxiv.org/abs/2307.00811v2
- Date: Mon, 21 Aug 2023 08:45:07 GMT
- Title: Review helps learn better: Temporal Supervised Knowledge Distillation
- Authors: Dongwei Wang, Zhi Han, Yanmei Wang, Xiai Chen, Baichen Liu and Yandong Tang
- Abstract summary: We find that during network training, the evolution of feature maps follows a temporal sequence property.
Inspired by this observation, we propose Temporal Supervised Knowledge Distillation (TSKD).
- Score: 9.220654594406508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reviewing plays an important role in learning. Knowledge acquired
at a given time point can be strongly informed by previous experience, so the
process of knowledge growth should show a strong relationship along the
temporal dimension. In our research, we find that during network training, the
evolution of feature maps follows a temporal sequence property, and that
proper temporal supervision can further improve training performance. Inspired
by this observation, we propose Temporal Supervised Knowledge Distillation
(TSKD). Specifically, we extract spatiotemporal features from the different
training phases of the student with a convolutional Long Short-Term Memory
network (Conv-LSTM), and then train the student network against this dynamic
target rather than against static teacher-network features. This process
refines the old knowledge in the student network and uses it to assist current
learning. Extensive experiments verify the effectiveness and advantages of our
method over existing knowledge distillation methods, across various network
architectures and different tasks (image classification and object detection).
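
As a concrete illustration of the mechanism the abstract describes, the sketch below aggregates buffered student feature snapshots with a Conv-LSTM into a dynamic distillation target. This is a minimal PyTorch sketch under stated assumptions: the ConvLSTM cell design, the feature shapes, the MSE distillation loss, and the names (ConvLSTMCell, TemporalTargetBuilder, temporal_kd_loss) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: all four gates come from one convolution."""

    def __init__(self, in_ch: int, hid_ch: int, kernel: int = 3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel,
                               padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class TemporalTargetBuilder(nn.Module):
    """Rolls a sequence of student feature snapshots into a dynamic target."""

    def __init__(self, channels: int):
        super().__init__()
        self.cell = ConvLSTMCell(channels, channels)

    def forward(self, snapshots):
        # snapshots: list of [B, C, H, W] student feature maps saved at
        # earlier training phases, oldest first (stored detached).
        b, _, height, width = snapshots[0].shape
        h = snapshots[0].new_zeros(b, self.cell.hid_ch, height, width)
        c = torch.zeros_like(h)
        for feat in snapshots:
            h, c = self.cell(feat, (h, c))
        return h  # dynamic target replacing a static teacher feature map


def temporal_kd_loss(current_feat, snapshots, builder):
    # Pull the current student feature toward the Conv-LSTM summary of its
    # own past features; gradients here reach both the student and the
    # aggregator (how the Conv-LSTM is optimized is an assumption).
    target = builder(snapshots)
    return F.mse_loss(current_feat, target)
```

In use, one would save detached student feature maps at periodic training checkpoints and add a weighted temporal_kd_loss to the ordinary task loss.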
Related papers
- Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation [11.754014876977422]
This paper introduces a novel student-oriented perspective, refining the teacher's knowledge to better align with the student's needs.
We present the Student-Oriented Knowledge Distillation (SoKD), which incorporates a learnable feature augmentation strategy during training.
We also deploy the Distinctive Area Detection Module (DAM) to identify areas of mutual interest between the teacher and student.
arXiv Detail & Related papers (2024-09-27T14:34:08Z)
On effects of Knowledge Distillation on Transfer Learning [0.0]
We propose a machine learning architecture we call TL+KD that combines knowledge distillation with transfer learning.
We show that, using guidance and knowledge from a larger teacher network during fine-tuning, we can improve the student network's validation performance, such as accuracy.
arXiv Detail & Related papers (2022-10-18T08:11:52Z)
Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting (LIRF) explicitly handles task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets and demonstrate that the proposed LIRF strategy yields encouraging results with good generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z)
Theoretical Understanding of the Information Flow on Continual Learning Performance [2.741266294612776]
Continual learning (CL) is a setting in which an agent has to learn from an incoming stream of data sequentially.
We study the relationship between CL performance and information flow in the network to answer the question "How can knowledge of information flow between layers be used to alleviate catastrophic forgetting (CF)?"
Our analysis provides novel insights into information adaptation within the layers during incremental task learning.
arXiv Detail & Related papers (2022-04-26T00:35:58Z)
Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect isolated areas of deep learning through their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
Distilling Knowledge via Knowledge Review [69.15050871776552]
We study connection paths across levels between teacher and student networks, and reveal their great importance.
For the first time in knowledge distillation, cross-stage connection paths are proposed.
Our final nested and compact framework requires negligible overhead, and outperforms other methods on a variety of tasks.
arXiv Detail & Related papers (2021-04-19T04:36:24Z)
Student Network Learning via Evolutionary Knowledge Distillation [22.030934154498205]
We propose an evolutionary knowledge distillation approach to improve the transfer effectiveness of teacher knowledge.
Instead of a fixed pre-trained teacher, an evolutionary teacher is learned online and consistently transfers intermediate knowledge to supervise student network learning on-the-fly.
In this way, the student can simultaneously obtain rich internal knowledge and capture its growth process, leading to effective student network learning.
arXiv Detail & Related papers (2021-03-23T02:07:15Z)
Collaborative Distillation in the Parameter and Spectrum Domains for Video Action Recognition [79.60708268515293]
This paper explores how to train small and efficient networks for action recognition.
We propose two distillation strategies in the frequency domain, namely feature-spectrum distillation and parameter-distribution distillation.
Our method can achieve higher performance than state-of-the-art methods with the same backbone.
arXiv Detail & Related papers (2020-09-15T07:29:57Z)
Point Adversarial Self Mining: A Simple Method for Facial Expression Recognition [79.75964372862279]
We propose Point Adversarial Self Mining (PASM) to improve the recognition accuracy in facial expression recognition.
PASM uses a point adversarial attack method and a trained teacher network to locate the most informative position related to the target task.
Adaptive learning-material generation and teacher/student updates can be conducted more than once, iteratively improving network capability.
arXiv Detail & Related papers (2020-08-26T06:39:24Z)
Interactive Knowledge Distillation [79.12866404907506]
We propose an InterActive Knowledge Distillation (IAKD) scheme that leverages an interactive teaching strategy for efficient knowledge distillation.
In the distillation process, the interaction between teacher and student networks is implemented by a swapping-in operation.
Experiments with typical settings of teacher-student networks demonstrate that the student networks trained by our IAKD achieve better performance than those trained by conventional knowledge distillation methods.
arXiv Detail & Related papers (2020-07-03T03:22:04Z)