Privileged Knowledge Distillation for Online Action Detection
- URL: http://arxiv.org/abs/2011.09158v2
- Date: Thu, 3 Dec 2020 12:52:54 GMT
- Title: Privileged Knowledge Distillation for Online Action Detection
- Authors: Peisen Zhao, Lingxi Xie, Ya Zhang, Yanfeng Wang, Qi Tian
- Abstract summary: Online Action Detection (OAD) in videos is proposed as a per-frame labeling task for real-time prediction settings in which only the previous and current video frames are available.
This paper presents a learning-with-privileged-information framework for online action detection, in which future frames, observable only during training, are treated as a form of privileged information.
- Score: 114.5213840651675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online Action Detection (OAD) in videos is proposed as a per-frame labeling task for real-time prediction settings in which only the previous and current video frames are available. This paper presents a novel learning-with-privileged-information framework for online action detection, in which future frames, observable only during training, are treated as a form of privileged information. Knowledge distillation (KD) is employed to transfer the privileged information from the offline teacher to the online student. We note that this setting differs from conventional KD because the difference between the teacher and student models lies mostly in the input data rather than the network architecture. We propose Privileged Knowledge Distillation (PKD), which (i) schedules a curriculum learning procedure and (ii) inserts auxiliary nodes into the student model, both to shrink the information gap and to improve learning performance. Compared to other OAD methods that explicitly predict future frames, our approach avoids learning unpredictable and inconsistent visual content and achieves state-of-the-art accuracy on two popular OAD benchmarks, TVSeries and THUMOS14.
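The abstract describes the objective only at a high level. As a rough, hypothetical sketch (not the authors' released code), the PyTorch snippet below illustrates how an offline teacher with access to future frames could supervise an online student through a distillation term whose weight follows a simple curriculum schedule; the function name, tensor shapes, and the linear schedule are illustrative assumptions.

```python
# Hypothetical sketch of a privileged-KD objective (not the authors' implementation).
# Assumes per-frame logits of shape [batch, time, num_classes]; the teacher was run
# on past + current + future frames, the student only on past + current frames.
import torch.nn.functional as F


def pkd_loss(student_logits, teacher_logits, labels,
             epoch, max_epochs, temperature=2.0, max_kd_weight=1.0):
    b, t, c = student_logits.shape
    # Per-frame classification loss for the online student.
    ce = F.cross_entropy(student_logits.reshape(b * t, c), labels.reshape(b * t))
    # Soft-target distillation from the offline teacher; the teacher (and its
    # privileged future frames) is used only at training time.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Simple linear curriculum: rely on ground-truth labels early, then
    # gradually increase the weight of the teacher's privileged signal.
    kd_weight = max_kd_weight * min(1.0, epoch / max(1, max_epochs // 2))
    return ce + kd_weight * kd
```

In the paper, the auxiliary nodes inserted into the student model would contribute additional supervision terms; this sketch covers only the basic teacher-to-student transfer and the curriculum-style weighting mentioned in the abstract.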
Related papers
- PromptKD: Unsupervised Prompt Distillation for Vision-Language Models [40.858721356497085]
We introduce an unsupervised domain prompt distillation framework, which aims to transfer the knowledge of a larger teacher model to a lightweight target model.
Our framework consists of two distinct stages. In the initial stage, we pre-train a large CLIP teacher model using domain (few-shot) labels.
In the subsequent stage, the stored class vectors are shared across teacher and student image encoders for calculating the predicted logits.
arXiv Detail & Related papers (2024-03-05T08:53:30Z) - Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation [31.294947552032088]
Data-free Knowledge Distillation (DFKD) has gained popularity recently, with the fundamental idea of carrying out knowledge transfer from a Teacher to a Student neural network in the absence of training data.
We propose a meta-learning inspired framework by treating the tasks of Knowledge-Acquisition (learning from newly generated samples) and Knowledge-Retention (retaining knowledge on previously met samples) as meta-train and meta-test, respectively.
arXiv Detail & Related papers (2023-02-28T03:50:56Z) - PIVOT: Prompting for Video Continual Learning [50.80141083993668]
We introduce PIVOT, a novel method that leverages extensive knowledge in pre-trained models from the image domain.
Our experiments show that PIVOT improves state-of-the-art methods by a significant 27% on the 20-task ActivityNet setup.
arXiv Detail & Related papers (2022-12-09T13:22:27Z) - Exploring Inconsistent Knowledge Distillation for Object Detection with Data Augmentation [66.25738680429463]
Knowledge Distillation (KD) for object detection aims to train a compact detector by transferring knowledge from a teacher model.
We propose inconsistent knowledge distillation (IKD) which aims to distill knowledge inherent in the teacher model's counter-intuitive perceptions.
Our method outperforms state-of-the-art KD baselines on one-stage, two-stage and anchor-free object detectors.
arXiv Detail & Related papers (2022-09-20T16:36:28Z) - An Empirical Study of End-to-End Temporal Action Detection [82.64373812690127]
Temporal action detection (TAD) is an important yet challenging task in video understanding.
Rather than end-to-end learning, most existing methods adopt a head-only learning paradigm.
We validate the advantage of end-to-end learning over head-only learning and observe up to 11% performance improvement.
arXiv Detail & Related papers (2022-04-06T16:46:30Z) - Class-Incremental Learning for Action Recognition in Videos [44.923719189467164]
We tackle catastrophic forgetting problem in the context of class-incremental learning for video recognition.
Our framework addresses this challenging task by introducing time-channel importance maps and exploiting the importance maps for learning the representations of incoming examples.
We evaluate the proposed approach on brand-new splits of class-incremental action recognition benchmarks constructed upon the UCF101, HMDB51, and Something-Something V2 datasets.
arXiv Detail & Related papers (2022-03-25T12:15:49Z) - Bridging the gap between Human Action Recognition and Online Action Detection [0.456877715768796]
Action recognition, early prediction, and online action detection are complementary disciplines that are often studied independently.
We address the task-specific feature extraction with a teacher-student framework between the aforementioned disciplines.
Our network embeds online early prediction and online temporal segment proposal networks in parallel.
arXiv Detail & Related papers (2021-01-21T21:01:46Z) - ProxylessKD: Direct Knowledge Distillation with Inherited Classifier for Face Recognition [84.49978494275382]
Knowledge Distillation (KD) refers to transferring knowledge from a large model to a smaller one.
In this work, we focus on its application in face recognition.
We propose a novel method named ProxylessKD that directly optimizes face recognition accuracy.
arXiv Detail & Related papers (2020-10-31T13:14:34Z) - Memory-augmented Dense Predictive Coding for Video Representation Learning [103.69904379356413]
We propose a new architecture and learning framework, Memory-augmented Dense Predictive Coding (MemDPC), for self-supervised video representation learning.
We investigate visual-only self-supervised video representation learning from RGB frames, or from unsupervised optical flow, or both.
In all cases, we demonstrate state-of-the-art or comparable performance to other approaches while using orders of magnitude less training data.
arXiv Detail & Related papers (2020-08-03T17:57:01Z) - Learning Spatiotemporal Features via Video and Text Pair Discrimination [30.64670449131973]
The cross-modal pair discrimination (CPD) framework captures the correlation between a video and its associated text.
We train our CPD models on both a standard video dataset (Kinetics-210k) and an uncurated web video dataset (-300k) to demonstrate their effectiveness.
arXiv Detail & Related papers (2020-01-16T08:28:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.