Distilling Vision-Language Pre-training to Collaborate with
Weakly-Supervised Temporal Action Localization
- URL: http://arxiv.org/abs/2212.09335v1
- Date: Mon, 19 Dec 2022 10:02:50 GMT
- Title: Distilling Vision-Language Pre-training to Collaborate with
Weakly-Supervised Temporal Action Localization
- Authors: Chen Ju, Kunhao Zheng, Jinxiang Liu, Peisen Zhao, Ya Zhang, Jianlong
Chang, Yanfeng Wang, Qi Tian
- Abstract summary: Weakly-supervised temporal action localization learns to detect and classify action instances with only category labels.
Most methods adopt off-the-shelf Classification-Based Pre-training (CBP) to generate video features for action localization.
- Score: 77.19173283023012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Weakly-supervised temporal action localization (WTAL) learns to detect and
classify action instances with only category labels. Most methods adopt
off-the-shelf Classification-Based Pre-training (CBP) to generate video
features for action localization. However, the different optimization
objectives of classification and localization cause the temporally localized
results to suffer from a serious incompleteness issue. To tackle this issue
without additional annotations, this paper considers distilling free action
knowledge from Vision-Language Pre-training (VLP), since we surprisingly
observe that the localization results of vanilla VLP suffer from an
over-completeness issue, which is exactly complementary to the CBP results. To
fuse such complementarity, we propose a novel distillation-collaboration
framework with two branches acting as CBP and VLP respectively. The framework
is optimized through a dual-branch alternate training strategy. Specifically,
during the B step we distill confident background pseudo-labels from the CBP
branch, while during the F step confident foreground pseudo-labels are
distilled from the VLP branch. As a result, the dual-branch complementarity is
effectively fused to forge a strong alliance. Extensive experiments and
ablation studies on THUMOS14 and ActivityNet1.2 show that our method
significantly outperforms state-of-the-art methods.
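The dual-branch alternate training reads as two localizers trading confident pseudo-labels: CBP's confident background suppresses VLP's over-complete predictions, and VLP's confident foreground completes CBP's incomplete ones. The following PyTorch-style sketch is a minimal, hypothetical rendering of that loop; the branch interfaces, threshold values, and binary cross-entropy loss are our illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def confident_mask(scores, thresh, keep_above=True):
    """Select snippets whose foreground score clears a confidence threshold."""
    return scores >= thresh if keep_above else scores <= thresh

def alternate_step(cbp_branch, vlp_branch, video, step, fg_thresh=0.7, bg_thresh=0.2):
    """One round of dual-branch alternate training (hypothetical sketch).

    B step: confident *background* pseudo-labels mined from the CBP branch
    supervise the VLP branch, curbing its over-complete foreground.
    F step: confident *foreground* pseudo-labels mined from the VLP branch
    supervise the CBP branch, filling in its incomplete detections.
    Both branches are assumed to output (T,) per-snippet foreground
    probabilities for a video of T snippets.
    """
    if step == "B":
        with torch.no_grad():
            teacher = cbp_branch(video)              # frozen pseudo-label source
        mask = confident_mask(teacher, bg_thresh, keep_above=False)
        student = vlp_branch(video)
        target = torch.zeros_like(student[mask])     # push toward background
    else:  # "F" step
        with torch.no_grad():
            teacher = vlp_branch(video)
        mask = confident_mask(teacher, fg_thresh, keep_above=True)
        student = cbp_branch(video)
        target = torch.ones_like(student[mask])      # push toward foreground
    return F.binary_cross_entropy(student[mask], target)
```

Because each branch is supervised only where the other is most reliable, the alternating B/F schedule turns the two failure modes (incomplete vs. over-complete) into mutually correcting signals.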
Related papers
- Proposal-based Temporal Action Localization with Point-level Supervision [29.98225940694062]
Point-level supervised temporal action localization (PTAL) aims at recognizing and localizing actions in untrimmed videos.
We propose a novel method that localizes actions by generating and evaluating action proposals of flexible duration.
Experiments show that our proposed method achieves performance competitive with or superior to state-of-the-art methods (a rough proposal-scoring sketch follows below).
arXiv Detail & Related papers (2023-10-09T08:27:05Z)
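The proposal-based method above is summarized only as generating and evaluating action proposals of flexible duration. As a generic illustration of that pattern, not the paper's actual algorithm, one can enumerate multi-scale temporal windows and rank them by an inner-versus-outer actionness contrast (the scales, stride, and scoring rule below are our own assumptions):

```python
import numpy as np

def generate_proposals(num_snippets, scales=(8, 16, 32, 64), stride_ratio=0.25):
    """Enumerate candidate (start, end) windows of several durations."""
    proposals = []
    for scale in scales:
        stride = max(1, int(scale * stride_ratio))
        for start in range(0, num_snippets - scale + 1, stride):
            proposals.append((start, start + scale))
    return proposals

def score_proposal(actionness, start, end):
    """Interior mean minus surrounding-context mean of per-snippet actionness."""
    margin = max(1, (end - start) // 4)
    inner = actionness[start:end].mean()
    context = np.concatenate([actionness[max(0, start - margin):start],
                              actionness[end:end + margin]])
    return inner - (context.mean() if context.size else 0.0)

# Rank flexible-duration proposals for a 256-snippet video
actionness = np.random.rand(256)  # stand-in for model-predicted action scores
ranked = sorted(generate_proposals(256),
                key=lambda p: score_proposal(actionness, *p), reverse=True)
```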
- Active Learning with Effective Scoring Functions for Semi-Supervised Temporal Action Localization [15.031156121516211]
This paper focuses on a rarely investigated yet practical task named semi-supervised TAL.
We propose an effective active learning method, named AL-STAL.
Experiment results show that AL-STAL outperforms existing competitors and achieves satisfactory performance compared with fully-supervised learning.
arXiv Detail & Related papers (2022-08-31T13:39:38Z)
- Localization Distillation for Object Detection [134.12664548771534]
Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation instead of mimicking the classification logits.
We present a novel localization distillation (LD) method which can efficiently transfer localization knowledge from the teacher to the student.
We show that logit mimicking can outperform feature imitation, and that the absence of localization distillation is a critical reason why logit mimicking has underperformed for years (a minimal sketch of the idea follows below).
arXiv Detail & Related papers (2022-04-12T17:14:34Z)
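LD builds on detectors that represent each box edge as a discrete distribution over bins and distills those distributions with a temperature-softened KL divergence, in direct analogy to classification KD. A minimal sketch of that loss (bin count, temperature, and tensor layout are illustrative):

```python
import torch
import torch.nn.functional as F

def localization_distillation(student_logits, teacher_logits, T=10.0):
    """KL divergence between softened box-edge distributions.

    Both tensors: (N, 4, n_bins) logits, one discretized distribution per
    box edge (left, top, right, bottom). Temperature T softens both sides;
    the T*T factor keeps gradient magnitudes comparable across temperatures.
    """
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# Usage: distill a teacher detector's localization head into a student's
student = torch.randn(32, 4, 17, requires_grad=True)  # 17 bins per edge
teacher = torch.randn(32, 4, 17)
loss = localization_distillation(student, teacher)
loss.backward()
```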
- Fine-grained Temporal Contrastive Learning for Weakly-supervised Temporal Action Localization [87.47977407022492]
This paper argues that learning by contextually comparing sequence-to-sequence distinctions offers an essential inductive bias in weakly-supervised action localization.
Under a differentiable dynamic programming formulation, two complementary contrastive objectives are designed: Fine-grained Sequence Distance (FSD) contrasting and Longest Common Subsequence (LCS) contrasting (a toy sketch of the smoothed LCS recurrence follows below).
Our method achieves state-of-the-art performance on two popular benchmarks.
arXiv Detail & Related papers (2022-03-31T05:13:50Z)
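The differentiable dynamic programming behind an LCS-style objective can be illustrated by relaxing the hard max in the classic LCS recurrence with a temperature-scaled log-sum-exp, so the sequence score becomes differentiable in the snippet similarities. This is a toy sketch under our own assumptions, not the authors' exact objective:

```python
import torch

def soft_lcs(sim, gamma=0.1):
    """Smoothed longest-common-subsequence score between two sequences.

    sim: (m, n) pairwise similarity matrix between snippet features.
    Hard recurrence: D[i][j] = max(D[i-1][j], D[i][j-1], D[i-1][j-1] + sim);
    the max is relaxed to gamma * logsumexp(. / gamma) so gradients flow.
    The table is built functionally (no in-place writes) for autograd safety.
    """
    m, n = sim.shape
    zero = sim.new_zeros(())
    prev = [zero] * (n + 1)          # row i-1 of the DP table
    for i in range(1, m + 1):
        cur = [zero]                 # D[i][0] = 0
        for j in range(1, n + 1):
            cands = torch.stack([prev[j], cur[j - 1],
                                 prev[j - 1] + sim[i - 1, j - 1]])
            cur.append(gamma * torch.logsumexp(cands / gamma, dim=0))
        prev = cur
    return prev[n]                   # differentiable LCS-style score

# Usage: similarities between two videos' snippet features
feats_a = torch.randn(12, 128, requires_grad=True)
feats_b = torch.randn(20, 128)
score = soft_lcs(feats_a @ feats_b.t() / 128 ** 0.5)
score.backward()                     # gradients reach feats_a
```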
- Adaptive Mutual Supervision for Weakly-Supervised Temporal Action Localization [92.96802448718388]
We introduce an adaptive mutual supervision framework (AMS) for temporal action localization.
The proposed AMS method significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-04-06T08:31:10Z)
- Point-Level Temporal Action Localization: Bridging Fully-supervised Proposals to Weakly-supervised Losses [84.2964408497058]
Point-level temporal action localization (PTAL) aims to localize actions in untrimmed videos with only one timestamp annotation for each action instance.
Existing methods adopt the frame-level prediction paradigm to learn from the sparse single-frame labels.
This paper instead explores the proposal-based prediction paradigm for point-level annotations.
arXiv Detail & Related papers (2020-12-15T12:11:48Z)
- Two-phase Pseudo Label Densification for Self-training based Domain Adaptation [93.03265290594278]
We propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD.
In the first phase, we use sliding-window voting to propagate confident predictions, exploiting the intrinsic spatial correlations in the images (a minimal voting sketch follows this item).
In the second phase, we perform a confidence-based easy-hard classification.
To ease the training process and avoid noisy predictions, we introduce a bootstrapping mechanism into the original self-training loss.
arXiv Detail & Related papers (2020-12-09T02:35:25Z)
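The first-phase sliding-window voting can be read as follows: an unconfident pixel adopts the majority class of nearby confident pseudo-labels when enough of them agree. A minimal NumPy sketch of that reading (the window size, thresholds, and ignore-label convention are our assumptions, not TPLD's exact procedure):

```python
import numpy as np

def sliding_window_voting(pred, conf, conf_thresh=0.9, win=7, vote_ratio=0.5):
    """Densify segmentation pseudo-labels by local majority voting.

    pred: (H, W) predicted class ids; conf: (H, W) prediction confidence.
    Pixels below conf_thresh start as ignore (-1) and adopt the majority
    class of confident neighbors in a win x win window, if enough agree.
    """
    seeds = np.where(conf >= conf_thresh, pred, -1)
    filled = seeds.copy()
    r = win // 2
    H, W = pred.shape
    for y in range(H):
        for x in range(W):
            if seeds[y, x] != -1:
                continue  # already a confident pseudo-label
            patch = seeds[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            votes = patch[patch != -1]
            if votes.size >= vote_ratio * patch.size:
                classes, counts = np.unique(votes, return_counts=True)
                filled[y, x] = classes[np.argmax(counts)]
    return filled

# Usage on a toy 64x64 prediction map with 19 classes
pred = np.random.randint(0, 19, (64, 64))
conf = np.random.rand(64, 64)
dense = sliding_window_voting(pred, conf)
```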