Self-supervised learning using consistency regularization of
spatio-temporal data augmentation for action recognition
- URL: http://arxiv.org/abs/2008.02086v1
- Date: Wed, 5 Aug 2020 12:41:59 GMT
- Title: Self-supervised learning using consistency regularization of
spatio-temporal data augmentation for action recognition
- Authors: Jinpeng Wang, Yiqi Lin, Andy J. Ma
- Abstract summary: We present a novel way to obtain the surrogate supervision signal based on high-level feature maps under consistency regularization.
Our method achieves substantial improvements compared with state-of-the-art self-supervised learning methods for action recognition.
- Score: 15.701647552427708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning has shown great potential for improving deep
learning models in an unsupervised manner by constructing surrogate supervision
signals directly from unlabeled data. Different from existing works, we
present a novel way to obtain the surrogate supervision signal based on
high-level feature maps under consistency regularization. In this paper, we
propose a Spatio-Temporal Consistency Regularization between different output
features generated from a siamese network including a clean path fed with
original video and a noise path fed with the corresponding augmented video.
Based on the Spatio-Temporal characteristics of video, we develop two
video-based data augmentation methods, i.e., Spatio-Temporal Transformation and
Intra-Video Mixup. The former models the transformation consistency of
features, while the latter aims at retaining spatial invariance to extract
action-related features. Extensive experiments
demonstrate that our method achieves substantial improvements compared with
state-of-the-art self-supervised learning methods for action recognition. When
our method is used as an additional regularization term and combined with
current surrogate supervision signals, we achieve a 22% relative improvement
over the previous state of the art on HMDB51 and 7% on UCF101.
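The core idea in the abstract lends itself to a short sketch. The PyTorch-style code below is a minimal, illustrative sketch only: the encoder is assumed generic, the two augmentations are toy stand-ins for Spatio-Temporal Transformation and Intra-Video Mixup, and the loss form and stop-gradient choice are assumptions, not the authors' exact implementation.

# Illustrative sketch of Spatio-Temporal Consistency Regularization
# (assumptions throughout; not the authors' exact implementation).
import torch
import torch.nn.functional as F


def spatio_temporal_transform(clip):
    """Toy spatio-temporal augmentation: temporal reversal + horizontal flip.
    clip has shape (B, C, T, H, W)."""
    return torch.flip(clip, dims=[2, 4])


def intra_video_mixup(clip, alpha=0.5):
    """Toy intra-video mixup: blend each frame with another frame drawn from
    the same video, perturbing temporal context while keeping spatial content."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(clip.shape[2], device=clip.device)
    return lam * clip + (1.0 - lam) * clip[:, :, perm]


def consistency_loss(encoder, clip):
    """Siamese consistency: a clean path and a noise (augmented) path share
    the same encoder; their high-level feature maps are pulled together."""
    feat_clean = encoder(clip)                                  # clean path
    noisy = intra_video_mixup(spatio_temporal_transform(clip))
    feat_noise = encoder(noisy)                                 # noise path
    # MSE between L2-normalized high-level features; stopping gradients on
    # the clean path is an assumed design choice, not taken from the paper.
    return F.mse_loss(F.normalize(feat_noise, dim=1),
                      F.normalize(feat_clean.detach(), dim=1))

In a pretraining loop this term would simply be added to whatever surrogate supervision loss is already in use, e.g. loss = pretext_loss + lambda_reg * consistency_loss(encoder, clip), matching the abstract's description of the method as an additional regularization term (lambda_reg is a hypothetical weighting hyperparameter).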
Related papers
- MOOSS: Mask-Enhanced Temporal Contrastive Learning for Smooth State Evolution in Visual Reinforcement Learning [8.61492882526007]
In visual Reinforcement Learning (RL), learning from pixel-based observations poses significant challenges to sample efficiency.
We introduce MOOSS, a novel framework that leverages a temporal contrastive objective with the help of graph-based spatial-temporal masking.
Our evaluation on multiple continuous and discrete control benchmarks shows that MOOSS outperforms previous state-of-the-art visual RL methods in terms of sample efficiency.
arXiv Detail & Related papers (2024-09-02T18:57:53Z) - Patch Spatio-Temporal Relation Prediction for Video Anomaly Detection [19.643936110623653]
Video Anomaly Detection (VAD) aims to identify abnormalities within a specific context and timeframe.
Recent deep learning-based VAD models have shown promising results by generating high-resolution frames.
We propose a self-supervised learning approach for VAD through an inter-patch relationship prediction task.
arXiv Detail & Related papers (2024-03-28T03:07:16Z) - Deeply-Coupled Convolution-Transformer with Spatial-temporal
Complementary Learning for Video-based Person Re-identification [91.56939957189505]
We propose a novel spatial-temporal complementary learning framework named Deeply-Coupled Convolution-Transformer (DCCT) for high-performance video-based person Re-ID.
Our framework attains better performance than most state-of-the-art methods.
arXiv Detail & Related papers (2023-04-27T12:16:44Z) - DOAD: Decoupled One Stage Action Detection Network [77.14883592642782]
Localizing people and recognizing their actions from videos is a challenging task towards high-level video understanding.
Existing methods are mostly two-stage based, with one stage for person bounding box generation and the other stage for action recognition.
We present a decoupled one-stage network, dubbed DOAD, to improve efficiency for spatio-temporal action detection.
arXiv Detail & Related papers (2023-04-01T08:06:43Z) - Transfer of Representations to Video Label Propagation: Implementation
Factors Matter [31.030799003595522]
We study the impact of important implementation factors in feature extraction and label propagation.
We show that augmenting video-based correspondence cues with still-image-based ones can further improve performance.
We hope that this study will help to improve evaluation practices and better inform future research directions in temporal correspondence.
arXiv Detail & Related papers (2022-03-10T18:58:22Z) - Weakly Supervised Video Salient Object Detection [79.51227350937721]
We present the first weakly supervised video salient object detection model based on relabeled "fixation guided scribble annotations".
An appearance-motion fusion module and a bidirectional ConvLSTM-based framework are proposed to achieve effective multi-modal learning and long-term temporal context modeling.
arXiv Detail & Related papers (2021-04-06T09:48:38Z) - Domain Adaptive Robotic Gesture Recognition with Unsupervised
Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in multi-modal data towards recognizing gestures.
Results show that our approach recovers the performance with great improvement gains, up to 12.91% in ACC and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z) - Hierarchically Decoupled Spatial-Temporal Contrast for Self-supervised
Video Representation Learning [6.523119805288132]
We present a novel technique for self-supervised video representation learning by: (a) decoupling the learning objective into two contrastive subtasks respectively emphasizing spatial and temporal features, and (b) performing it hierarchically to encourage multi-scale understanding.
arXiv Detail & Related papers (2020-11-23T08:05:39Z) - Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation which leverage adversarial learning to unify the source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z) - Self-supervised Video Object Segmentation [76.83567326586162]
The objective of this paper is self-supervised representation learning, with the goal of solving semi-supervised video object segmentation (a.k.a. dense tracking).
We make the following contributions: (i) we propose to improve the existing self-supervised approach, with a simple, yet more effective memory mechanism for long-term correspondence matching; (ii) by augmenting the self-supervised approach with an online adaptation module, our method successfully alleviates tracker drifts caused by spatial-temporal discontinuity; (iv) we demonstrate state-of-the-art results among the self-supervised approaches on DAVIS-2017 and YouTube
arXiv Detail & Related papers (2020-06-22T17:55:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.