Hierarchically Self-Supervised Transformer for Human Skeleton
Representation Learning
- URL: http://arxiv.org/abs/2207.09644v3
- Date: Mon, 27 Mar 2023 10:35:11 GMT
- Title: Hierarchically Self-Supervised Transformer for Human Skeleton
Representation Learning
- Authors: Yuxiao Chen, Long Zhao, Jianbo Yuan, Yu Tian, Zhaoyang Xia, Shijie
Geng, Ligong Han, and Dimitris N. Metaxas
- Abstract summary: We propose a self-supervised hierarchical pre-training scheme incorporated into a hierarchical Transformer-based skeleton sequence encoder (Hi-TRS).
Under both supervised and semi-supervised evaluation protocols, our method achieves state-of-the-art performance.
- Score: 45.13060970066485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the success of fully-supervised human skeleton sequence modeling,
utilizing self-supervised pre-training for skeleton sequence representation
learning has been an active field because acquiring task-specific skeleton
annotations at large scales is difficult. Recent studies focus on learning
video-level temporal and discriminative information using contrastive learning,
but overlook the hierarchical spatial-temporal nature of human skeletons.
Different from such superficial supervision at the video level, we propose a
self-supervised hierarchical pre-training scheme incorporated into a
hierarchical Transformer-based skeleton sequence encoder (Hi-TRS), to
explicitly capture spatial, short-term, and long-term temporal dependencies at
frame, clip, and video levels, respectively. To evaluate the proposed
self-supervised pre-training scheme with Hi-TRS, we conduct extensive
experiments covering three skeleton-based downstream tasks including action
recognition, action detection, and motion prediction. Under both supervised and
semi-supervised evaluation protocols, our method achieves state-of-the-art
performance. Additionally, we demonstrate that the prior knowledge learned by
our model in the pre-training stage has strong transfer capability for
different downstream tasks.
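The abstract describes a three-level hierarchy operating at the frame, clip, and video levels. Below is a minimal PyTorch sketch of such an encoder; the module names, pooling choices, and dimensions are illustrative assumptions, not the authors' released implementation.
```python
# A minimal sketch of the hierarchy the abstract describes: a spatial
# Transformer over joints within each frame, a short-term temporal Transformer
# over frames within each clip, and a long-term Transformer over clip
# embeddings for the whole video. All names and sizes are assumptions.
import torch
import torch.nn as nn


def encoder(dim, heads=4, layers=2):
    layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
    return nn.TransformerEncoder(layer, layers)


class HierarchicalSkeletonEncoder(nn.Module):
    def __init__(self, num_joints=25, in_dim=3, dim=64, clip_len=8):
        super().__init__()
        self.clip_len = clip_len
        self.joint_embed = nn.Linear(in_dim, dim)
        self.frame_trs = encoder(dim)   # spatial: joints within a frame
        self.clip_trs = encoder(dim)    # short-term: frames within a clip
        self.video_trs = encoder(dim)   # long-term: clips within the video

    def forward(self, x):
        # x: (batch, frames, joints, channels); frames divisible by clip_len
        B, T, J, C = x.shape
        h = self.joint_embed(x)                      # (B, T, J, D)
        h = self.frame_trs(h.reshape(B * T, J, -1))  # attend over joints
        frame_emb = h.mean(dim=1).reshape(B, T, -1)  # pool joints -> frame tokens
        clips = frame_emb.reshape(B * (T // self.clip_len), self.clip_len, -1)
        clip_emb = self.clip_trs(clips).mean(dim=1)  # pool frames -> clip tokens
        video = clip_emb.reshape(B, T // self.clip_len, -1)
        return self.video_trs(video).mean(dim=1)     # video-level embedding


# Usage: 25-joint skeletons, 32 frames of 3D coordinates.
model = HierarchicalSkeletonEncoder()
video_emb = model(torch.randn(2, 32, 25, 3))
print(video_emb.shape)  # torch.Size([2, 64])
```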
Related papers
- Self-Supervised Skeleton-Based Action Representation Learning: A Benchmark and Beyond [19.074841631219233]
Self-supervised learning (SSL) has been proven effective for skeleton-based action understanding.
In this paper, we conduct a comprehensive survey on self-supervised skeleton-based action representation learning.
arXiv Detail & Related papers (2024-06-05T06:21:54Z)
- Skeleton2vec: A Self-supervised Learning Framework with Contextualized Target Representations for Skeleton Sequence [56.092059713922744]
We show that using high-level contextualized features as prediction targets can achieve superior performance.
Specifically, we propose Skeleton2vec, a simple and efficient self-supervised 3D action representation learning framework.
Our proposed Skeleton2vec outperforms previous methods and achieves state-of-the-art results.
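As a rough illustration of predicting high-level contextualized targets, here is a hedged sketch in the masked-prediction style the summary suggests: a student encodes a masked sequence and regresses onto features from a frozen momentum-teacher copy that sees the full sequence. The encoder, masking scheme, and loss are simplified assumptions, not the Skeleton2vec implementation.
```python
import copy
import torch
import torch.nn.functional as F
from torch import nn

dim = 64
layer = nn.TransformerEncoderLayer(dim, 4, dim * 4, batch_first=True)
student = nn.TransformerEncoder(layer, 2)
teacher = copy.deepcopy(student)           # EMA teacher; no gradients
for p in teacher.parameters():
    p.requires_grad_(False)
mask_token = nn.Parameter(torch.zeros(dim))

x = torch.randn(2, 16, dim)                # per-frame skeleton embeddings
mask = torch.rand(2, 16) < 0.5             # mask half the frames
x_masked = torch.where(mask.unsqueeze(-1), mask_token.expand_as(x), x)

with torch.no_grad():
    targets = teacher(x)                    # contextualized features, full view
pred = student(x_masked)
loss = F.mse_loss(pred[mask], targets[mask])
loss.backward()

# After each optimizer step, teacher weights would be updated as an
# exponential moving average of the student's: t = m * t + (1 - m) * s.
```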
arXiv Detail & Related papers (2024-01-01T12:08:35Z)
- Self-Supervised Representation Learning from Temporal Ordering of Automated Driving Sequences [49.91741677556553]
We propose TempO, a temporal ordering pretext task for pre-training region-level feature representations for perception tasks.
We embed each frame as an unordered set of proposal feature vectors, a representation that is natural for object detection or tracking systems.
Extensive evaluations on the BDD100K, nuImages, and MOT17 datasets show that our TempO pre-training approach outperforms single-frame self-supervised learning methods.
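A simplified sketch of a temporal-ordering pretext task: frame-level features are shuffled and a small head is trained to recover each frame's original position. TempO's actual formulation over unordered proposal sets is more involved; everything below is an assumption.
```python
import torch
from torch import nn

num_frames, dim = 8, 128
layer = nn.TransformerEncoderLayer(dim, 4, dim * 4, batch_first=True)
context = nn.TransformerEncoder(layer, 2)  # no positional encoding: order must
order_head = nn.Linear(dim, num_frames)    # be inferred from frame content

feats = torch.randn(2, num_frames, dim)    # stand-in frame features, true order
perm = torch.randperm(num_frames)
shuffled = feats[:, perm]                  # present the frames out of order

logits = order_head(context(shuffled))     # (batch, frames, frames)
target = perm.unsqueeze(0).expand(2, -1)   # original index of each position
loss = nn.functional.cross_entropy(
    logits.reshape(-1, num_frames), target.reshape(-1))
loss.backward()
```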
arXiv Detail & Related papers (2023-02-17T18:18:27Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are available only on the source dataset and unavailable on the target dataset during training.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
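The permutation idea can be illustrated with a minimal sketch: cut a skeleton sequence into temporal segments, apply a random permutation, and classify which permutation was used. The segment count, the encoder, and the body-part variant are simplified assumptions, not the paper's design.
```python
import itertools
import torch
from torch import nn

segments, frames, feat = 3, 30, 75           # e.g. 25 joints x 3 coordinates
perms = list(itertools.permutations(range(segments)))  # 3! = 6 classes

encoder = nn.GRU(feat, 128, batch_first=True)
classifier = nn.Linear(128, len(perms))

x = torch.randn(4, frames, feat)             # a batch of skeleton sequences
label = torch.randint(len(perms), (4,))      # which permutation to apply
chunks = x.chunk(segments, dim=1)            # temporal segments
shuffled = torch.stack(
    [torch.cat([chunks[j][i] for j in perms[label[i]]]) for i in range(4)])

_, h = encoder(shuffled)                     # final hidden state summarizes clip
loss = nn.functional.cross_entropy(classifier(h[-1]), label)
loss.backward()
```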
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- UNIK: A Unified Framework for Real-world Skeleton-based Action Recognition [11.81043814295441]
We introduce UNIK, a novel skeleton-based action recognition method that is able to generalize across datasets.
To study the cross-domain generalizability of action recognition in real-world videos, we re-evaluate state-of-the-art approaches as well as the proposed UNIK.
Results show that the proposed UNIK, with pre-training on Posetics, generalizes well and outperforms the state of the art when transferred to four target action classification datasets.
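A generic sketch of such a transfer setup, with placeholder modules, sizes, and file names (not UNIK's architecture): load a pre-trained backbone, attach a fresh head for the target label set, and fine-tune the backbone with a smaller learning rate.
```python
import torch
from torch import nn

backbone = nn.Sequential(nn.Linear(150, 256), nn.ReLU())  # pre-trained stand-in
# backbone.load_state_dict(torch.load("posetics_pretrained.pt"))  # hypothetical file
head = nn.Linear(256, 60)                  # new head for the target label set

opt = torch.optim.AdamW(
    [{"params": backbone.parameters(), "lr": 1e-5},   # small lr: keep the prior
     {"params": head.parameters(), "lr": 1e-3}])      # larger lr: fresh head

x, y = torch.randn(8, 150), torch.randint(60, (8,))
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
opt.step()
```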
arXiv Detail & Related papers (2021-07-19T02:00:28Z)
- Hierarchically Decoupled Spatial-Temporal Contrast for Self-supervised Video Representation Learning [6.523119805288132]
We present a novel technique for self-supervised video representation learning by: (a) decoupling the learning objective into two contrastive subtasks respectively emphasizing spatial and temporal features, and (b) performing it hierarchically to encourage multi-scale understanding.
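A minimal sketch of the decoupling idea: one contrastive (InfoNCE) objective over spatially augmented views and a second over temporally augmented views, each with its own projection head. The augmentations, encoder, and sizes below are placeholder assumptions.
```python
import torch
import torch.nn.functional as F
from torch import nn

def info_nce(a, b, temp=0.1):
    # Cross-view InfoNCE: matching rows of a and b are the positive pairs.
    logits = F.normalize(a, dim=1) @ F.normalize(b, dim=1).T / temp
    return F.cross_entropy(logits, torch.arange(len(a)))

backbone = nn.Linear(256, 128)               # stand-in for a video encoder
head_s = nn.Linear(128, 64)                  # spatial subtask head
head_t = nn.Linear(128, 64)                  # temporal subtask head

clip = torch.randn(8, 256)                   # stand-in clip features
spatial_v1, spatial_v2 = clip + 0.1 * torch.randn(2, 8, 256)    # e.g. crops
temporal_v1, temporal_v2 = clip + 0.1 * torch.randn(2, 8, 256)  # e.g. speed/shift

loss = (info_nce(head_s(backbone(spatial_v1)), head_s(backbone(spatial_v2)))
        + info_nce(head_t(backbone(temporal_v1)), head_t(backbone(temporal_v2))))
loss.backward()
```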
arXiv Detail & Related papers (2020-11-23T08:05:39Z)
- Learning to Abstract and Predict Human Actions [60.85905430007731]
We model the hierarchical structure of human activities in videos and demonstrate the power of such structure in action prediction.
We propose Hierarchical-Refresher-Anticipator, a multi-level neural machine that can learn the structure of human activities by observing a partial hierarchy of events and roll-out such structure into a future prediction in multiple levels of abstraction.
arXiv Detail & Related papers (2020-08-20T23:57:58Z)
- SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning [114.58986229852489]
In this paper, we explore the basic and generic supervision in the sequence from spatial, sequential and temporal perspectives.
We derive a particular form named Sequence Contrastive Learning (SeCo).
SeCo shows superior results under the linear protocol on action recognition, untrimmed activity recognition and object tracking.
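One concrete form such sequence supervision can take is temporal order verification: predicting whether two frames from the same video are presented in their original order. The sketch below is an illustrative simplification under that assumption, not SeCo's full objective.
```python
import torch
from torch import nn

frame_enc = nn.Linear(512, 128)              # stand-in frame encoder
order_clf = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

early, late = torch.randn(2, 16, 512)        # features of earlier/later frames
swap = torch.rand(16) < 0.5                  # randomly swap half the pairs
a = torch.where(swap[:, None], late, early)
b = torch.where(swap[:, None], early, late)

pair = torch.cat([frame_enc(a), frame_enc(b)], dim=1)   # (16, 256)
loss = nn.functional.cross_entropy(order_clf(pair), swap.long())
loss.backward()
```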
arXiv Detail & Related papers (2020-08-03T15:51:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.