HaLP: Hallucinating Latent Positives for Skeleton-based Self-Supervised
Learning of Actions
- URL: http://arxiv.org/abs/2304.00387v1
- Date: Sat, 1 Apr 2023 21:09:43 GMT
- Title: HaLP: Hallucinating Latent Positives for Skeleton-based Self-Supervised
Learning of Actions
- Authors: Anshul Shah, Aniket Roy, Ketul Shah, Shlok Kumar Mishra, David Jacobs,
Anoop Cherian, Rama Chellappa
- Abstract summary: We propose a new contrastive learning approach to train models for skeleton-based action recognition without labels.
Our key contribution is a simple module, HaLP - to Hallucinate Latent Positives for contrastive learning.
We show via experiments that using these generated positives within a standard contrastive learning framework leads to consistent improvements.
- Score: 69.14257241250046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Supervised learning of skeleton sequence encoders for action recognition has
received significant attention in recent times. However, learning such encoders
without labels continues to be a challenging problem. While prior works have
shown promising results by applying contrastive learning to pose sequences, the
quality of the learned representations is often observed to be closely tied to
data augmentations that are used to craft the positives. However, augmenting
pose sequences is a difficult task as the geometric constraints among the
skeleton joints need to be enforced to make the augmentations realistic for
that action. In this work, we propose a new contrastive learning approach to
train models for skeleton-based action recognition without labels. Our key
contribution is a simple module, HaLP - to Hallucinate Latent Positives for
contrastive learning. Specifically, HaLP explores the latent space of poses in
suitable directions to generate new positives. To this end, we present a novel
optimization formulation to solve for the synthetic positives with an explicit
control on their hardness. We propose approximations to the objective, making
them solvable in closed form with minimal overhead. We show via experiments
that using these generated positives within a standard contrastive learning
framework leads to consistent improvements across benchmarks such as NTU-60,
NTU-120, and PKU-II on tasks like linear evaluation, transfer learning, and kNN
evaluation. Our code will be made available at
https://github.com/anshulbshah/HaLP.
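To make the mechanism concrete, here is a minimal PyTorch sketch of hallucinating a latent positive. It is an illustration under assumptions, not the paper's actual solver: we assume embeddings live on the unit hypersphere and that prototypes (e.g., from online clustering) are available, and we substitute a simple spherical interpolation with a `hardness` knob for the paper's closed-form optimization; the names (`hallucinate_positive`, `hardness`) are ours.

```python
import torch
import torch.nn.functional as F

def hallucinate_positive(z, prototypes, hardness=0.3):
    """Synthesize a latent positive for each anchor embedding `z`.

    Sketch only: slerp each anchor toward its nearest prototype on the
    unit hypersphere, with `hardness` in (0, 1) controlling how far to
    move (larger = harder positive). The paper instead derives the step
    from an explicit objective solved in closed form; this stand-in just
    conveys exploring the latent space in a controlled direction.
    """
    z = F.normalize(z, dim=-1)                    # (B, D) anchors
    prototypes = F.normalize(prototypes, dim=-1)  # (K, D) cluster centers
    nearest = prototypes[(z @ prototypes.t()).argmax(dim=-1)]  # (B, D)

    cos = (z * nearest).sum(-1, keepdim=True).clamp(-1 + 1e-7, 1 - 1e-7)
    omega = torch.acos(cos)                       # anchor-prototype angle
    sin_omega = torch.sin(omega)
    z_pos = (torch.sin((1 - hardness) * omega) * z
             + torch.sin(hardness * omega) * nearest) / sin_omega
    return F.normalize(z_pos, dim=-1)
```

The synthesized `z_pos` can then simply be appended to the positive set of a standard contrastive (e.g., InfoNCE) loss alongside the augmented views, which matches the "plug into a standard contrastive framework" usage the abstract describes.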
Related papers
- Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label
Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits performance comparable to weakly supervised and even some fully supervised methods.
arXiv Detail & Related papers (2023-12-13T09:32:50Z)
- Continual Contrastive Spoken Language Understanding [33.09005399967931]
COCONUT is a class-incremental learning (CIL) method that relies on the combination of experience replay and contrastive learning.
We show that COCONUT can be combined with methods that operate on the decoder side of the model, resulting in further metric improvements.
arXiv Detail & Related papers (2023-10-04T10:09:12Z)
- HYperbolic Self-Paced Learning for Self-Supervised Skeleton-based Action
Representations [4.870652964208548]
We propose a novel HYperbolic Self-Paced model (HYSP) for learning skeleton-based action representations.
HYSP adopts self-supervision: it uses data augmentations to generate two views of the same sample, and it learns by matching one (named online) to the other (the target); a sketch of this matching step appears after this list.
arXiv Detail & Related papers (2023-03-10T23:22:41Z)
- Understanding and Mitigating Overfitting in Prompt Tuning for
Vision-Language Models [108.13378788663196]
We propose Subspace Prompt Tuning (SubPT) to project the gradients in back-propagation onto the low-rank subspace spanned by the early-stage gradient flow eigenvectors during the entire training process; a sketch of this projection appears after this list.
We equip CoOp with Novel Learner Feature (NFL) to enhance the generalization ability of the learned prompts onto novel categories beyond the training set.
arXiv Detail & Related papers (2022-11-04T02:06:22Z)
- Improving Contrastive Learning with Model Augmentation [123.05700988581806]
Sequential recommendation aims to predict the next items in a user's behavior sequence, which can be solved by characterizing item relationships within sequences.
Due to data sparsity and noise issues in sequences, a new self-supervised learning (SSL) paradigm is proposed to improve the performance.
arXiv Detail & Related papers (2022-03-25T06:12:58Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based
Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Adversarial Training with Contrastive Learning in NLP [0.0]
We propose adversarial training with contrastive learning (ATCL) to adversarially train models for language processing tasks.
The core idea is to make linear perturbations in the embedding space of the input via the fast gradient method (FGM) and to train the model to keep the original and perturbed representations close via contrastive learning; a sketch of this step appears after this list.
The results show not only an improvement in the quantitative (perplexity and BLEU) scores compared to the baselines, but also good qualitative results at the semantic level for both tasks.
arXiv Detail & Related papers (2021-09-19T07:23:45Z)
- A Self-Supervised Gait Encoding Approach with Locality-Awareness for 3D
Skeleton Based Person Re-Identification [65.18004601366066]
Person re-identification (Re-ID) via gait features within 3D skeleton sequences is a newly-emerging topic with several advantages.
This paper proposes a self-supervised gait encoding approach that can leverage unlabeled skeleton data to learn gait representations for person Re-ID.
arXiv Detail & Related papers (2020-09-05T16:06:04Z)
- Continual Learning with Node-Importance based Adaptive Group Sparse
Regularization [30.23319528662881]
We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL).
Our method selectively employs two penalties when learning each node based on its importance, which is adaptively updated after learning each new task.
arXiv Detail & Related papers (2020-03-30T18:21:04Z)
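For the HYSP entry above: the online-to-target matching its summary describes is a BYOL-style step. Below is a minimal sketch of that step under our own assumptions; it omits HYSP's hyperbolic embedding and self-paced weighting, and the module and function names are illustrative.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.996):
    # The target network is an exponential moving average of the online one.
    for p_t, p_o in zip(target_net.parameters(), online_net.parameters()):
        p_t.mul_(tau).add_(p_o, alpha=1 - tau)

def matching_loss(online_net, target_net, view_a, view_b):
    # Match the online embedding of one augmented view to the target
    # embedding of the other; gradients flow only through the online side.
    p = F.normalize(online_net(view_a), dim=-1)
    with torch.no_grad():
        z = F.normalize(target_net(view_b), dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()  # cosine-based MSE
```

For the SubPT entry above: a hedged sketch of projecting back-propagated gradients onto a low-rank subspace derived from early-stage gradients. Constructing the basis via an SVD of stacked early gradients is our proxy for the paper's gradient-flow eigenvectors, and the names are ours.

```python
import torch

def early_gradient_basis(early_grads, k=4):
    # Stack flattened gradients recorded early in training and take the
    # top-k right singular vectors as an orthonormal basis of their span.
    G = torch.stack([g.flatten() for g in early_grads])  # (T, D)
    _, _, Vh = torch.linalg.svd(G, full_matrices=False)
    return Vh[:k]                                        # (k, D)

def project_gradient(grad, basis):
    # Keep only the component of the gradient inside the low-rank
    # subspace before the optimizer step.
    coeffs = basis @ grad.flatten()                      # (k,)
    return (basis.t() @ coeffs).view_as(grad)
```

For the ATCL entry above: a minimal sketch of the two ingredients the summary names, an FGM-style perturbation in the input embedding space and a contrastive loss that keeps clean and perturbed representations close. Shapes, `epsilon`, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def fgm_perturb(embeddings, task_loss, epsilon=1.0):
    # Fast gradient method: move the input embeddings along the gradient
    # of the task loss, rescaled to a fixed per-sample L2 norm.
    grad, = torch.autograd.grad(task_loss, embeddings, retain_graph=True)
    norm = grad.flatten(1).norm(dim=1).view(-1, *([1] * (grad.dim() - 1)))
    return embeddings + epsilon * grad / (norm + 1e-12)

def contrastive_alignment(h_clean, h_adv, temperature=0.1):
    # InfoNCE over clean/perturbed pairs: pull each clean representation
    # toward its perturbed counterpart, push away from other samples.
    h_clean = F.normalize(h_clean, dim=-1)
    h_adv = F.normalize(h_adv, dim=-1)
    logits = h_clean @ h_adv.t() / temperature       # (B, B)
    targets = torch.arange(h_clean.size(0), device=h_clean.device)
    return F.cross_entropy(logits, targets)
```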
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.