HYperbolic Self-Paced Learning for Self-Supervised Skeleton-based Action
Representations
- URL: http://arxiv.org/abs/2303.06242v1
- Date: Fri, 10 Mar 2023 23:22:41 GMT
- Title: HYperbolic Self-Paced Learning for Self-Supervised Skeleton-based Action
Representations
- Authors: Luca Franco, Paolo Mandica, Bharti Munjal, Fabio Galasso
- Abstract summary: We propose a novel HYperbolic Self-Paced model (HYSP) for learning skeleton-based action representations.
HYSP adopts self-supervision: it uses data augmentations to generate two views of the same sample, and it learns by matching one (named online) to the other (the target).
- Score: 4.870652964208548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-paced learning has been beneficial for tasks where some initial
knowledge is available, such as weakly supervised learning and domain
adaptation, to select and order the training sample sequence, from easy to
complex. However, its applicability remains unexplored in unsupervised learning,
whereby the knowledge of the task matures during training. We propose a novel
HYperbolic Self-Paced model (HYSP) for learning skeleton-based action
representations. HYSP adopts self-supervision: it uses data augmentations to
generate two views of the same sample, and it learns by matching one (named
online) to the other (the target). We propose to use hyperbolic uncertainty to
determine the algorithmic learning pace, under the assumption that less
uncertain samples should drive the training more strongly, with a larger
weight and pace. Hyperbolic uncertainty is a by-product of the adopted
hyperbolic neural networks; it matures during training and comes at no extra
cost compared to the established Euclidean SSL framework counterparts.
When tested on three established skeleton-based action recognition datasets,
HYSP outperforms the state-of-the-art on PKU-MMD I, as well as on 2 out of 3
downstream tasks on NTU-60 and NTU-120. Additionally, HYSP uses only positive
pairs, and therefore bypasses the complex and computationally demanding mining
procedures required for negatives in contrastive techniques. Code is
available at https://github.com/paolomandica/HYSP.
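As a rough illustration of the self-paced weighting idea, here is a minimal sketch. It assumes PyTorch, a Poincaré-ball embedding via the exponential map at the origin, and certainty measured by how close the target embedding sits to the ball boundary; the paper's exact hyperbolic distance and uncertainty formulation may differ.

```python
import torch
import torch.nn.functional as F

def expmap0(v, c=1.0, eps=1e-6):
    # Exponential map at the origin of the Poincare ball with curvature -c:
    # maps a Euclidean feature v into the open unit ball (for c = 1).
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(c ** 0.5 * norm) * v / (c ** 0.5 * norm)

def certainty(z, eps=1e-6):
    # Embeddings near the ball boundary (norm -> 1) are treated as certain,
    # embeddings near the origin as uncertain.
    return z.norm(dim=-1).clamp(max=1.0 - eps)

def hysp_style_loss(online, target):
    # Two augmented views of the same samples, encoded by the two branches.
    z_o = expmap0(online)
    z_t = expmap0(target.detach())   # no gradient through the target branch
    w = certainty(z_t)               # self-paced per-sample weight
    # Positive-pair matching term (cosine distance used here as a simple
    # stand-in for the paper's hyperbolic distance).
    d = 1.0 - F.cosine_similarity(z_o, z_t, dim=-1)
    return (w * d).mean()

# Usage: online, target = backbone_a(aug1(x)), backbone_b(aug2(x))
loss = hysp_style_loss(torch.randn(8, 128), torch.randn(8, 128))
```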
Related papers
- Towards Continual Learning Desiderata via HSIC-Bottleneck
Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy while requiring zero exemplar buffer and only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z)
- Skeleton2vec: A Self-supervised Learning Framework with Contextualized
Target Representations for Skeleton Sequence [56.092059713922744]
We show that using high-level contextualized features as prediction targets can achieve superior performance.
Specifically, we propose Skeleton2vec, a simple and efficient self-supervised 3D action representation learning framework.
Our proposed Skeleton2vec outperforms previous methods and achieves state-of-the-art results.
arXiv Detail & Related papers (2024-01-01T12:08:35Z)
- Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label
Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits performance comparable to weakly supervised and some fully supervised methods.
arXiv Detail & Related papers (2023-12-13T09:32:50Z)
- Complementary Learning Subnetworks for Parameter-Efficient
Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- HaLP: Hallucinating Latent Positives for Skeleton-based Self-Supervised
Learning of Actions [69.14257241250046]
We propose a new contrastive learning approach to train models for skeleton-based action recognition without labels.
Our key contribution is a simple module, HaLP, which Hallucinates Latent Positives for contrastive learning.
We show via experiments that using these generated positives within a standard contrastive learning framework leads to consistent improvements.
arXiv Detail & Related papers (2023-04-01T21:09:43Z)
- Temporal Feature Alignment in Contrastive Self-Supervised Learning for
Human Activity Recognition [2.2082422928825136]
Self-supervised learning is typically used to learn deep feature representations from unlabeled data.
We propose integrating a dynamic time warping algorithm in a latent space to force features to be aligned in the temporal dimension; a minimal DTW sketch follows this list.
The proposed approach shows great potential for learning robust feature representations compared to recent SSL baselines.
arXiv Detail & Related papers (2022-10-07T07:51:01Z)
- Self-Damaging Contrastive Learning [92.34124578823977]
Unlabeled data in reality is commonly imbalanced and shows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning to automatically balance the representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)
- STDP enhances learning by backpropagation in a spiking neural network [0.0]
The proposed method improves accuracy without additional labeling when only a small amount of labeled data is available.
It is possible to implement the proposed learning method for event-driven systems.
arXiv Detail & Related papers (2021-02-21T06:55:02Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
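As promised in the Temporal Feature Alignment entry above, here is a minimal sketch of classic dynamic time warping between two latent feature sequences (plain NumPy, illustrative only; that paper integrates a differentiable alignment into SSL training, which this sketch does not reproduce).

```python
import numpy as np

def dtw_distance(a, b):
    # a: (T1, D) and b: (T2, D) latent feature sequences.
    # Returns the minimal cumulative frame-to-frame Euclidean cost
    # along a monotone alignment path between the two sequences.
    T1, T2 = len(a), len(b)
    acc = np.full((T1 + 1, T2 + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j],
                                   acc[i, j - 1],
                                   acc[i - 1, j - 1])
    return acc[T1, T2]

# Usage: distance between two views' frame-level features of unequal length.
print(dtw_distance(np.random.randn(30, 64), np.random.randn(40, 64)))
```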
This list is automatically generated from the titles and abstracts of the papers in this site.