Adversarial Self-Supervised Learning for Semi-Supervised 3D Action
Recognition
- URL: http://arxiv.org/abs/2007.05934v1
- Date: Sun, 12 Jul 2020 08:01:06 GMT
- Title: Adversarial Self-Supervised Learning for Semi-Supervised 3D Action
Recognition
- Authors: Chenyang Si, Xuecheng Nie, Wei Wang, Liang Wang, Tieniu Tan, Jiashi
Feng
- Abstract summary: We present Adversarial Self-Supervised Learning (ASSL), a novel framework that tightly couples SSL and the semi-supervised scheme.
Specifically, we design an effective SSL scheme to improve the discrimination capability of learned representations for 3D action recognition.
- Score: 123.62183172631443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of semi-supervised 3D action recognition which has
been rarely explored before. Its major challenge lies in how to effectively
learn motion representations from unlabeled data. Self-supervised learning
(SSL) has proven very effective at learning representations from unlabeled
data in the image domain. However, few effective self-supervised approaches
exist for 3D action recognition, and directly applying SSL for semi-supervised
learning suffers from misalignment of representations learned from SSL and
supervised learning tasks. To address these issues, we present Adversarial
Self-Supervised Learning (ASSL), a novel framework that tightly couples SSL and
the semi-supervised scheme via neighbor relation exploration and adversarial
learning. Specifically, we design an effective SSL scheme to improve the
discrimination capability of learned representations for 3D action recognition,
through exploring the data relations within a neighborhood. We further propose
an adversarial regularization to align the feature distributions of labeled and
unlabeled samples. To demonstrate the effectiveness of the proposed ASSL in
semi-supervised 3D action recognition, we conduct extensive experiments on NTU
and N-UCLA datasets. The results confirm its advantageous performance over
state-of-the-art semi-supervised methods in the few-label regime for 3D action
recognition.
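
The abstract names two ingredients: a neighbor-relation SSL objective and an adversarial regularizer that aligns the feature distributions of labeled and unlabeled samples. The PyTorch-style sketch below only illustrates how such a combination can be wired up in general terms; the encoder interface, discriminator architecture, neighbor count k, and loss weighting are illustrative assumptions and not the authors' implementation.

```python
# Illustrative PyTorch-style sketch, not the authors' code: combines a
# neighbor-relation consistency term with an adversarial regularizer that
# aligns labeled and unlabeled feature distributions, mirroring the two
# components the abstract describes at a high level.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDiscriminator(nn.Module):
    """Predicts whether a feature comes from a labeled or an unlabeled sequence."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feats):
        return self.net(feats).squeeze(-1)

def neighbor_consistency_loss(feats, k=5):
    """Pull each unlabeled feature toward its k nearest neighbors in the batch
    (a stand-in for the neighbor-relation exploration described in the abstract)."""
    f = F.normalize(feats, dim=1)
    sim = f @ f.t()                                        # pairwise cosine similarity
    sim = sim - 2.0 * torch.eye(len(f), device=f.device)   # exclude self-matches
    nn_sim, _ = sim.topk(k, dim=1)                         # k most similar samples
    return (1.0 - nn_sim).mean()                           # push neighbor similarity up

def adversarial_alignment_step(encoder, disc, x_lab, x_unlab, opt_disc):
    """One discriminator update plus the encoder-side losses for one batch."""
    f_lab, f_unlab = encoder(x_lab), encoder(x_unlab)

    # 1) Train the discriminator to separate labeled from unlabeled features.
    d_in = torch.cat([f_lab.detach(), f_unlab.detach()])
    d_target = torch.cat([torch.ones(len(f_lab)),
                          torch.zeros(len(f_unlab))]).to(d_in.device)
    d_loss = F.binary_cross_entropy_with_logits(disc(d_in), d_target)
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # 2) Encoder tries to make unlabeled features indistinguishable from labeled
    #    ones (the caller backpropagates this through the encoder only).
    adv_loss = F.binary_cross_entropy_with_logits(
        disc(f_unlab), torch.ones(len(f_unlab), device=f_unlab.device))
    return adv_loss + neighbor_consistency_loss(f_unlab)
```

In a complete semi-supervised loop, this return value would be added, with tuned weights, to the supervised cross-entropy on the labeled clips.
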
Related papers
- DyConfidMatch: Dynamic Thresholding and Re-sampling for 3D Semi-supervised Learning [4.259908158892314]
Semi-supervised learning (SSL) leverages limited labeled and abundant unlabeled data but often faces challenges with data imbalance.
This study investigates class-level confidence as an indicator of learning status in 3D SSL, proposing a novel method that utilizes dynamic thresholding.
A re-sampling strategy is also introduced to mitigate bias towards well-represented classes, ensuring equitable class representation.
arXiv Detail & Related papers (2024-11-13T05:09:28Z)
- How To Overcome Confirmation Bias in Semi-Supervised Image Classification By Active Learning [2.1805442504863506]
We present three data challenges common in real-world applications: between-class imbalance, within-class imbalance, and between-class similarity.
We find that random sampling does not mitigate confirmation bias and, in some cases, leads to worse performance than supervised learning.
Our results provide insights into the potential of combining active and semi-supervised learning in the presence of common real-world challenges.
arXiv Detail & Related papers (2023-08-16T08:52:49Z)
- Semantic Positive Pairs for Enhancing Visual Representation Learning of Instance Discrimination methods [4.680881326162484]
Self-supervised learning (SSL) algorithms based on instance discrimination have shown promising results.
We propose an approach to identify those images with similar semantic content and treat them as positive instances.
We run experiments on three benchmark datasets: ImageNet, STL-10 and CIFAR-10 with different instance discrimination SSL approaches.
arXiv Detail & Related papers (2023-06-28T11:47:08Z)
- Hierarchical Supervision and Shuffle Data Augmentation for 3D Semi-Supervised Object Detection [90.32180043449263]
State-of-the-art 3D object detectors are usually trained on large-scale datasets with high-quality 3D annotations.
A natural remedy is to adopt semi-supervised learning (SSL) by leveraging a limited amount of labeled samples and abundant unlabeled samples.
This paper introduces Hierarchical Supervision and Shuffle Data Augmentation (HSSDA), a simple yet effective teacher-student framework.
arXiv Detail & Related papers (2023-04-04T02:09:32Z)
- Self-Supervised Visual Representation Learning via Residual Momentum [15.515169550346517]
Self-supervised learning (SSL) approaches have shown promising capabilities in learning the representation from unlabeled data.
However, momentum-based SSL frameworks suffer from a large representation gap between the online encoder (student) and the momentum encoder (teacher).
This paper is the first to investigate and identify this invisible gap as a bottleneck overlooked in existing SSL frameworks.
We propose "residual momentum" to directly reduce this gap to encourage the student to learn the representation as close to that of the teacher as possible.
arXiv Detail & Related papers (2022-11-17T19:54:02Z)
- Class-Level Confidence Based 3D Semi-Supervised Learning [18.95161296147023]
We show that the class-level confidence of unlabeled data can reflect the learning status on imbalanced 3D datasets (a generic sketch of this class-confidence thresholding idea appears after this list).
Our method significantly outperforms state-of-the-art counterparts for both 3D SSL classification and detection tasks.
arXiv Detail & Related papers (2022-10-18T20:13:28Z)
- Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness [69.39073806630583]
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.
We propose a two-stage framework termed Decoupled Adversarial Contrastive Learning (DeACL).
arXiv Detail & Related papers (2022-07-22T06:30:44Z)
- On Higher Adversarial Susceptibility of Contrastive Self-Supervised Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It is still largely unknown if the nature of the representation induced by the two learning paradigms is similar.
We identify the uniform distribution of data representations over a unit hypersphere in the CSL representation space as the key contributor to the higher adversarial susceptibility of CSL models.
We devise strategies that are simple, yet effective in improving model robustness with CSL training.
arXiv Detail & Related papers (2022-07-22T03:49:50Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data are prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations and thus delivers a key message to inspire future research on self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
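
Two of the related entries above (DyConfidMatch and Class-Level Confidence Based 3D Semi-Supervised Learning) build on the same observation: per-class confidence on unlabeled data indicates how well each class has been learned, so pseudo-label thresholds should not be uniform across classes. The sketch below is a generic, assumption-laden illustration of that idea; the base threshold, the scaling rule, and the function names are made up for exposition and are not taken from either paper.

```python
# Illustrative sketch only (thresholds, scaling rule, and names are assumed,
# not taken from either paper): pseudo-label selection where each class gets
# its own threshold, relaxed for classes the model is still unsure about.
import torch
import torch.nn.functional as F

def class_level_thresholds(probs, base_tau=0.95, num_classes=10):
    """Estimate per-class learning status from unlabeled predictions and scale
    the confidence threshold accordingly."""
    conf, pred = probs.max(dim=1)
    status = torch.zeros(num_classes, device=probs.device)
    for c in range(num_classes):
        mask = pred == c
        if mask.any():
            status[c] = conf[mask].mean()
    # classes with lower average confidence get a proportionally lower threshold
    return base_tau * status / status.max().clamp(min=1e-6)

def select_pseudo_labels(logits_unlabeled, num_classes=10):
    """Keep only unlabeled samples whose confidence exceeds their class threshold."""
    probs = F.softmax(logits_unlabeled, dim=1)
    tau = class_level_thresholds(probs, num_classes=num_classes)
    conf, pred = probs.max(dim=1)
    keep = conf >= tau[pred]
    return pred[keep], keep
```

A re-sampling step of the kind DyConfidMatch describes could then oversample the classes whose thresholds stay low, so that pseudo-labeled batches do not collapse onto the well-learned classes.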