Aggregative Self-Supervised Feature Learning from a Limited Sample
- URL: http://arxiv.org/abs/2012.07477v3
- Date: Thu, 25 Mar 2021 01:27:18 GMT
- Title: Aggregative Self-Supervised Feature Learning from a Limited Sample
- Authors: Jiuwen Zhu, Yuexiang Li, S. Kevin Zhou
- Abstract summary: We propose two strategies of aggregation in terms of complementarity of various forms to boost the robustness of self-supervised learned features.
Our experiments on 2D natural image and 3D medical image classification tasks under limited data scenarios confirm that the proposed aggregation strategies successfully boost the classification accuracy.
- Score: 12.555160911451688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL) is an efficient approach that addresses the
issue of limited training data and annotation shortage. The key part in SSL is
its proxy task that defines the supervisory signals and drives the learning
toward effective feature representations. However, most SSL approaches usually
focus on a single proxy task, which greatly limits the expressive power of the
learned features and therefore deteriorates the network generalization
capacity. In this regard, we hereby propose two strategies of aggregation in
terms of complementarity of various forms to boost the robustness of
self-supervised learned features. We first propose a principled framework of
multi-task aggregative self-supervised learning from a limited sample to form a
unified representation, with an intent of exploiting feature complementarity
among different tasks. Then, in self-aggregative SSL, we propose to
self-complement an existing proxy task with an auxiliary loss function based on
a linear centered kernel alignment metric, which explicitly promotes exploring
the parts of the feature space left uncovered by the proxy task at
hand, further boosting the modeling capability. Our extensive experiments on 2D
natural image and 3D medical image classification tasks under limited data and
annotation scenarios confirm that the proposed aggregation strategies
successfully boost the classification accuracy.
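The auxiliary loss described above is built on linear centered kernel alignment (CKA). As a point of reference, the standard linear CKA similarity between two feature matrices can be sketched as follows; this is a minimal NumPy illustration of the metric itself, not the paper's exact loss formulation, and the function name and shapes are illustrative assumptions:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representations.

    X: (n, d1) and Y: (n, d2) feature matrices for the same n samples
    (e.g. features from two proxy tasks). Returns a similarity in [0, 1];
    1 means the representations match up to an orthogonal transform and scaling.
    """
    # Center each feature dimension across the sample axis.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style cross-covariance norm, normalized by the self-similarities.
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)
```

In the self-aggregative setting, a loss that *penalizes* high CKA between the new auxiliary features and the existing proxy-task features would push the auxiliary branch toward directions the proxy task leaves uncovered.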
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Masked Momentum Contrastive Learning for Zero-shot Semantic Understanding [39.424931953675994]
Self-supervised pretraining (SSP) has emerged as a popular technique in machine learning, enabling the extraction of meaningful feature representations without labelled data.
This study endeavours to evaluate the effectiveness of pure self-supervised learning (SSL) techniques in computer vision tasks.
arXiv Detail & Related papers (2023-08-22T13:55:57Z) - Spatiotemporal Self-supervised Learning for Point Clouds in the Wild [65.56679416475943]
We introduce an SSL strategy that leverages positive pairs in both the spatial and temporal domain.
We demonstrate the benefits of our approach via extensive experiments performed by self-supervised training on two large-scale LiDAR datasets.
arXiv Detail & Related papers (2023-03-28T18:06:22Z) - FUSSL: Fuzzy Uncertain Self Supervised Learning [8.31483061185317]
Self-supervised learning (SSL) has become a very successful technique for harnessing the power of unlabeled data with no annotation effort.
In this paper, for the first time, we recognize the fundamental limits of SSL that come from the use of a single supervisory signal.
We propose a robust and general standard hierarchical learning/training protocol for any SSL baseline.
arXiv Detail & Related papers (2022-10-28T01:06:10Z) - Decoupled Adversarial Contrastive Learning for Self-supervised
Adversarial Robustness [69.39073806630583]
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.
We propose a two-stage framework termed Decoupled Adversarial Contrastive Learning (DeACL)
arXiv Detail & Related papers (2022-07-22T06:30:44Z) - Task Agnostic Representation Consolidation: a Self-supervised based
Continual Learning Approach [14.674494335647841]
We propose a two-stage training paradigm for CL that intertwines task-agnostic and task-specific learning.
We show that our training paradigm can be easily added to memory- or regularization-based approaches.
arXiv Detail & Related papers (2022-07-13T15:16:51Z) - UniVIP: A Unified Framework for Self-Supervised Visual Pre-training [50.87603616476038]
We propose a novel self-supervised framework to learn versatile visual representations on either single-centric-object or non-iconic dataset.
Massive experiments show that UniVIP pre-trained on non-iconic COCO achieves state-of-the-art transfer performance.
Our method can also exploit single-centric-object dataset such as ImageNet and outperforms BYOL by 2.5% with the same pre-training epochs in linear probing.
arXiv Detail & Related papers (2022-03-14T10:04:04Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for
Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.