S5CL: Unifying Fully-Supervised, Self-Supervised, and Semi-Supervised
Learning Through Hierarchical Contrastive Learning
- URL: http://arxiv.org/abs/2203.07307v1
- Date: Mon, 14 Mar 2022 17:10:01 GMT
- Title: S5CL: Unifying Fully-Supervised, Self-Supervised, and Semi-Supervised
Learning Through Hierarchical Contrastive Learning
- Authors: Manuel Tran, Sophia J. Wagner, Melanie Boxberg, Tingying Peng
- Abstract summary: We introduce S5CL, a unified framework for fully-supervised, self-supervised, and semi-supervised learning.
With three contrastive losses defined for labeled, unlabeled, and pseudo-labeled images, S5CL can learn feature representations that reflect the hierarchy of distance relationships.
- Score: 0.22940141855172028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In computational pathology, we often face a scarcity of annotations and a
large amount of unlabeled data. One method for dealing with this is
semi-supervised learning, which is commonly split into a self-supervised pretext
task and subsequent model fine-tuning. Here, we compress this two-stage
training into one by introducing S5CL, a unified framework for
fully-supervised, self-supervised, and semi-supervised learning. With three
contrastive losses defined for labeled, unlabeled, and pseudo-labeled images,
S5CL can learn feature representations that reflect the hierarchy of distance
relationships: similar images and augmentations are embedded the closest,
followed by different-looking images of the same class, while images from
separate classes have the largest distance. Moreover, S5CL allows us to
flexibly combine these losses to adapt to different scenarios. Evaluations of
our framework on two public histopathological datasets show strong improvements
in the case of sparse labels: for an H&E-stained colorectal cancer dataset, the
accuracy increases by up to 9% compared to supervised cross-entropy loss; for a
highly imbalanced dataset of single white blood cells from leukemia patient
blood smears, the F1-score increases by up to 6%.
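To make the combined objective concrete, below is a minimal PyTorch-style sketch of how three contrastive losses over labeled, unlabeled, and pseudo-labeled embeddings could be summed. All tensor names, temperatures, and batch shapes are illustrative assumptions rather than the paper's exact implementation.

```python
# Illustrative sketch only: one generic supervised-contrastive loss applied to
# labeled, unlabeled (instance-level), and pseudo-labeled embeddings.
# Names, temperatures, and shapes are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def contrastive_loss(features, labels, temperature):
    """Pull together embeddings that share a label; with one unique id per image
    (two augmented views) this reduces to a self-supervised InfoNCE-style loss."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature                    # pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos_mask = pos_mask.masked_fill(self_mask, 0.0)              # drop self-pairs
    logits = sim.masked_fill(self_mask, -1e9)                    # exclude self from softmax
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -((pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)).mean()

# Toy batch: z_l are embeddings of labeled images, z_u holds two augmented views
# per unlabeled image, and `pseudo` are pseudo-labels predicted for those views.
z_l, y_l = torch.randn(8, 128), torch.randint(0, 4, (8,))
z_u, view_ids = torch.randn(8, 128), torch.arange(4).repeat(2)
pseudo = torch.randint(0, 4, (8,))

# Using a different temperature per loss is one way to encode the distance
# hierarchy (augmentations closest, then same-class images, then other classes).
loss = (contrastive_loss(z_l, y_l, temperature=0.1)         # labeled
        + contrastive_loss(z_u, view_ids, temperature=0.5)  # unlabeled, self-supervised
        + contrastive_loss(z_u, pseudo, temperature=0.3))   # pseudo-labeled
print(float(loss))
```

In practice the three terms can be weighted or switched on and off, which is how a single objective of this form can cover the fully-, self-, and semi-supervised settings the abstract describes.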
Related papers
- Multi-Scale Cross Contrastive Learning for Semi-Supervised Medical Image
Segmentation [14.536384387956527]
We develop a novel Multi-Scale Cross Supervised Contrastive Learning framework to segment structures in medical images.
Our approach contrasts multi-scale features based on ground-truth and cross-predicted labels, in order to extract robust feature representations.
It outperforms state-of-the-art semi-supervised methods by more than 3.0% in Dice.
arXiv Detail & Related papers (2023-06-25T16:55:32Z) - SSL-CPCD: Self-supervised learning with composite pretext-class
discrimination for improved generalisability in endoscopic image analysis [3.1542695050861544]
Deep learning-based supervised methods are widely popular in medical image analysis.
They require a large amount of training data and face issues in generalisability to unseen datasets.
We propose to explore patch-level instance-group discrimination and penalisation of inter-class variation using additive angular margin.
arXiv Detail & Related papers (2023-05-31T21:28:08Z) - Semi-Supervised Relational Contrastive Learning [8.5285439285139]
We present a novel semi-supervised learning model that leverages self-supervised contrastive loss and consistency.
We validate our method on the ISIC 2018 Challenge skin lesion classification benchmark and demonstrate its effectiveness with varying amounts of labeled data.
arXiv Detail & Related papers (2023-04-11T08:14:30Z) - Rethinking Semi-Supervised Medical Image Segmentation: A
Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z) - GraVIS: Grouping Augmented Views from Independent Sources for
Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z) - Self-supervised contrastive learning of echocardiogram videos enables
label-efficient cardiac disease diagnosis [48.64462717254158]
We developed a self-supervised contrastive learning approach, EchoCLR, tailored to echocardiogram videos.
When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improved classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS).
EchoCLR is unique in its ability to learn representations of medical videos and demonstrates that SSL can enable label-efficient disease classification from small, labeled datasets.
arXiv Detail & Related papers (2022-07-23T19:17:26Z) - Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to address the limitations of purely instance-wise contrastive learning.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z) - Semi-supervised Contrastive Learning with Similarity Co-calibration [72.38187308270135]
We propose a novel training strategy, termed Semi-supervised Contrastive Learning (SsCL).
SsCL combines the well-known contrastive loss in self-supervised learning with the cross-entropy loss in semi-supervised learning.
We show that SsCL produces more discriminative representations and is beneficial to few-shot learning.
arXiv Detail & Related papers (2021-05-16T09:13:56Z) - Multi-label Thoracic Disease Image Classification with Cross-Attention
Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond the cross-entropy loss to aid the cross-attention process and to overcome both the imbalance between classes and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.