Semi-weakly Supervised Contrastive Representation Learning for Retinal
Fundus Images
- URL: http://arxiv.org/abs/2108.02122v1
- Date: Wed, 4 Aug 2021 15:50:09 GMT
- Title: Semi-weakly Supervised Contrastive Representation Learning for Retinal
Fundus Images
- Authors: Boon Peng Yap, Beng Koon Ng
- Abstract summary: We propose a semi-weakly supervised contrastive learning framework for representation learning using semi-weakly annotated images.
We empirically validate the transfer learning performance of SWCL on seven public retinal fundus datasets.
- Score: 0.2538209532048867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the value of weak labels in learning transferable representations
for medical images. Compared to hand-labeled datasets, weak or inexact labels
can be acquired in large quantities at significantly lower cost and can provide
useful training signals for data-hungry models such as deep neural networks. We
consider weak labels in the form of pseudo-labels and propose a semi-weakly
supervised contrastive learning (SWCL) framework for representation learning
using semi-weakly annotated images. Specifically, we train a semi-supervised
model to propagate labels from a small dataset consisting of diverse
image-level annotations to a large unlabeled dataset. Using the propagated
labels, we generate a patch-level dataset for pretraining and formulate a
multi-label contrastive learning objective to capture position-specific
features encoded in each patch. We empirically validate the transfer learning
performance of SWCL on seven public retinal fundus datasets, covering three
disease classification tasks and two anatomical structure segmentation tasks.
Our experimental results suggest that, under a very-low-data regime, large-scale
ImageNet pretraining on an improved architecture remains a very strong baseline,
and recently proposed self-supervised methods falter in segmentation tasks,
possibly due to the strong invariance constraints they impose. Our method surpasses
all prior self-supervised methods and standard cross-entropy training, while
closing the gap with ImageNet pretraining.
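The abstract describes a multi-label contrastive objective over patches but does not spell out its form. As an illustration only, here is a minimal pure-Python sketch of one plausible formulation: a SupCon-style loss in which two patches count as positives whenever their multi-hot label vectors overlap. The temperature value and the overlap criterion are assumptions, not details from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def multilabel_contrastive_loss(embeddings, labels, temperature=0.1):
    """Average -log(sum over positives / sum over all non-anchor pairs).

    Two patches are treated as positives if their multi-hot label
    vectors share at least one active label. Anchors with no positive
    in the batch are skipped.
    """
    n = len(embeddings)
    total, count = 0.0, 0
    for i in range(n):
        sims = [math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
                for j in range(n) if j != i]
        pos = [math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
               for j in range(n)
               if j != i and any(a and b for a, b in zip(labels[i], labels[j]))]
        if pos:
            total += -math.log(sum(pos) / sum(sims))
            count += 1
    return total / max(count, 1)
```

When label vectors agree with embedding similarity the loss is near zero; when the labels force a dissimilar patch to be a positive, the loss grows, which is the training signal pulling same-label patches together.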
Related papers
- A Semi-Paired Approach For Label-to-Image Translation [6.888253564585197]
We introduce the first semi-supervised (semi-paired) framework for label-to-image translation.
In the semi-paired setting, the model has access to a small set of paired data and a larger set of unpaired images and labels.
We propose a training algorithm for this shared network, and we present a rare classes sampling algorithm to focus on under-represented classes.
arXiv Detail & Related papers (2023-06-23T16:13:43Z)
- Semi-Supervised Image Captioning by Adversarially Propagating Labeled Data [95.0476489266988]
We present a novel data-efficient semi-supervised framework to improve the generalization of image captioning models.
Our proposed method trains a captioner to learn from paired data and to progressively associate unpaired data.
We present extensive empirical results on both (1) image-based and (2) dense region-based captioning datasets, followed by a comprehensive analysis of the scarcely-paired dataset.
arXiv Detail & Related papers (2023-01-26T15:25:43Z) - Towards Automated Polyp Segmentation Using Weakly- and Semi-Supervised
Learning and Deformable Transformers [8.01814397869811]
Polyp segmentation is a crucial step towards computer-aided diagnosis of colorectal cancer.
Most of the polyp segmentation methods require pixel-wise annotated datasets.
We propose a novel framework that can be trained using only weakly annotated images along with exploiting unlabeled images.
arXiv Detail & Related papers (2022-11-21T20:44:12Z)
- Learning Self-Supervised Low-Rank Network for Single-Stage Weakly and Semi-Supervised Semantic Segmentation [119.009033745244]
This paper presents a Self-supervised Low-Rank Network (SLRNet) for single-stage weakly supervised semantic segmentation (WSSS) and semi-supervised semantic segmentation (SSSS).
SLRNet uses cross-view self-supervision, that is, it simultaneously predicts several attentive LR representations from different views of an image to learn precise pseudo-labels.
Experiments on the Pascal VOC 2012, COCO, and L2ID datasets demonstrate that our SLRNet outperforms both state-of-the-art WSSS and SSSS methods with a variety of different settings.
arXiv Detail & Related papers (2022-03-19T09:19:55Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Self-Paced Contrastive Learning for Semi-supervised Medical Image Segmentation with Meta-labels [6.349708371894538]
We propose to adapt contrastive learning to work with meta-label annotations.
We use the meta-labels for pre-training the image encoder as well as to regularize a semi-supervised training.
Results on three different medical image segmentation datasets show that our approach substantially boosts the performance of a model trained on a few scans.
arXiv Detail & Related papers (2021-07-29T04:30:46Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Uncertainty guided semi-supervised segmentation of retinal layers in OCT images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning based on a student-teacher approach for training the segmentation network.
The proposed framework is a key contribution and is applicable to biomedical image segmentation across various imaging modalities.
arXiv Detail & Related papers (2021-03-02T23:14:25Z)
- PseudoSeg: Designing Pseudo Labels for Semantic Segmentation [78.35515004654553]
We present a re-design of pseudo-labeling to generate structured pseudo labels for training with unlabeled or weakly-labeled data.
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
arXiv Detail & Related papers (2020-10-19T17:59:30Z)
- Semi-supervised deep learning based on label propagation in a 2D embedded space [117.9296191012968]
The proposed solution propagates labels from a small set of supervised images to a large set of unsupervised ones to train a deep neural network model.
We present a loop in which a deep neural network (VGG-16) is trained on a set with progressively more correctly labeled samples across iterations.
As the labeled set improves across iterations, it improves the features of the neural network.
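The loop described above can be sketched in miniature. The nearest-centroid classifier below is a hypothetical stand-in for the paper's VGG-16, and the margin-based confidence rule is an assumption; the sketch only shows the overall iterate-train-propagate structure.

```python
# Toy sketch of iterative label propagation: fit on the labeled set,
# pseudo-label the most confident unlabeled samples, and repeat as the
# labeled set grows.
def centroid(points):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(c) / len(points) for c in zip(*points)]

def propagate_labels(labeled, unlabeled, rounds=3, per_round=2):
    # labeled: list of (feature_vector, class_id); unlabeled: feature vectors
    labeled = list(labeled)
    pool = list(unlabeled)

    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    for _ in range(rounds):
        if not pool:
            break
        classes = sorted({c for _, c in labeled})
        cents = {c: centroid([x for x, cc in labeled if cc == c])
                 for c in classes}
        # Confidence = margin between the two nearest class centroids.
        scored = []
        for x in pool:
            ds = sorted((dist(x, cents[c]), c) for c in classes)
            margin = ds[1][0] - ds[0][0] if len(ds) > 1 else ds[0][0]
            scored.append((margin, x, ds[0][1]))
        scored.sort(reverse=True)
        # Promote the most confident samples into the labeled set.
        for _, x, c in scored[:per_round]:
            labeled.append((x, c))
            pool.remove(x)
    return labeled
```

Each round refits the classifier on the enlarged labeled set, which is the same mechanism by which the paper's labeled set, and hence the network's features, improves across iterations.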
arXiv Detail & Related papers (2020-08-02T20:08:54Z)
- Semi-supervised few-shot learning for medical image segmentation [21.349705243254423]
Recent attempts to alleviate the need for large annotated datasets have developed training strategies under the few-shot learning paradigm.
We propose a novel few-shot learning framework for semantic segmentation, where unlabeled images are also made available at each episode.
We show that including unlabeled surrogate tasks in the episodic training leads to more powerful feature representations.
arXiv Detail & Related papers (2020-03-18T20:37:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.