Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts
- URL: http://arxiv.org/abs/2109.04379v1
- Date: Thu, 9 Sep 2021 16:05:55 GMT
- Title: Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts
- Authors: Hong-Yu Zhou, Chixiang Lu, Sibei Yang, Xiaoguang Han, Yizhou Yu
- Abstract summary: We present Preservational Contrastive Representation Learning (PCRL) for learning self-supervised medical representations.
PCRL provides very competitive results under the pretraining-finetuning protocol, substantially outperforming both self-supervised and supervised counterparts on five classification/segmentation tasks.
- Score: 58.53111240114021
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Preserving maximal information is one of the principles of designing
self-supervised learning methodologies. Contrastive learning pursues this goal
implicitly, by contrasting image pairs. However, we believe that contrastive
estimation alone is not fully optimal for information preservation, and that a
complementary, explicit solution is needed to preserve more information. From
this perspective, we introduce Preservational Learning, which reconstructs
diverse image contexts in order to preserve more information in the learned
representations. Combined with the contrastive loss, this yields Preservational
Contrastive Representation Learning (PCRL) for learning self-supervised medical
representations. PCRL achieves very competitive results under the
pretraining-finetuning protocol, substantially outperforming both
self-supervised and supervised counterparts on five classification/segmentation
tasks.
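To make the combined objective concrete, here is a minimal PyTorch sketch pairing a contrastive loss with an explicit reconstruction loss on a toy encoder-decoder. It is an illustration under stated assumptions, not the authors' implementation: the architecture, layer sizes, and loss weight are invented here, and the sketch reconstructs the raw input views, whereas PCRL reconstructs diverse, transformed contexts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PCRLStyleModel(nn.Module):
    """Toy encoder with a projection head (contrastive branch) and a
    decoder (preservational/reconstruction branch). Sizes are illustrative."""
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),                 # -> (N, 64)
        )
        self.proj = nn.Sequential(nn.Linear(64, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.decoder = nn.Sequential(                              # feature -> image
            nn.Linear(64, 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (1, 16, 16)),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(1, 1, 3, padding=1),
        )

    def forward(self, x):
        h = self.encoder(x)
        return self.proj(h), self.decoder(h)

def info_nce(z1, z2, tau=0.2):
    """Simplified InfoNCE: matching indices across the two views are positives;
    all other cross-view pairs in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0), device=z1.device))

# One training step: implicit preservation (contrastive) plus explicit
# preservation (reconstruction). The weight 1.0 is an arbitrary choice.
model = PCRLStyleModel()
view1, view2 = torch.rand(8, 1, 32, 32), torch.rand(8, 1, 32, 32)
z1, rec1 = model(view1)
z2, rec2 = model(view2)
loss = info_nce(z1, z2) + 1.0 * (F.mse_loss(rec1, view1) + F.mse_loss(rec2, view2))
loss.backward()
```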
Related papers
- ConPro: Learning Severity Representation for Medical Images using Contrastive Learning and Preference Optimization [30.31270613973337]
This paper proposes ConPro, a novel representation learning method for severity assessment in medical images.
We show that our representation learning framework offers valuable severity ordering in the feature space.
We also discuss severity indicators and related applications of preference comparison in the medical domain.
arXiv Detail & Related papers (2024-04-29T16:16:42Z)
- Multi-organ Self-supervised Contrastive Learning for Breast Lesion Segmentation [0.0]
This paper employs multi-organ datasets for pre-training models tailored to specific organ-related target tasks.
Our target task is breast tumour segmentation in ultrasound images.
Results show that conventional contrastive learning pre-training improves performance compared to supervised baseline approaches.
arXiv Detail & Related papers (2024-02-21T20:29:21Z)
- Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for the classification of histopathology images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach on clear cell renal cell carcinoma and prostate carcinoma, and demonstrate improvements over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z)
- DiRA: Discriminative, Restorative, and Adversarial Learning for Self-supervised Medical Image Analysis [7.137224324997715]
DiRA is a framework that unites discriminative, restorative, and adversarial learning.
It gleans complementary visual information from unlabeled medical images for semantic representation learning.
arXiv Detail & Related papers (2022-04-21T23:52:52Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that transfer better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Contrastive Learning of Single-Cell Phenotypic Representations for Treatment Classification [6.4265933507484005]
Drug development efforts typically analyse thousands of cell images to screen for potential treatments.
We leverage a contrastive learning framework to learn appropriate representations from single-cell fluorescent microscopy images.
We observe an improvement of 10% in NSCB accuracy and 11% in the NSC-NSCB drop over the previously best unsupervised method.
arXiv Detail & Related papers (2021-03-30T20:29:04Z)
- Contrastive Learning based Hybrid Networks for Long-Tailed Image Classification [31.647639786095993]
We propose a novel hybrid network structure composed of a supervised contrastive loss to learn image representations and a cross-entropy loss to learn classifiers.
Experiments on three long-tailed classification datasets demonstrate the advantage of the proposed contrastive learning based hybrid networks in long-tailed classification.
arXiv Detail & Related papers (2021-03-26T05:22:36Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations, delivering a key message to inspire future research on self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- Self-supervised Co-training for Video Representation Learning [103.69904379356413]
We investigate the benefit of adding semantic-class positives to instance-based InfoNCE (Info Noise Contrastive Estimation) training.
We propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss.
We evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval.
arXiv Detail & Related papers (2020-10-19T17:59:01Z)
- A Simple Framework for Contrastive Learning of Visual Representations [116.37752766922407]
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
We show that composition of data augmentations plays a critical role in defining effective predictive tasks.
We are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet.
arXiv Detail & Related papers (2020-02-13T18:50:45Z)
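Since several entries above (SimCLR, the InfoNCE co-training scheme, and PCRL itself) rest on the same contrastive objective, the following minimal sketch shows the NT-Xent loss that SimCLR popularized; the batch setup, embedding dimension, and temperature are illustrative choices, not values from any of the papers.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy), as in SimCLR:
    for each of the 2N embeddings, the other view of the same image is the
    positive; the remaining 2N - 2 embeddings in the batch are negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / tau                                # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    # The positive for row i is i + N (first half) or i - N (second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Example with random 128-d embeddings of two augmented views of 8 images.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2))
```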
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.