Contrastive Learning Meets Transfer Learning: A Case Study In Medical
Image Analysis
- URL: http://arxiv.org/abs/2103.03166v1
- Date: Thu, 4 Mar 2021 17:19:54 GMT
- Title: Contrastive Learning Meets Transfer Learning: A Case Study In Medical
Image Analysis
- Authors: Yuzhe Lu, Aadarsh Jha, and Yuankai Huo
- Abstract summary: Annotated medical images are typically rarer than labeled natural images since they are limited by domain knowledge and privacy constraints.
Recent advances in transfer and contrastive learning have provided effective solutions to tackle such issues from different perspectives.
It would be appealing to accelerate contrastive learning with transfer learning, given that slow convergence speed is a critical limitation of modern contrastive learning approaches.
- Score: 2.4050073971195003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Annotated medical images are typically rarer than labeled natural images
since they are limited by domain knowledge and privacy constraints. Recent
advances in transfer and contrastive learning have provided effective solutions
to tackle such issues from different perspectives. The state-of-the-art
transfer learning (e.g., Big Transfer (BiT)) and contrastive learning (e.g.,
Simple Siamese Contrastive Learning (SimSiam)) approaches have been
investigated independently, without considering the complementary nature of
such techniques. It would be appealing to accelerate contrastive learning with
transfer learning, given that slow convergence speed is a critical limitation
of modern contrastive learning approaches. In this paper, we investigate the
feasibility of aligning BiT with SimSiam. Empirical analyses show that the
difference in normalization techniques (Group Norm in BiT vs. Batch Norm in
SimSiam) is the key hurdle in adapting BiT to SimSiam. We evaluated BiT,
SimSiam, and the combined BiT+SimSiam on the CIFAR-10 and HAM10000 datasets.
The results suggest that the BiT models accelerate the convergence of SimSiam
and that, used together, the combined model outperforms both of its
counterparts. We hope this study will motivate
researchers to revisit the task of aggregating big pre-trained models with
contrastive learning models for image analysis.
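The normalization mismatch highlighted in the abstract can be made concrete: Batch Norm (SimSiam) computes statistics per channel across the whole batch, so each sample's output depends on its batchmates, while Group Norm (BiT) normalizes over channel groups within each sample independently. The following is a minimal NumPy sketch of this distinction, illustrative only and not code from the paper:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # x: (N, C, H, W). Statistics are computed per channel ACROSS the batch,
    # so one sample's normalized output depends on the other samples.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def group_norm(x, num_groups, eps=1e-5):
    # x: (N, C, H, W). Statistics are computed per sample within channel
    # groups, so the output is independent of batch composition and size.
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

x = np.random.randn(8, 4, 2, 2)
bn, gn = batch_norm(x), group_norm(x, num_groups=2)

# Group Norm gives the same result for a sample regardless of batch size:
gn_single = group_norm(x[:1], num_groups=2)
print(np.allclose(gn[:1], gn_single))  # True

# Batch Norm does not: single-sample statistics differ from batch statistics.
bn_single = batch_norm(x[:1])
print(np.allclose(bn[:1], bn_single))  # differs for random data
```

This batch dependence is one reason swapping normalization layers between a BiT backbone and a SimSiam training pipeline is non-trivial.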
Related papers
- Robust image representations with counterfactual contrastive learning [17.273155534515393]
We introduce counterfactual contrastive learning, a novel framework leveraging recent advances in causal image synthesis.
Our method, evaluated across five datasets, outperforms standard contrastive learning in terms of robustness to acquisition shift.
Further experiments show that the proposed framework extends beyond acquisition shifts, with models trained with counterfactual contrastive learning substantially improving subgroup performance across biological sex.
arXiv Detail & Related papers (2024-09-16T15:11:00Z)
- SimMAT: Exploring Transferability from Vision Foundation Models to Any Image Modality [136.82569085134554]
Foundation models such as ChatGPT and Sora, trained on huge amounts of data, have had a revolutionary social impact.
It is extremely challenging for sensors in many different fields to collect similar scales of natural images to train strong foundation models.
This work presents a simple and effective framework SimMAT to study an open problem: the transferability from vision foundation models trained on natural RGB images to other image modalities of different physical properties.
arXiv Detail & Related papers (2024-09-12T14:38:21Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- The Common Stability Mechanism behind most Self-Supervised Learning Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the Imagenet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z)
- Symmetrical Bidirectional Knowledge Alignment for Zero-Shot Sketch-Based Image Retrieval [69.46139774646308]
This paper studies the problem of zero-shot sketch-based image retrieval (ZS-SBIR).
It aims to use sketches from unseen categories as queries to match the images of the same category.
We propose a novel Symmetrical Bidirectional Knowledge Alignment for zero-shot sketch-based image retrieval (SBKA).
arXiv Detail & Related papers (2023-12-16T04:50:34Z)
- Graph-Aware Contrasting for Multivariate Time-Series Classification [50.84488941336865]
Existing contrastive learning methods mainly focus on achieving temporal consistency with temporal augmentation and contrasting techniques.
We propose Graph-Aware Contrasting for spatial consistency across MTS data.
Our proposed method achieves state-of-the-art performance on various MTS classification tasks.
arXiv Detail & Related papers (2023-09-11T02:35:22Z)
- Hallucination Improves the Performance of Unsupervised Visual Representation Learning [9.504503675097137]
We propose Hallucinator that could efficiently generate additional positive samples for further contrast.
The Hallucinator is differentiable and creates new data in the feature space.
Remarkably, we empirically prove that the proposed Hallucinator generalizes well to various contrastive learning models.
arXiv Detail & Related papers (2023-07-22T21:15:56Z)
- Semantically Contrastive Learning for Low-light Image Enhancement [48.71522073014808]
Low-light image enhancement (LLE) remains challenging due to the prevailing low contrast and weak visibility of single RGB images.
We propose an effective semantically contrastive learning paradigm for LLE (namely SCL-LLE)
Our method surpasses state-of-the-art LLE models on six independent cross-scene datasets.
arXiv Detail & Related papers (2021-12-13T07:08:33Z)
- SimTriplet: Simple Triplet Representation Learning with a Single GPU [4.793871743112708]
We propose a simple triplet representation learning (SimTriplet) approach on pathological images.
By learning from 79,000 unlabeled pathological patch images, SimTriplet achieved 10.58% better performance compared with supervised learning.
arXiv Detail & Related papers (2021-03-09T17:46:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.