Precise Location Matching Improves Dense Contrastive Learning in Digital
Pathology
- URL: http://arxiv.org/abs/2212.12105v2
- Date: Thu, 23 Mar 2023 01:26:35 GMT
- Authors: Jingwei Zhang, Saarthak Kapse, Ke Ma, Prateek Prasanna, Maria
Vakalopoulou, Joel Saltz, Dimitris Samaras
- Abstract summary: We propose a location-based matching mechanism that uses the overlap between geometric transformations to precisely match regions across the two augmented views.
Our method outperforms previous dense matching methods by up to 7.2% in average precision for detection and 5.6% in average precision for instance segmentation.
- Score: 28.62539784951823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense prediction tasks such as segmentation and detection of pathological
entities hold crucial clinical value in computational pathology workflows.
However, obtaining dense annotations on large cohorts is usually tedious and
expensive. Contrastive learning (CL) is thus often employed to leverage large
volumes of unlabeled data to pre-train the backbone network. To boost CL for
dense prediction, some studies have proposed variations of dense matching
objectives in pre-training. However, our analysis shows that employing existing
dense matching strategies on histopathology images enforces invariance among
incorrect pairs of dense features and, thus, is imprecise. To address this, we
propose a precise location-based matching mechanism that utilizes the
overlapping information between geometric transformations to precisely match
regions in two augmentations. Extensive experiments on two pretraining datasets
(TCGA-BRCA, NCT-CRC-HE) and three downstream datasets (GlaS, CRAG, BCSS)
highlight the superiority of our method in semantic and instance segmentation
tasks. Our method outperforms previous dense matching methods by up to 7.2% in
average precision for detection and 5.6% in average precision for instance
segmentation tasks. Additionally, by using our matching mechanism in the three
popular contrastive learning frameworks, MoCo-v2, VICRegL, and ConCL, the
average precision in detection is improved by 0.7% to 5.2%, and the average
precision in segmentation is improved by 0.7% to 4.0%, demonstrating
generalizability. Our code is available at
https://github.com/cvlab-stonybrook/PLM_SSL.
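The core idea of the abstract can be sketched concretely: given the crop boxes of the two augmented views in original-image coordinates, map each dense feature cell back to the original image and pair cells by geometric proximity inside the overlap, so that only features covering the same tissue region are pulled together. This is a minimal illustrative sketch, not the released implementation; the function names (`grid_centers`, `match_by_location`), the axis-aligned-crop assumption, and all parameters are assumptions.

```python
import numpy as np

def grid_centers(box, grid=7):
    """Centers of a grid x grid feature map of a crop, in original-image coords.
    box = (x0, y0, x1, y1) of the crop within the original image."""
    x0, y0, x1, y1 = box
    xs = x0 + (np.arange(grid) + 0.5) * (x1 - x0) / grid
    ys = y0 + (np.arange(grid) + 0.5) * (y1 - y0) / grid
    gx, gy = np.meshgrid(xs, ys, indexing="xy")
    return np.stack([gx.ravel(), gy.ravel()], axis=1)  # (grid*grid, 2)

def match_by_location(box_a, box_b, grid=7, max_dist=None):
    """Pair every cell of view A that lies inside the overlap with view B
    to the geometrically nearest cell of view B."""
    ca, cb = grid_centers(box_a, grid), grid_centers(box_b, grid)
    # overlap rectangle of the two crops, in original-image coordinates
    ox0, oy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ox1, oy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    pairs = []
    for i, (x, y) in enumerate(ca):
        if not (ox0 <= x <= ox1 and oy0 <= y <= oy1):
            continue  # this cell of A sees no tissue shared with view B
        d = np.hypot(cb[:, 0] - x, cb[:, 1] - y)
        j = int(d.argmin())
        if max_dist is None or d[j] <= max_dist:
            pairs.append((i, j))
    return pairs
```

Cells of view A outside the overlap are simply dropped, which is what prevents the imprecise pairings the paper criticizes: no feature is forced to be invariant to a region it never saw.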
Related papers
- CLAMP-ViT: Contrastive Data-Free Learning for Adaptive Post-Training Quantization of ViTs [6.456189487006878]
We present CLAMP-ViT, a data-free post-training quantization method for vision transformers (ViTs).
We identify the limitations of recent techniques, notably their inability to leverage meaningful inter-patch relationships.
CLAMP-ViT employs a two-stage approach, cyclically adapting between data generation and model quantization.
arXiv Detail & Related papers (2024-07-07T05:39:25Z)
- Decoupled Contrastive Learning for Long-Tailed Recognition [58.255966442426484]
Supervised Contrastive Loss (SCL) is popular in visual representation learning.
In long-tailed recognition, where the number of samples per class is imbalanced, treating the two types of positive samples equally biases the optimization of intra-category distance.
We propose patch-based self-distillation to transfer knowledge from head to tail classes and relieve the under-representation of tail classes.
arXiv Detail & Related papers (2024-03-10T09:46:28Z)
- SSL-CPCD: Self-supervised learning with composite pretext-class discrimination for improved generalisability in endoscopic image analysis [3.1542695050861544]
Deep learning-based supervised methods are widely popular in medical image analysis.
They require a large amount of training data and face issues in generalisability to unseen datasets.
We propose to explore patch-level instance-group discrimination and penalisation of inter-class variation using additive angular margin.
arXiv Detail & Related papers (2023-05-31T21:28:08Z)
- Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Source-free unsupervised domain adaptation for cross-modality abdominal multi-organ segmentation [10.151144203061778]
It is desirable to transfer the learned knowledge from the source labeled CT dataset to the target unlabeled MR dataset for abdominal multi-organ segmentation.
We propose an effective source-free unsupervised domain adaptation method for cross-modality abdominal multi-organ segmentation without accessing the source dataset.
arXiv Detail & Related papers (2021-11-24T01:42:07Z)
- Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserves relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
arXiv Detail & Related papers (2021-10-12T22:04:19Z)
- Semi-supervised Contrastive Learning with Similarity Co-calibration [72.38187308270135]
We propose a novel training strategy, termed Semi-supervised Contrastive Learning (SsCL).
SsCL combines the well-known contrastive loss in self-supervised learning with the cross entropy loss in semi-supervised learning.
We show that SsCL produces more discriminative representations and is beneficial to few-shot learning.
arXiv Detail & Related papers (2021-05-16T09:13:56Z)
- Dense Contrastive Learning for Self-Supervised Visual Pre-Training [102.15325936477362]
We present dense contrastive learning, which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images.
Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only 1% slower).
arXiv Detail & Related papers (2020-11-18T08:42:32Z)
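The pairwise dense contrastive objective that the last entry describes, and that location-based matching feeds into, is an InfoNCE loss applied per feature cell rather than per image: each matched pair is the positive, and every other cell of the second view serves as a negative. A minimal numpy sketch follows; the function name, the flattened `(N, D)` feature layout, and the temperature value are illustrative assumptions, not the papers' actual implementations.

```python
import numpy as np

def dense_info_nce(feat_q, feat_k, pairs, temperature=0.2):
    """InfoNCE over matched dense features.
    feat_q, feat_k: (N, D) arrays of dense features, one row per feature cell.
    pairs: list of (i, j) -- cell i of view 1 is matched to cell j of view 2."""
    # L2-normalize so the dot product is a cosine similarity
    q = feat_q / np.linalg.norm(feat_q, axis=1, keepdims=True)
    k = feat_k / np.linalg.norm(feat_k, axis=1, keepdims=True)
    i_idx = np.array([i for i, _ in pairs])
    j_idx = np.array([j for _, j in pairs])
    logits = q[i_idx] @ k.T / temperature       # (P, N) similarity logits
    # cross-entropy with the matched cell of view 2 as the positive class
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(pairs)), j_idx].mean()
```

The quality of `pairs` is exactly what distinguishes the matching strategies compared in the main paper: with imprecise pairs, the positives in this loss correspond to different tissue regions and the invariance it enforces is wrong.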
This list is automatically generated from the titles and abstracts of the papers in this site.