ConCL: Concept Contrastive Learning for Dense Prediction Pre-training in
Pathology Images
- URL: http://arxiv.org/abs/2207.06733v1
- Date: Thu, 14 Jul 2022 08:38:17 GMT
- Title: ConCL: Concept Contrastive Learning for Dense Prediction Pre-training in
Pathology Images
- Authors: Jiawei Yang, Hanbo Chen, Yuan Liang, Junzhou Huang, Lei He, Jianhua
Yao
- Abstract summary: Self-supervised learning is appealing for such annotation-heavy tasks.
We first benchmark representative SSL methods for dense prediction tasks in pathology images.
We propose concept contrastive learning (ConCL), an SSL framework for dense pre-training.
- Score: 47.43840961882509
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting and segmenting objects within whole slide images is
essential in the computational pathology workflow. Self-supervised learning
(SSL) is appealing for such annotation-heavy tasks. Despite extensive
benchmarks for dense tasks in natural images, such studies are, unfortunately,
absent from current work in pathology. Our paper intends to narrow this gap. We first benchmark
representative SSL methods for dense prediction tasks in pathology images.
Then, we propose concept contrastive learning (ConCL), an SSL framework for
dense pre-training. We explore how ConCL performs with concepts provided by
different sources and ultimately propose a simple, dependency-free
concept-generating method that does not rely on external segmentation algorithms or
saliency detection models. Extensive experiments demonstrate the superiority of
ConCL over previous state-of-the-art SSL methods across different settings.
Along the way, we distill several important and intriguing components that
contribute to the success of dense pre-training for pathology images. We hope
this work provides useful data points and encourages the community to
conduct ConCL pre-training for problems of interest. Code is available.
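As a rough illustration of the concept-contrastive idea, here is a minimal sketch (not the released ConCL code): dense features from two augmented views are pooled into per-concept vectors, and matching concepts across views are pulled together with an InfoNCE loss. The shared random concept assignment below is only a stand-in for the paper's dependency-free concept generator, and all shapes and names are illustrative.

```python
# Hedged sketch of concept-level contrastive pre-training (not the official ConCL code).
import torch
import torch.nn.functional as F

def concept_pool(feats, concept_ids, num_concepts):
    """Average-pool dense features into per-concept vectors.
    feats: (B, C, H, W) feature map; concept_ids: (B, H, W) integer concept assignment."""
    B, C, H, W = feats.shape
    flat = feats.permute(0, 2, 3, 1).reshape(B, H * W, C)          # (B, HW, C)
    onehot = F.one_hot(concept_ids.reshape(B, H * W), num_concepts).float()  # (B, HW, K)
    counts = onehot.sum(dim=1).clamp(min=1.0)                      # (B, K)
    return torch.einsum("bnc,bnk->bkc", flat, onehot) / counts.unsqueeze(-1)  # (B, K, C)

def concept_info_nce(q_concepts, k_concepts, temperature=0.2):
    """InfoNCE over concepts: the same concept in the other view is the positive,
    every other concept in the batch serves as a negative."""
    q = F.normalize(q_concepts.reshape(-1, q_concepts.shape[-1]), dim=-1)
    k = F.normalize(k_concepts.reshape(-1, k_concepts.shape[-1]), dim=-1)
    logits = q @ k.t() / temperature
    targets = torch.arange(q.shape[0], device=q.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for two augmented views' feature maps.
B, C, H, W, K = 2, 64, 14, 14, 8
feats_q = torch.randn(B, C, H, W)              # query-encoder features (view 1)
feats_k = torch.randn(B, C, H, W)              # key/momentum-encoder features (view 2)
concept_ids = torch.randint(0, K, (B, H, W))   # placeholder concept assignment per location
loss = concept_info_nce(concept_pool(feats_q, concept_ids, K),
                        concept_pool(feats_k, concept_ids, K))
```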
Related papers
- CoBooM: Codebook Guided Bootstrapping for Medical Image Representation Learning [6.838695126692698]
Self-supervised learning has emerged as a promising paradigm for medical image analysis by harnessing unannotated data.
Existing SSL approaches overlook the high anatomical similarity inherent in medical images.
We propose CoBooM, a novel framework for self-supervised medical image learning by integrating continuous and discrete representations.
arXiv Detail & Related papers (2024-08-08T06:59:32Z) - OPTiML: Dense Semantic Invariance Using Optimal Transport for Self-Supervised Medical Image Representation [6.4136876268620115]
Self-supervised learning (SSL) has emerged as a promising technique for medical image analysis due to its ability to learn without annotations.
We introduce a novel SSL framework OPTiML, employing optimal transport (OT), to capture the dense semantic invariance and fine-grained details.
Our empirical results reveal OPTiML's superiority over state-of-the-art methods across all evaluated tasks.
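As a hedged sketch of the optimal-transport ingredient (not OPTiML's actual formulation), entropic OT via Sinkhorn iterations can produce a soft matching between the spatial tokens of two views, from which a dense-invariance loss can be built:

```python
# Illustrative only: Sinkhorn-based soft matching of dense tokens from two views.
import torch
import torch.nn.functional as F

def sinkhorn(cost, n_iters=50, eps=0.1):
    """Entropic-regularized OT with uniform marginals; returns a transport plan."""
    K = torch.exp(-cost / eps)                       # (N, M)
    u = torch.ones(cost.shape[0]) / cost.shape[0]
    v = torch.ones(cost.shape[1]) / cost.shape[1]
    a, b = u.clone(), v.clone()
    for _ in range(n_iters):
        a = u / (K @ b)
        b = v / (K.t() @ a)
    return a.unsqueeze(1) * K * b.unsqueeze(0)       # transport plan (N, M)

# Toy dense features: 49 spatial tokens per view, 128-d each.
feats1 = F.normalize(torch.randn(49, 128), dim=-1)
feats2 = F.normalize(torch.randn(49, 128), dim=-1)
cost = 1.0 - feats1 @ feats2.t()                     # cosine distance between tokens
plan = sinkhorn(cost)
loss = (plan * cost).sum()                           # one possible alignment objective
```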
arXiv Detail & Related papers (2024-04-18T02:59:48Z) - Enhancing Few-shot CLIP with Semantic-Aware Fine-Tuning [61.902254546858465]
Methods based on Contrastive Language-Image Pre-training have exhibited promising performance in few-shot adaptation tasks.
We propose fine-tuning the parameters of the attention pooling layer during the training process to encourage the model to focus on task-specific semantics.
arXiv Detail & Related papers (2023-11-08T05:18:57Z) - GenSelfDiff-HIS: Generative Self-Supervision Using Diffusion for Histopathological Image Segmentation [5.049466204159458]
Self-supervised learning (SSL) is an alternative paradigm that provides some respite by constructing models utilizing only the unannotated data.
In this paper, we propose an SSL approach for segmenting histopathological images via generative diffusion models.
Our method is based on the observation that diffusion models effectively solve an image-to-image translation task akin to a segmentation task.
arXiv Detail & Related papers (2023-09-04T09:49:24Z) - Understanding and Improving the Role of Projection Head in
Self-Supervised Learning [77.59320917894043]
Self-supervised learning (SSL) aims to produce useful feature representations without access to human-labeled data annotations.
Current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective.
This raises a fundamental question: Why is a learnable projection head required if we are to discard it after training?
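For readers unfamiliar with the setup being questioned, a minimal, generic sketch (not from this paper) of a backbone plus MLP projection head trained with InfoNCE is shown below; all module sizes are illustrative.

```python
# Generic contrastive setup: backbone -> projection head -> InfoNCE.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())  # stand-in encoder
proj_head = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 128))  # discarded later

def info_nce(z1, z2, temperature=0.1):
    """Matched rows of z1 and z2 are positives; everything else in the batch is a negative."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    return F.cross_entropy(logits, torch.arange(z1.shape[0]))

view1, view2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)  # two augmentations
loss = info_nce(proj_head(backbone(view1)), proj_head(backbone(view2)))
# After pre-training, downstream tasks reuse `backbone` only; `proj_head` is thrown away,
# which is exactly the puzzle the paper investigates.
```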
arXiv Detail & Related papers (2022-12-22T05:42:54Z) - Benchmarking Self-Supervised Learning on Diverse Pathology Datasets [10.868779327544688]
Self-supervised learning has been shown to be an effective method for utilizing unlabeled data.
We execute the largest-scale study of SSL pre-training on pathology image data.
For the first time, we apply SSL to the challenging task of nuclei instance segmentation.
arXiv Detail & Related papers (2022-12-09T06:38:34Z) - Non-Contrastive Learning Meets Language-Image Pre-Training [145.6671909437841]
We study the validity of non-contrastive language-image pre-training (nCLIP).
We introduce xCLIP, a multi-tasking framework combining CLIP and nCLIP, and show that nCLIP aids CLIP in enhancing feature semantics.
arXiv Detail & Related papers (2022-10-17T17:57:46Z) - Data-Limited Tissue Segmentation using Inpainting-Based Self-Supervised
Learning [3.7931881761831328]
Self-supervised learning (SSL) methods involving pretext tasks have shown promise in overcoming the annotation requirement by first pretraining models using unlabeled data.
We evaluate the efficacy of two SSL methods (inpainting-based pretext tasks of context prediction and context restoration) for CT and MRI image segmentation in label-limited scenarios.
We demonstrate that optimally trained and easy-to-implement SSL segmentation models can outperform classically supervised methods for MRI and CT tissue segmentation in label-limited scenarios.
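A rough, generic sketch of a context-restoration pretext task follows (the paper's architectures and masking scheme will differ): random patches of an unlabeled scan are zeroed out and a small network is trained to restore them.

```python
# Illustrative inpainting-style pretext task for pre-training a segmentation backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mask_random_patches(x, patch=8, n_patches=4):
    """Zero out a few random square patches; return the corrupted image and the mask."""
    x = x.clone()
    mask = torch.zeros_like(x)
    B, _, H, W = x.shape
    for b in range(B):
        for _ in range(n_patches):
            i = torch.randint(0, H - patch, (1,)).item()
            j = torch.randint(0, W - patch, (1,)).item()
            x[b, :, i:i + patch, j:j + patch] = 0.0
            mask[b, :, i:i + patch, j:j + patch] = 1.0
    return x, mask

# Tiny encoder-decoder stand-in for the network being pre-trained.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

scans = torch.randn(4, 1, 64, 64)                  # toy unlabeled CT/MRI slices
corrupted, mask = mask_random_patches(scans)
restored = net(corrupted)
loss = F.mse_loss(restored * mask, scans * mask)   # reconstruct only the masked regions
```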
arXiv Detail & Related papers (2022-10-14T16:34:05Z) - DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting [91.56988987393483]
We present a new framework for dense prediction by implicitly and explicitly leveraging the pre-trained knowledge from CLIP.
Specifically, we convert the original image-text matching problem in CLIP to a pixel-text matching problem and use the pixel-text score maps to guide the learning of dense prediction models.
Our method is model-agnostic, which can be applied to arbitrary dense prediction systems and various pre-trained visual backbones.
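A minimal sketch of the pixel-text matching idea, with random tensors standing in for CLIP's image and text encoders (the actual DenseCLIP also adds context-aware prompting):

```python
# Illustrative pixel-text score maps: compare each spatial feature with each class embedding.
import torch
import torch.nn.functional as F

B, C, H, W, K = 2, 512, 7, 7, 3                    # K hypothetical class prompts
pixel_feats = torch.randn(B, C, H, W)              # stand-in dense visual features
text_embeds = torch.randn(K, C)                    # stand-in per-class text embeddings

pixel_feats = F.normalize(pixel_feats, dim=1)
text_embeds = F.normalize(text_embeds, dim=-1)
score_maps = torch.einsum("bchw,kc->bkhw", pixel_feats, text_embeds)  # (B, K, H, W)

# The score maps can then guide a dense prediction head, e.g. as an auxiliary
# segmentation target or as extra channels concatenated to the features.
```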
arXiv Detail & Related papers (2021-12-02T18:59:32Z) - DenseCLIP: Extract Free Dense Labels from CLIP [130.3830819077699]
Contrastive Language-Image Pre-training (CLIP) has made a remarkable breakthrough in open-vocabulary zero-shot image recognition.
DenseCLIP+ surpasses SOTA transductive zero-shot semantic segmentation methods by large margins.
Our finding suggests that DenseCLIP can serve as a new reliable source of supervision for dense prediction tasks.
arXiv Detail & Related papers (2021-12-02T09:23:01Z)