Lesion-Aware Contrastive Representation Learning for Histopathology
Whole Slide Images Analysis
- URL: http://arxiv.org/abs/2206.13115v1
- Date: Mon, 27 Jun 2022 08:39:51 GMT
- Title: Lesion-Aware Contrastive Representation Learning for Histopathology
Whole Slide Images Analysis
- Authors: Jun Li, Yushan Zheng, Kun Wu, Jun Shi, Fengying Xie, Zhiguo Jiang
- Abstract summary: We propose a novel contrastive representation learning framework named Lesion-Aware Contrastive Learning (LACL) for histopathology whole slide image analysis.
The experimental results demonstrate that LACL achieves the best performance in histopathology image representation learning on different datasets.
- Score: 16.264758789726223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Local representation learning has been a key challenge in improving the
performance of histopathological whole slide image (WSI) analysis. Previous
representation learning methods followed the supervised learning paradigm, but
manual annotation of large-scale WSIs is time-consuming and labor-intensive.
Hence, self-supervised contrastive learning has recently attracted intensive
attention. Existing contrastive learning methods treat each sample as its own
class, which suffers from class collision problems, especially in the domain of
histopathology image analysis. In this paper, we propose a novel contrastive
representation learning framework named Lesion-Aware Contrastive Learning (LACL)
for histopathology whole slide image analysis. We build a lesion queue based on
a memory bank structure to store the representations of different classes of
WSIs, which allows the contrastive model to selectively define the negative
pairs during training. Moreover, we design a queue refinement strategy to purify
the representations stored in the lesion queue. The experimental results
demonstrate that LACL achieves the best performance in histopathology image
representation learning on different datasets and outperforms state-of-the-art
methods on different WSI classification benchmarks. The code is available at
https://github.com/junl21/lacl.
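The lesion queue described above can be pictured as a MoCo-style memory bank split per lesion class, so that negatives for a query are drawn only from queues belonging to other classes. Below is a minimal, hypothetical sketch of that idea; the queue layout, class count, and the `negatives_for` helper are illustrative assumptions and not the authors' released implementation (see the repository above for the actual code).

```python
# Minimal sketch (assumed, not the authors' code) of a lesion-aware queue:
# per-class FIFO buffers of key features, with negatives drawn only from
# queues belonging to other lesion classes.
import torch
import torch.nn.functional as F


class LesionQueue:
    def __init__(self, num_classes: int, queue_len: int, feat_dim: int):
        # One fixed-length queue of L2-normalized features per lesion class.
        self.queues = F.normalize(torch.randn(num_classes, queue_len, feat_dim), dim=-1)
        self.ptr = torch.zeros(num_classes, dtype=torch.long)
        self.queue_len = queue_len

    @torch.no_grad()
    def enqueue(self, keys: torch.Tensor, labels: torch.Tensor):
        # Push key features into the queue of their (predicted) lesion class.
        for k, c in zip(keys, labels.tolist()):
            i = int(self.ptr[c])
            self.queues[c, i] = k
            self.ptr[c] = (i + 1) % self.queue_len

    def negatives_for(self, label: int) -> torch.Tensor:
        # Selective negatives: every stored feature from a *different* class.
        others = [c for c in range(self.queues.size(0)) if c != label]
        return self.queues[others].reshape(-1, self.queues.size(-1))


def lesion_aware_info_nce(q, k, labels, queue: LesionQueue, temperature=0.07):
    """InfoNCE loss whose negatives come only from other classes' queues."""
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    losses = []
    for qi, ki, ci in zip(q, k, labels.tolist()):
        neg = queue.negatives_for(ci)                                  # (N_neg, D)
        logits = torch.cat([(qi * ki).sum().view(1), neg @ qi]) / temperature
        losses.append(F.cross_entropy(logits.unsqueeze(0),
                                      torch.zeros(1, dtype=torch.long)))
    return torch.stack(losses).mean()
```

The queue refinement step mentioned in the abstract would additionally drop or overwrite stored entries whose class assignment no longer matches the model's prediction; it is omitted from this sketch for brevity.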
Related papers
- A self-supervised framework for learning whole slide representations [52.774822784847565]
We present Slide Pre-trained Transformers (SPT) for gigapixel-scale self-supervision of whole slide images.
We benchmark SPT visual representations on five diagnostic tasks across three biomedical microscopy datasets.
arXiv Detail & Related papers (2024-02-09T05:05:28Z)
- Glioma subtype classification from histopathological images using in-domain and out-of-domain transfer learning: An experimental study [9.161480191416551]
We compare various transfer learning strategies and deep learning architectures for computer-aided classification of adult-type diffuse gliomas.
A semi-supervised learning approach is proposed, where the fine-tuned models are utilized to predict the labels of unannotated regions of the whole slide images.
The models are subsequently retrained using the ground-truth labels and weak labels determined in the previous step, providing superior performance in comparison to standard in-domain transfer learning.
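The procedure summarized in this entry is essentially a pseudo-labeling loop: fine-tune on annotated regions, predict weak labels for unannotated regions, then retrain on ground-truth plus weak labels. The following is a generic, hedged sketch of such a loop; the model, data loaders, batch size, and 0.9 confidence threshold are illustrative placeholders, not the cited paper's pipeline.

```python
# Generic pseudo-labeling loop matching the procedure summarized above.
# The model, loaders, batch size, and confidence threshold are placeholders.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset


@torch.no_grad()
def predict_weak_labels(model, unlabeled_loader, device, threshold=0.9):
    """Predict labels for unannotated patches and keep the confident ones."""
    model.eval()
    patches, weak_labels = [], []
    for x in unlabeled_loader:
        probs = F.softmax(model(x.to(device)), dim=1)
        conf, pred = probs.max(dim=1)
        keep = (conf >= threshold).cpu()
        patches.append(x[keep])
        weak_labels.append(pred.cpu()[keep])
    return torch.cat(patches), torch.cat(weak_labels)


def retrain(model, labeled_loader, weak_patches, weak_labels, optimizer, device, epochs=1):
    """Retrain on ground-truth labels plus the weak labels from the step above."""
    pseudo_loader = DataLoader(TensorDataset(weak_patches, weak_labels),
                               batch_size=32, shuffle=True)
    model.train()
    for _ in range(epochs):
        for loader in (labeled_loader, pseudo_loader):
            for x, y in loader:
                loss = F.cross_entropy(model(x.to(device)), y.to(device))
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```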
arXiv Detail & Related papers (2023-09-29T13:22:17Z)
- GenSelfDiff-HIS: Generative Self-Supervision Using Diffusion for Histopathological Image Segmentation [5.049466204159458]
Self-supervised learning (SSL) is an alternative paradigm that provides some respite by constructing models utilizing only the unannotated data.
In this paper, we propose an SSL approach for segmenting histopathological images via generative diffusion models.
Our method is based on the observation that diffusion models effectively solve an image-to-image translation task akin to a segmentation task.
arXiv Detail & Related papers (2023-09-04T09:49:24Z)
- Fine-Grained Self-Supervised Learning with Jigsaw Puzzles for Medical Image Classification [11.320414512937946]
Classifying fine-grained lesions is challenging due to minor and subtle differences in medical images.
We introduce the Fine-Grained Self-Supervised Learning (FG-SSL) method for classifying subtle lesions in medical images.
We evaluate the proposed fine-grained self-supervised learning method on comprehensive experiments using various medical image recognition datasets.
arXiv Detail & Related papers (2023-08-10T02:08:15Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for classification of histopathological images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z)
- Exemplar Learning for Medical Image Segmentation [38.61378161105941]
We propose an Exemplar Learning-based Synthesis Net (ELSNet) framework for medical image segmentation.
ELSNet introduces two new modules for image segmentation: an exemplar-guided synthesis module and a pixel-prototype based contrastive embedding module.
We conduct experiments on several organ segmentation datasets and present an in-depth analysis.
arXiv Detail & Related papers (2022-04-03T00:10:06Z)
- Towards better understanding and better generalization of few-shot classification in histology images with contrastive learning [7.620702640026243]
Few-shot learning has been an established topic for natural images for years, but little work has addressed histology images.
We propose to incorporate contrastive learning (CL) with latent augmentation (LA) to build a few-shot system.
In experiments, we find i) models learned by CL generalize better than supervised learning for histology images in unseen classes, and ii) LA brings consistent gains over baselines.
arXiv Detail & Related papers (2022-02-18T07:48:34Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.