Learning Representations with Contrastive Self-Supervised Learning for
Histopathology Applications
- URL: http://arxiv.org/abs/2112.05760v1
- Date: Fri, 10 Dec 2021 16:08:57 GMT
- Authors: Karin Stacke, Jonas Unger, Claes Lundström, Gabriel Eilertsen
- Abstract summary: We show how contrastive self-supervised learning can reduce the annotation effort within digital pathology.
Our results pave the way for realizing the full potential of self-supervised learning for histopathology applications.
- Score: 8.69535649683089
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised learning has made substantial progress over the last few years,
especially by means of contrastive self-supervised learning. The dominating
dataset for benchmarking self-supervised learning has been ImageNet, for which
recent methods are approaching the performance achieved by fully supervised
training. The ImageNet dataset is, however, largely object-centric, and it is
not yet clear what potential these methods have on widely different datasets
and tasks that are not object-centric, such as in digital pathology. While
self-supervised learning has started to be explored within this area with
encouraging results, there is reason to look closer at how this setting differs
from natural images and ImageNet. In this paper we present an in-depth
analysis of contrastive learning for histopathology, pinpointing how the
contrastive objective behaves differently due to the characteristics of
histopathology data. We bring forward a number of considerations, such as view
generation for
the contrastive objective and hyper-parameter tuning. In a large battery of
experiments, we analyze how the downstream performance in tissue classification
will be affected by these considerations. The results show that contrastive
learning can reduce the annotation effort within digital pathology, but that
the specific dataset characteristics need to be considered. To take full
advantage of the contrastive learning objective, different calibrations of view
generation and hyper-parameters are required. Our results pave the way for
realizing the full potential of self-supervised learning for histopathology
applications.
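To make the contrastive objective concrete: the dominant formulation, used by
SimCLR and related methods, generates two augmented views of each image and
trains the encoder so that views of the same patch attract while views of
different patches repel, via the NT-Xent loss. Below is a minimal sketch in
PyTorch; the encoder is left generic, and the augmentation pipeline is an
illustrative stand-in for the view-generation calibration the abstract argues
must be adapted to histopathology (e.g., how much color distortion stained
tissue tolerates).

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Illustrative view generation. Histopathology patches have no canonical
# orientation, so flips are "free"; aggressive color jitter, by contrast,
# may destroy stain information -- the calibration question raised above.
make_view = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(0.2, 0.2, 0.2, 0.05),
    transforms.ToTensor(),
])

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR) loss for N paired view embeddings z1, z2 of shape (N, D)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N) similarities
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    # The positive for row i is the other view of the same patch.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```

The temperature and augmentation strengths are examples of the
hyper-parameters that, per the abstract, may need recalibration for
histopathology rather than being inherited from ImageNet settings.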
Related papers
- Multi-organ Self-supervised Contrastive Learning for Breast Lesion Segmentation [0.0]
This paper employs multi-organ datasets for pre-training models tailored to specific organ-related target tasks.
Our target task is breast tumour segmentation in ultrasound images.
Results show that conventional contrastive learning pre-training improves performance compared to supervised baseline approaches.
arXiv Detail & Related papers (2024-02-21T20:29:21Z)
- Graph Self-Supervised Learning for Endoscopic Image Matching [1.8275108630751844]
We propose a novel self-supervised approach that combines Convolutional Neural Networks for capturing local visual appearance and attention-based Graph Neural Networks for modeling spatial relationships between key-points.
Our approach is trained in a fully self-supervised scheme without the need for labeled data.
Our approach outperforms state-of-the-art handcrafted and deep learning-based methods, demonstrating exceptional performance in terms of precision rate (1.0) and matching score (99.3%).
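As a rough illustration of combining a CNN for local appearance with
attention over key-points (a hypothetical sketch, not the authors'
architecture; standard multi-head attention stands in for the attention-based
GNN):

```python
import torch
import torch.nn as nn

class KeypointMatcher(nn.Module):
    """Hypothetical sketch: CNN descriptors per key-point patch, refined by
    attention across key-points, matched via pairwise similarity."""
    def __init__(self, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(          # local appearance, 32x32 crops
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, patches_a, patches_b):
        # patches_*: (K, 3, 32, 32) crops around detected key-points
        fa, fb = self.cnn(patches_a)[None], self.cnn(patches_b)[None]
        fa, _ = self.attn(fa, fa, fa)      # spatial context within image A
        fb, _ = self.attn(fb, fb, fb)      # spatial context within image B
        return fa[0] @ fb[0].t()           # (K_a, K_b) matching scores
```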
arXiv Detail & Related papers (2023-06-19T19:53:41Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task- and class-incremental learning of diseases addresses the problem of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology [5.164102666113966]
We search for good representations in pathology by training a variety of self-supervised models and validating them on weakly-supervised and patch-level tasks.
Our key finding is that Vision Transformers trained with DINO-based knowledge distillation learn data-efficient and interpretable features in histology images.
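As an illustration of DINO-style distillation: a student network is trained
to match the output distribution of a momentum (EMA) teacher on different
views, with centering and temperature sharpening to prevent collapse. A
minimal, hedged sketch in PyTorch; the momentum and temperature values are
commonly published defaults, not settings verified against this paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    """Teacher weights track an exponential moving average of the student."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def dino_loss(student_out, teacher_out, center, t_student=0.1, t_teacher=0.04):
    """Cross-entropy between the sharpened, centered teacher distribution
    and the student distribution; centering counteracts collapse."""
    teacher_probs = F.softmax((teacher_out - center) / t_teacher, dim=-1).detach()
    student_logp = F.log_softmax(student_out / t_student, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()
```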
arXiv Detail & Related papers (2022-03-01T16:14:41Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that ImageNet-pretrained models show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited, and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- Self supervised contrastive learning for digital histopathology [0.0]
We use a contrastive self-supervised learning method called SimCLR that achieved state-of-the-art results on natural-scene images.
We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features.
Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks.
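The linear evaluation protocol mentioned here freezes the pretrained encoder
and fits only a linear classifier on its features. A minimal sketch with
scikit-learn; the synthetic `features` and `labels` are placeholders for
embeddings extracted from histopathology patches with the frozen network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholders: in practice, features come from the frozen pretrained
# encoder and labels are the tissue classes of the corresponding patches.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))   # (num_patches, embedding_dim)
labels = rng.integers(0, 4, size=1000)    # (num_patches,)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)
probe = LogisticRegression(max_iter=1000)  # the linear classifier "on top"
probe.fit(X_tr, y_tr)
print(f"linear-probe accuracy: {probe.score(X_te, y_te):.3f}")
```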
arXiv Detail & Related papers (2020-11-27T19:18:45Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)
- Supervision and Source Domain Impact on Representation Learning: A Histopathology Case Study [6.762603053858596]
In this work, we explored representation learning with a deep neural network trained using a triplet loss.
We investigated the notion of similarity and dissimilarity in pathology whole-slide images and compared different setups from unsupervised and semi-supervised to supervised learning.
We achieved high accuracy and generalization when the learned representations were applied to two different pathology datasets.
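For reference, the triplet loss pulls an anchor embedding toward a positive
(similar) patch and pushes it away from a negative (dissimilar) one by at
least a margin; how positives and negatives are chosen encodes exactly the
similarity notion investigated here. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss on Euclidean distances between patch embeddings (N x D)."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```

PyTorch also ships this formulation as torch.nn.TripletMarginLoss.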
arXiv Detail & Related papers (2020-05-10T21:27:38Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
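The combination of uncertainty and distribution coverage can be sketched as
a greedy k-center (coreset) selection weighted by predictive uncertainty.
This is a hedged illustration of the general recipe, not the authors' exact
algorithm:

```python
import numpy as np

def select_batch(embeddings, uncertainty, labeled_idx, budget):
    """Greedy k-center selection scaled by per-sample uncertainty.
    embeddings: (N, D) features; uncertainty: (N,), e.g. predictive entropy."""
    # Distance from every sample to its nearest labeled sample.
    dists = np.min(
        np.linalg.norm(embeddings[:, None] - embeddings[labeled_idx], axis=2),
        axis=1)
    picked = []
    for _ in range(budget):
        scores = dists * uncertainty   # far from the labeled set AND uncertain
        i = int(np.argmax(scores))
        picked.append(i)
        # The new pick now also covers its neighbourhood.
        dists = np.minimum(dists, np.linalg.norm(embeddings - embeddings[i], axis=1))
    return picked
```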
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.