A Histopathology Study Comparing Contrastive Semi-Supervised and Fully
Supervised Learning
- URL: http://arxiv.org/abs/2111.05882v1
- Date: Wed, 10 Nov 2021 19:04:08 GMT
- Title: A Histopathology Study Comparing Contrastive Semi-Supervised and Fully
Supervised Learning
- Authors: Lantian Zhang (1 and 2), Mohamed Amgad (2), Lee A.D. Cooper (2) ((1)
North Shore Country Day, Winnetka, IL, USA, (2) Department of Pathology,
Northwestern University, Chicago, IL, USA)
- Abstract summary: We explore self-supervised learning to reduce labeling burdens in computational pathology.
We find that ImageNet pre-trained networks largely outperform the self-supervised representations obtained using Barlow Twins.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data labeling is often the most challenging task when developing
computational pathology models. Pathologist participation is necessary to
generate accurate labels, and the limitations on pathologist time and the demand
for large, labeled datasets have led to research in areas including weakly
supervised learning using patient-level labels, machine-assisted annotation, and
active learning. In this paper we explore self-supervised learning to reduce
labeling burdens in computational pathology. We explore this in the context of
classification of breast cancer tissue using the Barlow Twins approach, and we
compare self-supervision with alternatives like pre-trained networks in
low-data scenarios. For the task explored in this paper, we find that ImageNet
pre-trained networks largely outperform the self-supervised representations
obtained using Barlow Twins.
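The Barlow Twins objective referenced in the abstract drives the cross-correlation matrix between embeddings of two augmented views toward the identity: on-diagonal terms enforce invariance, off-diagonal terms reduce redundancy. A minimal NumPy sketch of that loss (not the authors' implementation; the trade-off weight `lam` follows the 5e-3 value commonly used for Barlow Twins, and the epsilon is an assumed numerical guard):

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss: push the cross-correlation matrix of two
    views' embeddings toward the identity matrix.

    z_a, z_b: (batch, dim) embeddings of two augmentations of the same images.
    lam: weight on the off-diagonal (redundancy-reduction) term.
    """
    n, _ = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + 1e-8)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + 1e-8)
    c = z_a.T @ z_b / n  # (dim, dim) cross-correlation matrix
    on_diag = ((1.0 - np.diag(c)) ** 2).sum()            # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag
```

For identical views the diagonal of `c` is near 1, so the invariance term vanishes and only residual feature correlations are penalized.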
Related papers
- A Review of Pseudo-Labeling for Computer Vision [2.79239659248295]
Deep neural networks often require large datasets of labeled samples to generalize effectively.
An important area of active research is semi-supervised learning, which attempts to instead utilize large quantities of (easily acquired) unlabeled samples.
In this work we explore a broader interpretation of pseudo-labels within both self-supervised and unsupervised methods.
arXiv Detail & Related papers (2024-08-13T22:17:48Z)
- Automated Labeling of German Chest X-Ray Radiology Reports using Deep Learning [50.591267188664666]
We propose a deep learning-based CheXpert label prediction model, pre-trained on reports labeled by a rule-based German CheXpert model.
Our results demonstrate the effectiveness of our approach, which significantly outperformed the rule-based model on all three tasks.
arXiv Detail & Related papers (2023-06-09T16:08:35Z)
- Weakly Supervised Intracranial Hemorrhage Segmentation using Head-Wise Gradient-Infused Self-Attention Maps from a Swin Transformer in Categorical Learning [0.6269243524465492]
Intracranial hemorrhage (ICH) is a life-threatening medical emergency that requires timely diagnosis and accurate treatment.
Deep learning techniques have emerged as the leading approach for medical image analysis and processing.
We introduce a novel weakly supervised method for ICH segmentation, utilizing a Swin transformer trained on an ICH classification task with categorical labels.
arXiv Detail & Related papers (2023-04-11T00:17:34Z)
- Human-machine Interactive Tissue Prototype Learning for Label-efficient Histopathology Image Segmentation [18.755759024796216]
Deep neural networks have greatly advanced histopathology image segmentation but usually require abundant data.
We present a label-efficient tissue prototype dictionary building pipeline and propose to use the obtained prototypes to guide histopathology image segmentation.
We show that our human-machine interactive tissue prototype learning method can achieve segmentation performance comparable to fully supervised baselines.
arXiv Detail & Related papers (2022-11-26T06:17:21Z)
- Evaluating the Robustness of Self-Supervised Learning in Medical Imaging [57.20012795524752]
Self-supervision has been demonstrated to be an effective learning strategy when training target tasks on small annotated datasets.
We show that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging.
arXiv Detail & Related papers (2021-05-14T17:49:52Z)
- Self-supervised driven consistency training for annotation efficient histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
- DSAL: Deeply Supervised Active Learning from Strong and Weak Labelers for Biomedical Image Segmentation [13.707848142719424]
We propose a deep active semi-supervised learning framework, DSAL, combining active learning and semi-supervised learning strategies.
In DSAL, a new criterion based on a deep supervision mechanism is proposed to select informative samples with high uncertainty.
We use the proposed criteria to select samples for strong and weak labelers to produce oracle labels and pseudo labels simultaneously at each active learning iteration.
arXiv Detail & Related papers (2021-01-22T11:31:33Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- A Teacher-Student Framework for Semi-supervised Medical Image Segmentation From Mixed Supervision [62.4773770041279]
We develop a semi-supervised learning framework in a teacher-student fashion for organ and lesion segmentation.
We show our model is robust to the quality of bounding boxes and achieves performance comparable to fully supervised learning methods.
arXiv Detail & Related papers (2020-10-23T07:58:20Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to support the cross-attention process and overcome the imbalance between classes and the easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.