Learning with less labels in Digital Pathology via Scribble Supervision from natural images
- URL: http://arxiv.org/abs/2201.02627v1
- Date: Fri, 7 Jan 2022 18:12:34 GMT
- Title: Learning with less labels in Digital Pathology via Scribble Supervision from natural images
- Authors: Eu Wern Teh, Graham W. Taylor
- Abstract summary: Cross-domain transfer learning from the natural image domain (NI) to the Digital Pathology domain is shown to be successful.
We show that models trained with scribble labels yield the same performance boost as full pixel-wise segmentation labels.
- Score: 20.298424229156506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A critical challenge of training deep learning models in the Digital
Pathology (DP) domain is the high annotation cost by medical experts. One way
to tackle this issue is via transfer learning from the natural image domain
(NI), where the annotation cost is considerably cheaper. Cross-domain transfer
learning from NI to DP is shown to be successful via class
labels [teh2020learning]. One potential weakness of relying on class
labels is the lack of spatial information, which can be obtained from spatial
labels such as full pixel-wise segmentation labels and scribble labels. We
demonstrate that scribble labels from NI domain can boost the performance of DP
models on two cancer classification datasets (Patch Camelyon Breast Cancer and
Colorectal Cancer dataset). Furthermore, we show that models trained with
scribble labels yield the same performance boost as full pixel-wise
segmentation labels despite being significantly easier and faster to collect.
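The core mechanism the abstract describes is that scribbles supply sparse spatial supervision: a segmentation loss is evaluated only at the handful of annotated pixels, while all other pixels are ignored. A minimal sketch of such a masked loss is below; this is an illustrative NumPy implementation assuming dense per-pixel logits and an `ignore_index` convention for unannotated pixels, not the authors' actual training code.

```python
import numpy as np

def scribble_cross_entropy(logits, scribbles, ignore_index=-1):
    """Cross-entropy evaluated only at scribble-annotated pixels.

    logits:    (H, W, C) raw per-pixel class scores.
    scribbles: (H, W) integer labels; `ignore_index` marks pixels the
               annotator never touched.
    Returns the mean loss over annotated pixels (0.0 if none exist).
    """
    # Numerically stable log-softmax over the class axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

    mask = scribbles != ignore_index
    if not mask.any():
        return 0.0
    labels = scribbles[mask]                    # (N,) labels on the scribbles
    selected = log_probs[mask]                  # (N, C) log-probs there
    picked = selected[np.arange(labels.size), labels]  # true-class log-prob
    return float(-picked.mean())
```

In the cross-domain setting described above, a loss like this would be used while pretraining on scribble-labeled natural images, after which the backbone is transferred to the pathology classification task; gradients flow only from the scribbled pixels, which is why the annotation is so much cheaper than full masks.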
Related papers
- Few-shot Class-Incremental Semantic Segmentation via Pseudo-Labeling and Knowledge Distillation [3.4436201325139737]
We address the problem of learning new classes for semantic segmentation models from few examples.
For learning from limited data, we propose a pseudo-labeling strategy to augment the few-shot training annotations.
We integrate the above steps into a single convolutional neural network with a unified learning objective.
arXiv Detail & Related papers (2023-08-05T05:05:37Z)
- Distilling Self-Supervised Vision Transformers for Weakly-Supervised Few-Shot Classification & Segmentation [58.03255076119459]
We address the task of weakly-supervised few-shot image classification and segmentation by leveraging a Vision Transformer (ViT).
Our proposed method takes token representations from the self-supervised ViT and leverages their correlations, via self-attention, to produce classification and segmentation predictions.
Experiments on Pascal-5i and COCO-20i demonstrate significant performance gains in a variety of supervision settings.
arXiv Detail & Related papers (2023-07-07T06:16:43Z)
- Structured Semantic Transfer for Multi-Label Recognition with Partial Labels [85.6967666661044]
We propose a structured semantic transfer (SST) framework that enables training multi-label recognition models with partial labels.
The framework consists of two complementary transfer modules that explore within-image and cross-image semantic correlations.
Experiments on the Microsoft COCO, Visual Genome and Pascal VOC datasets show that the proposed SST framework obtains superior performance over current state-of-the-art algorithms.
arXiv Detail & Related papers (2021-12-21T02:15:01Z)
- Semi-supervised Contrastive Learning for Label-efficient Medical Image Segmentation [11.935891325600952]
We propose a supervised local contrastive loss that leverages limited pixel-wise annotation to force pixels with the same label to gather around in the embedding space.
With different amounts of labeled data, our methods consistently outperform the state-of-the-art contrast-based methods and other semi-supervised learning techniques.
arXiv Detail & Related papers (2021-09-15T16:23:48Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Knowledge-Guided Multi-Label Few-Shot Learning for General Image Recognition [75.44233392355711]
The KGGR framework exploits prior knowledge of statistical label correlations with deep neural networks.
It first builds a structured knowledge graph to correlate different labels based on statistical label co-occurrence.
Then, it introduces the label semantics to guide learning semantic-specific features.
It exploits a graph propagation network to explore graph node interactions.
arXiv Detail & Related papers (2020-09-20T15:05:29Z)
- Superpixel-Guided Label Softening for Medical Image Segmentation [31.989873877526424]
We propose superpixel-based label softening for medical image segmentation.
We show that this method achieves superior overall segmentation performance compared to baseline and comparison methods on both 3D and 2D medical images.
arXiv Detail & Related papers (2020-07-17T10:55:59Z)
- PCAMs: Weakly Supervised Semantic Segmentation Using Point Supervision [12.284208932393073]
This paper presents a novel procedure for producing semantic segmentation from images given some point level annotations.
We propose training a CNN that is normally fully supervised using our pseudo labels in place of ground truth labels.
Our method achieves state-of-the-art results for point-supervised semantic segmentation on the PASCAL VOC 2012 dataset [everingham2010pascal], even outperforming state-of-the-art methods for stronger bounding box and squiggle supervision.
arXiv Detail & Related papers (2020-07-10T21:25:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.