Multi-Layer Pseudo-Supervision for Histopathology Tissue Semantic
Segmentation using Patch-level Classification Labels
- URL: http://arxiv.org/abs/2110.08048v1
- Date: Thu, 14 Oct 2021 08:02:07 GMT
- Title: Multi-Layer Pseudo-Supervision for Histopathology Tissue Semantic
Segmentation using Patch-level Classification Labels
- Authors: Chu Han, Jiatai Lin, Jinhai Mai, Yi Wang, Qingling Zhang, Bingchao
Zhao, Xin Chen, Xipeng Pan, Zhenwei Shi, Xiaowei Xu, Su Yao, Lixu Yan, Huan
Lin, Zeyan Xu, Xiaomei Huang, Guoqiang Han, Changhong Liang, Zaiyi Liu
- Abstract summary: In this paper, we use only patch-level classification labels to achieve tissue semantic segmentation on histopathology images.
Several technical novelties have been proposed to reduce the information gap between pixel-level and patch-level annotations.
Our proposed model outperforms two state-of-the-art WSSS approaches.
- Score: 26.349051136954195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tissue-level semantic segmentation is a vital step in computational
pathology. Fully-supervised models have already achieved outstanding
performance with dense pixel-level annotations. However, drawing such labels on
the giga-pixel whole slide images is extremely expensive and time-consuming. In
this paper, we use only patch-level classification labels to achieve tissue
semantic segmentation on histopathology images, finally reducing the annotation
efforts. We propose a two-step model comprising a classification phase and a
segmentation phase. In the classification phase, a CAM-based model generates
pseudo masks from patch-level labels. In the segmentation phase, we achieve
tissue semantic segmentation via our proposed Multi-Layer Pseudo-Supervision.
Several technical novelties are proposed to reduce
the information gap between pixel-level and patch-level annotations. As a part
of this paper, we introduced a new weakly-supervised semantic segmentation
(WSSS) dataset for lung adenocarcinoma (LUAD-HistoSeg). We conducted several
experiments to evaluate our proposed model on two datasets. Our proposed model
outperforms two state-of-the-art WSSS approaches. Notably, it achieves
quantitative and qualitative results comparable to the fully-supervised model,
with only around a 2% gap in MIoU and FwIoU. Compared with manual labeling, our
model reduces annotation time from hours to minutes.
The source code is available at: \url{https://github.com/ChuHan89/WSSS-Tissue}.
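The paper itself details the CAM-based pseudo-mask generation; as a rough illustration of the general idea (not the authors' exact method), the sketch below shows how class activation maps from a patch classifier can be turned into pixel-level pseudo masks. All names, shapes, and the threshold value are illustrative assumptions.

```python
import numpy as np

def cam_pseudo_mask(features, fc_weights, threshold=0.3):
    """Derive a pixel-level pseudo mask from patch-level class scores.

    features:   (C, H, W) feature maps from the classifier's last conv layer
    fc_weights: (K, C) classifier weights, one row per tissue class
    Returns an (H, W) integer mask; -1 marks low-confidence pixels.
    """
    # CAM_k = sum_c w_kc * F_c  -> one activation map per class
    cams = np.einsum("kc,chw->khw", fc_weights, features)
    # Normalize each CAM to [0, 1] so one threshold applies to every class
    cams -= cams.min(axis=(1, 2), keepdims=True)
    cams /= cams.max(axis=(1, 2), keepdims=True) + 1e-8
    mask = cams.argmax(axis=0)
    # Suppress pixels where no class is sufficiently activated
    mask[cams.max(axis=0) < threshold] = -1
    return mask

# Toy example: 4 feature channels, 2 classes, 8x8 spatial grid
rng = np.random.default_rng(0)
features = rng.random((4, 8, 8))
fc_weights = rng.random((2, 4))
mask = cam_pseudo_mask(features, fc_weights)
```

In practice such masks are upsampled to the patch resolution and used as the pseudo ground truth for training the segmentation network.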
Related papers
- Unsupervised Universal Image Segmentation [59.0383635597103]
We propose an Unsupervised Universal model (U2Seg) adept at performing various image segmentation tasks.
U2Seg generates pseudo semantic labels for these segmentation tasks via leveraging self-supervised models.
We then self-train the model on these pseudo semantic labels, yielding substantial performance gains.
arXiv Detail & Related papers (2023-12-28T18:59:04Z) - Dataset Diffusion: Diffusion-based Synthetic Dataset Generation for
Pixel-Level Semantic Segmentation [6.82236459614491]
We propose a novel method for generating pixel-level semantic segmentation labels using the text-to-image generative model Stable Diffusion.
By utilizing the text prompts, cross-attention, and self-attention of SD, we introduce three new techniques: class-prompt appending, class-prompt cross-attention, and self-attention exponentiation.
These techniques enable us to generate segmentation maps corresponding to synthetic images.
arXiv Detail & Related papers (2023-09-25T17:19:26Z) - Pointly-Supervised Panoptic Segmentation [106.68888377104886]
We propose a new approach to applying point-level annotations for weakly-supervised panoptic segmentation.
Instead of the dense pixel-level labels used by fully supervised methods, point-level labels only provide a single point for each target as supervision.
We formulate the problem in an end-to-end framework by simultaneously generating panoptic pseudo-masks from point-level labels and learning from them.
arXiv Detail & Related papers (2022-10-25T12:03:51Z) - Novel Class Discovery in Semantic Segmentation [104.30729847367104]
We introduce a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS).
It aims at segmenting unlabeled images containing new classes given prior knowledge from a labeled set of disjoint classes.
In NCDSS, we need to distinguish the objects and background, and to handle the existence of multiple classes within an image.
We propose the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels.
arXiv Detail & Related papers (2021-12-03T13:31:59Z) - Reference-guided Pseudo-Label Generation for Medical Semantic
Segmentation [25.76014072179711]
We propose a novel approach to generate supervision for semi-supervised semantic segmentation.
We use a small number of labeled images as reference material and match pixels in an unlabeled image to the semantics of the best fitting pixel in a reference set.
We achieve the same performance as a standard fully supervised model on X-ray anatomy segmentation, albeit with 95% fewer labeled images.
arXiv Detail & Related papers (2021-12-01T12:21:24Z) - Weakly Supervised Medical Image Segmentation [2.355970984550866]
We propose a novel approach for few-shot semantic segmentation with sparse labeled images.
We use sparse labels in the meta-training and dense labels in the meta-test, thus making the model learn to predict dense labels from sparse ones.
arXiv Detail & Related papers (2021-08-12T00:15:47Z) - Segmenter: Transformer for Semantic Segmentation [79.9887988699159]
We introduce Segmenter, a transformer model for semantic segmentation.
We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation.
It outperforms the state of the art on the challenging ADE20K dataset and performs on-par on Pascal Context and Cityscapes.
arXiv Detail & Related papers (2021-05-12T13:01:44Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Learning Whole-Slide Segmentation from Inexact and Incomplete Labels
using Tissue Graphs [11.315178576537768]
We propose SegGini, a weakly supervised semantic segmentation method using graphs.
SegGini segments arbitrarily large images, scaling from tissue microarrays (TMAs) to whole slide images (WSIs).
arXiv Detail & Related papers (2021-03-04T16:04:24Z) - Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z) - Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray
Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.