Human-machine Interactive Tissue Prototype Learning for Label-efficient
Histopathology Image Segmentation
- URL: http://arxiv.org/abs/2211.14491v1
- Date: Sat, 26 Nov 2022 06:17:21 GMT
- Title: Human-machine Interactive Tissue Prototype Learning for Label-efficient
Histopathology Image Segmentation
- Authors: Wentao Pan, Jiangpeng Yan, Hanbo Chen, Jiawei Yang, Zhe Xu, Xiu Li,
Jianhua Yao
- Abstract summary: Deep neural networks have greatly advanced histopathology image segmentation but usually require abundant annotated data.
We present a label-efficient tissue prototype dictionary building pipeline and propose to use the obtained prototypes to guide histopathology image segmentation.
We show that our human-machine interactive tissue prototype learning method can achieve segmentation performance comparable to fully-supervised baselines.
- Score: 18.755759024796216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep neural networks have greatly advanced histopathology image
segmentation but usually require abundant annotated data. However, due to the
gigapixel scale of whole slide images and pathologists' heavy daily workload,
obtaining pixel-level labels for supervised learning in clinical practice is
often infeasible. Alternatively, weakly-supervised segmentation methods have
been explored with less laborious image-level labels, but their performance is
unsatisfactory due to the lack of dense supervision. Inspired by the recent
success of self-supervised learning methods, we present a label-efficient
tissue prototype dictionary building pipeline and propose to use the obtained
prototypes to guide histopathology image segmentation. Particularly, taking
advantage of self-supervised contrastive learning, an encoder is trained to
project the unlabeled histopathology image patches into a discriminative
embedding space where these patches are clustered to identify the tissue
prototypes by efficient pathologists' visual examination. Then, the encoder is
used to map the images into the embedding space and generate pixel-level pseudo
tissue masks by querying the tissue prototype dictionary. Finally, the pseudo
masks are used to train a segmentation network with dense supervision for
better performance. Experiments on two public datasets demonstrate that our
human-machine interactive tissue prototype learning method can achieve
segmentation performance comparable to the fully-supervised baselines with less
annotation burden and outperform other weakly-supervised methods. Code will be
available upon publication.
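The dictionary-querying step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation (their code is not yet released): it assumes each prototype is the L2-normalized mean embedding of a pathologist-confirmed cluster, and that pseudo labels are assigned by nearest prototype under cosine similarity. All function names are hypothetical.

```python
import numpy as np

def build_prototype_dictionary(embeddings, labels):
    """Average the embeddings assigned to each pathologist-confirmed
    tissue cluster into one L2-normalized prototype vector."""
    prototypes = {}
    for tissue in set(labels):
        members = embeddings[np.array(labels) == tissue]
        proto = members.mean(axis=0)
        prototypes[tissue] = proto / np.linalg.norm(proto)
    return prototypes

def query_pseudo_mask(patch_embeddings, prototypes):
    """Assign each patch embedding to its nearest prototype by cosine
    similarity, yielding one pseudo tissue label per patch."""
    names = list(prototypes)
    proto_mat = np.stack([prototypes[n] for n in names])          # (K, D)
    normed = patch_embeddings / np.linalg.norm(
        patch_embeddings, axis=1, keepdims=True)
    sims = normed @ proto_mat.T                                   # (N, K)
    return [names[i] for i in sims.argmax(axis=1)]
```

In the full pipeline, the per-patch labels produced this way would be assembled into dense pseudo masks used to supervise a standard segmentation network.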
Related papers
- Semi-Supervised Semantic Segmentation Based on Pseudo-Labels: A Survey [49.47197748663787]
This review aims to provide a first comprehensive and organized overview of the state-of-the-art research results on pseudo-label methods in the field of semi-supervised semantic segmentation.
In addition, we explore the application of pseudo-label technology in medical and remote-sensing image segmentation.
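As a generic illustration of the pseudo-label idea the survey covers (not any specific method from it), a common building block is confidence-thresholded pseudo-labeling: predictions on unlabeled pixels become training targets only where the model is confident. The threshold value and the ignore-index convention below are assumptions for the sketch.

```python
import numpy as np

def pseudo_label(probs, threshold=0.9):
    """Turn per-pixel class probabilities into pseudo labels.
    Pixels whose top-class probability falls below the threshold are
    marked -1 (ignore) and excluded from the segmentation loss."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = -1
    return labels
```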
arXiv Detail & Related papers (2024-03-04T10:18:38Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm for training semantic segmentation models from sparse query-point annotations.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- Unsupervised Dense Nuclei Detection and Segmentation with Prior Self-activation Map For Histology Images [5.3882963853819845]
We propose a self-supervised learning-based approach with a Prior Self-activation Module (PSM).
PSM generates self-activation maps from the input images to avoid labeling costs and further produce pseudo masks for the downstream task.
Compared with other fully-supervised and weakly-supervised methods, our method can achieve competitive performance without any manual annotations.
arXiv Detail & Related papers (2022-10-14T14:34:26Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
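The affinity-based propagation mentioned above can be illustrated with a generic random-walk refinement of a class activation map (CAM). Note that AuxSegNet learns its affinity map end-to-end from saliency and segmentation features; the sketch below only shows the propagation step, assuming an affinity matrix is already given.

```python
import numpy as np

def propagate_cam(cam, affinity, steps=2):
    """Refine a CAM by random-walk propagation: row-normalize the
    pixel-level affinity matrix into transition probabilities, then
    repeatedly diffuse the class scores across similar pixels."""
    trans = affinity / affinity.sum(axis=1, keepdims=True)
    refined = cam.reshape(-1, cam.shape[-1])   # (H*W, num_classes)
    for _ in range(steps):
        refined = trans @ refined
    return refined.reshape(cam.shape)
```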
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
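The contrastive loss on unlabeled images mentioned above is typically an InfoNCE/NT-Xent objective between two augmented views of the same batch. A minimal NumPy sketch (the actual paper's loss may differ in detail, e.g. in its negative sampling):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss between two views. z1, z2: (N, D) embeddings of
    the same N images under two augmentations; row i of z1 and row i
    of z2 form the positive pair, all other rows act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature   # (N, N) cosine similarities
    # Cross-entropy with the diagonal (positive pairs) as targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls matched views together and pushes other samples apart in the embedding space.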
arXiv Detail & Related papers (2021-05-27T03:27:58Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Learning Whole-Slide Segmentation from Inexact and Incomplete Labels using Tissue Graphs [11.315178576537768]
We propose SegGini, a weakly supervised semantic segmentation method using graphs.
SegGini segments arbitrarily large images, scaling from tissue microarrays (TMAs) to whole slide images (WSIs).
arXiv Detail & Related papers (2021-03-04T16:04:24Z)
- Uncertainty guided semi-supervised segmentation of retinal layers in OCT images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning based on a student-teacher approach for training the segmentation network.
The proposed framework is a key contribution and applicable for biomedical image segmentation across various imaging modalities.
arXiv Detail & Related papers (2021-03-02T23:14:25Z)
- Graph Neural Networks for Unsupervised Domain Adaptation of Histopathological Image Analytics [22.04114134677181]
We present a novel method for unsupervised domain adaptation in histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of labeled images.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without acquiring expensive annotations.
We test our proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.