Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework
- URL: http://arxiv.org/abs/2212.10305v1
- Date: Tue, 20 Dec 2022 14:53:26 GMT
- Title: Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework
- Authors: Wei Lou, Haofeng Li, Guanbin Li, Xiaoguang Han, Xiang Wan
- Abstract summary: Deep neural networks have been widely applied in nuclei instance segmentation of H&E stained pathology images.
It is inefficient and unnecessary to label all pixels in a dataset of nuclei images, which usually contain similar and redundant patterns.
We propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner.
- Score: 70.18084425770091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently deep neural networks, which require a large amount of annotated
samples, have been widely applied in nuclei instance segmentation of H&E
stained pathology images. However, it is inefficient and unnecessary to label
all pixels in a dataset of nuclei images, which usually contain similar and
redundant patterns. Although unsupervised and semi-supervised learning methods
have been studied for nuclei segmentation, very few works have delved into the
selective labeling of samples to reduce the workload of annotation. Thus, in
this paper, we propose a novel full nuclei segmentation framework that chooses
only a few image patches to be annotated, augments the training set from the
selected samples, and achieves nuclei segmentation in a semi-supervised manner.
In the proposed framework, we first develop a novel consistency-based patch
selection method to determine which image patches are the most beneficial to
the training. Then we introduce a conditional single-image GAN with a
component-wise discriminator, to synthesize more training samples. Lastly, our
proposed framework trains an existing segmentation model with the above
augmented samples. The experimental results show that our proposed method can
achieve the same level of performance as a fully supervised baseline while
annotating less than 5% of the pixels on some benchmarks.
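The abstract's first step, consistency-based patch selection, can be illustrated with a minimal sketch. This is not the paper's exact algorithm: here we assume patches whose predictions disagree most across flip augmentations are the most beneficial to annotate, and `predict` is a hypothetical stand-in for a pretrained segmentation model (its positional bias is purely illustrative, so that augmented views can actually disagree).

```python
# Hedged sketch of consistency-based patch selection: score each unlabeled
# patch by prediction disagreement under flip augmentations, then pick the
# top-k least consistent patches for annotation.
import numpy as np

rng = np.random.default_rng(0)

def predict(patch):
    # Hypothetical stand-in model: a thresholding with a positional bias,
    # so that its output depends on orientation and views can disagree.
    h, w = patch.shape
    bias = np.linspace(0.0, 0.2, w)[None, :]
    return (patch + bias > patch.mean()).astype(float)

def consistency_score(patch):
    """Mean pixel-wise variance of predictions across flip augmentations."""
    views = [patch, np.fliplr(patch), np.flipud(patch)]
    invert = [lambda p: p, np.fliplr, np.flipud]  # map predictions back
    preds = [inv(predict(v)) for v, inv in zip(views, invert)]
    return float(np.stack(preds).var(axis=0).mean())

def select_patches(patches, k):
    """Return indices of the k patches with the least consistent predictions."""
    scores = [consistency_score(p) for p in patches]
    return sorted(range(len(patches)), key=lambda i: scores[i], reverse=True)[:k]

patches = [rng.random((64, 64)) for _ in range(20)]
chosen = select_patches(patches, k=3)
```

The selected indices would then be sent for annotation, and (per the abstract) the annotated patches would seed a single-image GAN to synthesize further training samples.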
Related papers
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that by augmenting a 10%-labeled real dataset with synthetic samples, one can achieve comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z)
- Cyclic Learning: Bridging Image-level Labels and Nuclei Instance Segmentation [19.526504045149895]
We propose a novel image-level weakly supervised method, called cyclic learning, to solve this problem.
Cyclic learning comprises a front-end classification task and a back-end semi-supervised instance segmentation task.
Experiments on three datasets demonstrate the good generality of our method, which outperforms other image-level weakly supervised methods for nuclei instance segmentation.
arXiv Detail & Related papers (2023-06-05T08:32:12Z)
- Learning to Annotate Part Segmentation with Gradient Matching [58.100715754135685]
This paper focuses on tackling semi-supervised part segmentation tasks by generating high-quality images with a pre-trained GAN.
In particular, we formulate the annotator learning as a learning-to-learn problem.
We show that our method can learn annotators from a broad range of labelled images including real images, generated images, and even analytically rendered images.
arXiv Detail & Related papers (2022-11-06T01:29:22Z)
- SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification because sample pairs are constructed with the foreground prototype only.
arXiv Detail & Related papers (2021-04-19T11:21:47Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Uncertainty guided semi-supervised segmentation of retinal layers in OCT images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning method based on a student-teacher approach for training the segmentation network.
The proposed framework is a key contribution and is applicable to biomedical image segmentation across various imaging modalities.
arXiv Detail & Related papers (2021-03-02T23:14:25Z)
- Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images [51.893494939675314]
We propose a novel weakly supervised segmentation framework based on partial points annotation.
We show that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods.
arXiv Detail & Related papers (2020-07-10T15:41:29Z)
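Several of the semi-supervised methods above rely on a student-teacher scheme, in which a teacher model whose weights are an exponential moving average (EMA) of the student's supplies targets on unlabeled data. The sketch below shows that idea in its simplest form; the tiny linear "segmenter", the synthetic data, and all hyperparameters are illustrative assumptions, not any paper's actual setup.

```python
# Minimal mean-teacher sketch: the student is trained with a supervised loss
# on labeled data plus a consistency loss toward the teacher's predictions on
# unlabeled data; the teacher tracks the student via an EMA of its weights.
import numpy as np

rng = np.random.default_rng(0)
d = 8
w_student = rng.normal(size=d)
w_teacher = w_student.copy()

x_lab = rng.normal(size=(32, d))   # labeled inputs
y_lab = rng.normal(size=32)        # labels
x_unl = rng.normal(size=(64, d))   # unlabeled inputs

lr, ema, lam = 0.05, 0.99, 0.1     # illustrative hyperparameters
mse0 = float(np.mean((x_lab @ w_student - y_lab) ** 2))

for _ in range(200):
    # supervised gradient: MSE on labeled data
    g_sup = x_lab.T @ (x_lab @ w_student - y_lab) / len(x_lab)
    # consistency gradient: match teacher predictions on unlabeled data
    g_con = x_unl.T @ (x_unl @ w_student - x_unl @ w_teacher) / len(x_unl)
    w_student -= lr * (g_sup + lam * g_con)
    # teacher weights follow the student via EMA
    w_teacher = ema * w_teacher + (1 - ema) * w_student

mse = float(np.mean((x_lab @ w_student - y_lab) ** 2))
```

In the real segmentation setting, the linear model would be a segmentation network, the consistency term would compare pixel-wise predictions (often weighted by an uncertainty estimate, as in the OCT paper above), and the EMA teacher would also be the model used at test time.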
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.