SPPNet: A Single-Point Prompt Network for Nuclei Image Segmentation
- URL: http://arxiv.org/abs/2308.12231v1
- Date: Wed, 23 Aug 2023 16:13:58 GMT
- Title: SPPNet: A Single-Point Prompt Network for Nuclei Image Segmentation
- Authors: Qing Xu, Wenwei Kuang, Zeyu Zhang, Xueyao Bao, Haoran Chen, Wenting
Duan
- Abstract summary: A single-point prompt network (SPPNet) is proposed for nuclei image segmentation.
We replace the original image encoder with a lightweight vision transformer.
The proposed model is evaluated on the MoNuSeg-2018 dataset.
- Score: 6.149725843029721
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image segmentation plays an essential role in nuclei image analysis.
Recently, the segment anything model has made a significant breakthrough in
such tasks. However, the current model has two major issues for cell
segmentation: (1) the image encoder of the segment anything model involves a
large number of parameters, so retraining or even fine-tuning the model still
requires expensive computational resources; (2) in point prompt mode, points
are sampled from the center of the ground truth, and more than one set of points
is expected to achieve reliable performance, which is not efficient for
practical applications. In this paper, a single-point prompt network is
proposed for nuclei image segmentation, called SPPNet. We replace the original
image encoder with a lightweight vision transformer. Also, an effective
convolutional block is added in parallel to extract the low-level semantic
information from the image and compensate for the performance degradation due
to the small image encoder. We propose a new point-sampling method based on the
Gaussian kernel. The proposed model is evaluated on the MoNuSeg-2018 dataset.
The results demonstrate that SPPNet outperforms existing U-shape architectures
and shows faster convergence in training. Compared to the segment anything
model, SPPNet achieves roughly 20 times faster inference with about 1/70 of the
parameters and computational cost. Notably, only one set of points is required in both
the training and inference phases, which is more reasonable for clinical
applications. The code for our work and more technical details can be found at
https://github.com/xq141839/SPPNet.
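The abstract describes the Gaussian-kernel point sampling only at a high level, so below is a minimal sketch of one plausible reading: the single prompt point is drawn from pixels inside a nucleus mask, weighted by a Gaussian centered on the mask centroid, so the point stays near the center without being deterministic. The function name `sample_point_gaussian`, the use of the centroid, and the fixed `sigma` are assumptions for illustration, not the authors' exact implementation (see the linked repository for that).

```python
import numpy as np

def sample_point_gaussian(mask: np.ndarray, sigma: float = 3.0,
                          rng: np.random.Generator | None = None) -> tuple[int, int]:
    """Sample a single prompt point from a binary nucleus mask.

    Instead of always taking the exact mask center, foreground pixel
    coordinates are weighted by a Gaussian kernel centered on the mask
    centroid, so the sampled point stays near the center but can vary.
    """
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(mask)                   # coordinates of foreground pixels
    if len(ys) == 0:
        raise ValueError("mask has no foreground pixels")
    cy, cx = ys.mean(), xs.mean()               # mask centroid
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2        # squared distance to centroid
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian weights
    w /= w.sum()
    idx = rng.choice(len(ys), p=w)              # draw one foreground pixel
    return int(ys[idx]), int(xs[idx])           # (row, col) prompt point
```

In training, a fresh point could be drawn each epoch as a light form of augmentation; at inference a single draw (or the centroid itself) would suffice, matching the single-point prompt setting described above.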
Related papers
- Filling Missing Values Matters for Range Image-Based Point Cloud Segmentation [12.62718910894575]
Point cloud segmentation (PCS) plays an essential role in robot perception and navigation tasks.
To efficiently understand large-scale outdoor point clouds, their range image representation is commonly adopted.
However, undesirable missing values in the range images damage the shapes and patterns of objects.
This problem creates difficulty for the models in learning coherent and complete geometric information from the objects.
arXiv Detail & Related papers (2024-05-16T15:13:42Z)
- Lidar Annotation Is All You Need [0.0]
This paper aims to improve the efficiency of image segmentation using a convolutional neural network in a multi-sensor setup.
The key innovation of our approach is the masked loss, addressing sparse ground-truth masks from point clouds.
Experimental validation of the approach on benchmark datasets shows comparable performance to a high-quality image segmentation model.
arXiv Detail & Related papers (2023-11-08T15:55:18Z)
- Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation [72.27914940012423]
We investigate efficient tuning for referring image segmentation.
We propose a novel adapter called Bridger to facilitate cross-modal information exchange.
We also design a lightweight decoder for image segmentation.
arXiv Detail & Related papers (2023-07-21T12:46:15Z)
- Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework [70.18084425770091]
Deep neural networks have been widely applied in nuclei instance segmentation of H&E stained pathology images.
It is inefficient and unnecessary to label all pixels for a dataset of nuclei images, which usually contain similar and redundant patterns.
We propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner.
arXiv Detail & Related papers (2022-12-20T14:53:26Z)
- Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z)
- SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification due to incomplete construction of sample pairs with the foreground prototype only.
arXiv Detail & Related papers (2021-04-19T11:21:47Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Meta-DRN: Meta-Learning for 1-Shot Image Segmentation [0.12691047660244334]
We propose a novel lightweight CNN architecture for 1-shot image segmentation.
We train our model using 4 meta-learning algorithms that have worked well for image classification and compare the results.
arXiv Detail & Related papers (2020-08-01T11:23:37Z)
- DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation [1.6416058750198184]
DoubleU-Net is a combination of two U-Net architectures stacked on top of each other.
We have evaluated DoubleU-Net using four medical segmentation datasets.
arXiv Detail & Related papers (2020-06-08T18:38:24Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)