CUPre: Cross-domain Unsupervised Pre-training for Few-Shot Cell
Segmentation
- URL: http://arxiv.org/abs/2310.03981v1
- Date: Fri, 6 Oct 2023 02:35:31 GMT
- Title: CUPre: Cross-domain Unsupervised Pre-training for Few-Shot Cell
Segmentation
- Authors: Weibin Liao and Xuhong Li and Qingzhong Wang and Yanwu Xu and
Zhaozheng Yin and Haoyi Xiong
- Abstract summary: This work considers the problem of pre-training models for few-shot cell segmentation, where massive unlabeled cell images are available but only a small proportion is annotated.
We propose Cross-domain Unsupervised Pre-training, namely CUPre, transferring the capability of object detection and instance segmentation for common visual objects to the visual domain of cells using unlabeled images.
Experiments show that CUPre outperforms existing pre-training methods, achieving the highest average precision (AP) for few-shot cell segmentation and detection.
- Score: 36.52664417716791
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While pre-training on object detection tasks, such as Common Objects in
Contexts (COCO) [1], could significantly boost the performance of cell
segmentation, it still requires massive, finely annotated cell images [2], with
bounding boxes, masks, and cell types for every cell in every image, to
fine-tune the pre-trained model. To lower the cost of annotation, this work
considers the problem of pre-training DNN models for few-shot cell
segmentation, where massive unlabeled cell images are available but only a
small proportion is annotated. Hereby, we propose Cross-domain Unsupervised
Pre-training, namely CUPre, transferring the capability of object detection and
instance segmentation for common visual objects (learned from COCO) to the
visual domain of cells using unlabeled images. Given a standard COCO
pre-trained network with backbone, neck, and head modules, CUPre adopts an
alternate multi-task pre-training (AMT2) procedure with two sub-tasks -- in
every iteration of pre-training, AMT2 first trains the backbone with cell
images from multiple cell datasets via unsupervised momentum contrastive
learning (MoCo) [3], and then trains the whole model with vanilla COCO datasets
via instance segmentation. After pre-training, CUPre fine-tunes the whole model
on the cell segmentation task using a few annotated images. We carry out
extensive experiments to evaluate CUPre using LIVECell [2] and BBBC038 [4]
datasets in few-shot instance segmentation settings. The experiments show that
CUPre outperforms existing pre-training methods, achieving the highest
average precision (AP) for few-shot cell segmentation and detection.
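The alternate multi-task pre-training (AMT2) procedure described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the stand-in functions, module names, and loader variables are all assumptions; real training would use MoCo contrastive updates on the backbone and Mask R-CNN-style instance-segmentation losses on the full model.

```python
# Illustrative sketch of the AMT2 alternating loop (assumed structure).
# Stand-in steps replace MoCo contrastive learning on cell images and
# supervised instance-segmentation training on COCO.

def moco_step(backbone_state, cell_batch):
    """Stand-in for one MoCo contrastive update on unlabeled cell images.
    Only the backbone is updated in this phase."""
    backbone_state["updates"] += 1
    return backbone_state

def coco_step(model_state, coco_batch):
    """Stand-in for one supervised instance-segmentation update on COCO.
    Backbone, neck, and head are all updated in this phase."""
    for module in ("backbone", "neck", "head"):
        model_state[module]["updates"] += 1
    return model_state

def amt2_pretrain(model_state, cell_loader, coco_loader, iterations):
    """Each iteration first adapts the backbone to the cell domain
    (unsupervised), then trains the whole model on COCO (supervised)."""
    for i in range(iterations):
        cell_batch = cell_loader[i % len(cell_loader)]
        coco_batch = coco_loader[i % len(coco_loader)]
        model_state["backbone"] = moco_step(model_state["backbone"], cell_batch)
        model_state = coco_step(model_state, coco_batch)
    return model_state

model = {m: {"updates": 0} for m in ("backbone", "neck", "head")}
model = amt2_pretrain(model,
                      cell_loader=[object()] * 4,
                      coco_loader=[object()] * 4,
                      iterations=10)
```

After the loop, the backbone has received an update in both phases of every iteration, while the neck and head are updated only in the COCO phase; fine-tuning on a few annotated cell images would then follow.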
Related papers
- Single-cell Multi-view Clustering via Community Detection with Unknown
Number of Clusters [64.31109141089598]
We introduce scUNC, an innovative multi-view clustering approach tailored for single-cell data.
scUNC seamlessly integrates information from different views without the need for a predefined number of clusters.
We conducted a comprehensive evaluation of scUNC using three distinct single-cell datasets.
arXiv Detail & Related papers (2023-11-28T08:34:58Z)
- Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images [66.79688768141814]
We develop an automatic cell classification pipeline to label microscopy images.
We then train a classification model based on the category labels.
We deploy two types of segmentation models to segment cells with roundish and irregular shapes.
arXiv Detail & Related papers (2023-10-22T08:11:08Z)
- Look in Different Views: Multi-Scheme Regression Guided Cell Instance Segmentation [17.633542802081827]
We propose a novel cell instance segmentation network based on multi-scheme regression guidance.
With multi-scheme regression guidance, the network can look at each cell from different views.
We conduct extensive experiments on benchmark datasets, DSB2018, CA2.5 and SCIS.
arXiv Detail & Related papers (2022-08-17T05:24:59Z)
- Edge-Based Self-Supervision for Semi-Supervised Few-Shot Microscopy Image Cell Segmentation [16.94384366469512]
We propose the prediction of edge-based maps for self-supervising the training of the unlabelled images.
In our experiments, we show that a small number of annotated images, e.g. 10% of the original training set, is enough for our approach to reach performance similar to that obtained with fully annotated databases in 1- to 10-shot settings.
arXiv Detail & Related papers (2022-08-03T14:35:00Z)
- Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z)
- Rectifying the Shortcut Learning of Background: Shared Object Concentration for Few-Shot Image Recognition [101.59989523028264]
Few-Shot image classification aims to utilize pretrained knowledge learned from a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel Few-Shot Learning framework, to automatically figure out foreground objects at both pretraining and evaluation stage.
arXiv Detail & Related papers (2021-07-16T07:46:41Z)
- Classification Beats Regression: Counting of Cells from Greyscale Microscopic Images based on Annotation-free Training Samples [20.91256120719461]
This work proposes a supervised learning framework to count cells from greyscale microscopic images without using annotated training images.
We formulate the cell counting task as an image classification problem, where the cell counts are taken as class labels.
To deal with these limitations, we propose a simple but effective data augmentation (DA) method to synthesize images for the unseen cell counts.
arXiv Detail & Related papers (2020-10-28T06:19:30Z)
- Split and Expand: An inference-time improvement for Weakly Supervised Cell Instance Segmentation [71.50526869670716]
We propose a two-step post-processing procedure, Split and Expand, to improve the conversion of segmentation maps to instances.
In the Split step, we split clumps of cells from the segmentation map into individual cell instances with the guidance of cell-center predictions.
In the Expand step, we find missing small cells using the cell-center predictions.
arXiv Detail & Related papers (2020-07-21T14:05:09Z)
- Cell Segmentation and Tracking using CNN-Based Distance Predictions and a Graph-Based Matching Strategy [0.20999222360659608]
We present a method for the segmentation of touching cells in microscopy images.
By using a novel representation of cell borders, inspired by distance maps, our method can utilize not only touching cells but also close cells in the training process.
This representation is notably robust to annotation errors and shows promising results for the segmentation of microscopy images containing cell types that are underrepresented in, or absent from, the training data.
arXiv Detail & Related papers (2020-04-03T11:55:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.