CellSeg1: Robust Cell Segmentation with One Training Image
- URL: http://arxiv.org/abs/2412.01410v1
- Date: Mon, 02 Dec 2024 11:55:22 GMT
- Title: CellSeg1: Robust Cell Segmentation with One Training Image
- Authors: Peilin Zhou, Bo Du, Yongchao Xu
- Abstract summary: We introduce CellSeg1, a solution for segmenting cells of arbitrary morphology and modality with a few dozen cell annotations in 1 image.
Tested on 19 diverse cell datasets, CellSeg1 trained on 1 image achieved 0.81 average mAP at 0.5 IoU, performing comparably to existing models trained on over 500 images.
- Abstract: Recent trends in cell segmentation have shifted towards universal models to handle diverse cell morphologies and imaging modalities. However, for continuously emerging cell types and imaging techniques, these models still require hundreds or thousands of annotated cells for fine-tuning. We introduce CellSeg1, a practical solution for segmenting cells of arbitrary morphology and modality with a few dozen cell annotations in 1 image. By adopting Low-Rank Adaptation of the Segment Anything Model (SAM), we achieve robust cell segmentation. Tested on 19 diverse cell datasets, CellSeg1 trained on 1 image achieved 0.81 average mAP at 0.5 IoU, performing comparably to existing models trained on over 500 images. It also demonstrated superior generalization in cross-dataset tests on TissueNet. We found that high-quality annotation of a few dozen densely packed cells of varied sizes is key to effective segmentation. CellSeg1 provides an efficient solution for cell segmentation with minimal annotation effort.
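The core mechanism, keeping the pretrained SAM weights frozen and training only small low-rank adapters, can be sketched in a few lines of PyTorch. This is a minimal illustration of Low-Rank Adaptation, not CellSeg1's actual configuration: the `LoRALinear` wrapper, the rank and scaling values, and the `add_lora` injection point (a ViT-style encoder with `blocks[i].attn.qkv` projections) are all assumptions for the sketch.

```python
# Minimal LoRA sketch: a frozen linear layer plus a trainable low-rank
# update, y = W x + (alpha / r) * B A x. Values are illustrative only.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank residual."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():    # freeze pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


def add_lora(encoder: nn.Module, r: int = 4) -> nn.Module:
    # Hypothetical ViT-style layout: inject adapters into every
    # attention qkv projection; only the adapters are then trained.
    for block in encoder.blocks:
        block.attn.qkv = LoRALinear(block.attn.qkv, r=r)
    return encoder
```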
Related papers
- SoftCTM: Cell detection by soft instance segmentation and consideration of cell-tissue interaction
We investigate the impact of ground truth formats on the model's performance.
Cell-tissue interactions are considered by providing tissue segmentation predictions.
We find that a "soft", probability-map instance segmentation ground truth leads to the best model performance.
arXiv Detail & Related papers (2023-12-19T13:33:59Z)
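One way to read SoftCTM's "soft" ground truth is as a blurred version of a hard instance mask, so that pixels near cell borders carry intermediate probabilities. A minimal sketch under that assumption, using Gaussian smoothing; the paper's actual ground-truth construction may differ.

```python
# Illustrative construction of a soft probability-map target from a
# hard instance-label image (0 = background, 1..N = cell ids). An
# approximation of SoftCTM's idea, not its exact recipe.
import numpy as np
from scipy.ndimage import gaussian_filter


def soft_target(labels: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    hard = (labels > 0).astype(np.float32)     # binary foreground mask
    soft = gaussian_filter(hard, sigma=sigma)  # blur hard edges into probs
    return np.clip(soft, 0.0, 1.0)             # keep values in [0, 1]
```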
- CellMixer: Annotation-free Semantic Cell Segmentation of Heterogeneous Cell Populations
We present CellMixer, an innovative annotation-free approach for the semantic segmentation of heterogeneous cell populations.
Our results show that CellMixer can achieve competitive segmentation performance across multiple cell types and imaging modalities.
arXiv Detail & Related papers (2023-12-01T15:50:20Z)
- Single-cell Multi-view Clustering via Community Detection with Unknown Number of Clusters
We introduce scUNC, an innovative multi-view clustering approach tailored for single-cell data.
scUNC seamlessly integrates information from different views without the need for a predefined number of clusters.
We conducted a comprehensive evaluation of scUNC using three distinct single-cell datasets.
arXiv Detail & Related papers (2023-11-28T08:34:58Z)
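The key idea of letting community detection choose the number of clusters can be illustrated on a cell-similarity graph. A sketch assuming a k-nearest-neighbour graph over cell embeddings and modularity-based communities; scUNC's actual algorithm and its multi-view fusion step are more involved.

```python
# Community detection on a kNN graph of cells: the number of clusters
# emerges from modularity optimization rather than being fixed upfront.
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph


def cluster_cells(embeddings: np.ndarray, k: int = 15) -> list:
    # Sparse connectivity graph: each cell linked to its k neighbours.
    adj = kneighbors_graph(embeddings, n_neighbors=k, mode="connectivity")
    graph = nx.from_scipy_sparse_array(adj)
    # Returns one frozenset of node indices per detected community.
    return list(nx.community.greedy_modularity_communities(graph))
```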
- Mixed Models with Multiple Instance Learning
We introduce MixMIL, a framework integrating Generalized Linear Mixed Models (GLMMs) and Multiple Instance Learning (MIL).
Our empirical results reveal that MixMIL outperforms existing MIL models on single-cell datasets.
arXiv Detail & Related papers (2023-11-04T16:42:42Z)
- Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images
We develop an automatic cell classification pipeline to label microscopy images.
We then train a classification model based on the category labels.
We deploy two types of segmentation models to segment cells with roundish and irregular shapes.
arXiv Detail & Related papers (2023-10-22T08:11:08Z)
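The classify-then-route logic of such a pipeline reduces to a small dispatch step: predict the shape category of an image, then apply the matching segmentation model. The classifier, model objects, and the two category names below are hypothetical stand-ins, not the paper's actual components.

```python
# Routing sketch for a multi-stream pipeline: one classifier chooses
# between two shape-specialized segmentation models.
def segment(image, classifier, roundish_model, irregular_model):
    category = classifier.predict(image)  # e.g. "roundish" or "irregular"
    model = roundish_model if category == "roundish" else irregular_model
    return model.segment(image)
```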
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
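The warping step itself can be illustrated with an affine transform applied to a training image. Note the difference from the paper: AC-Former's AAT module learns the transformation parameters, whereas this sketch samples them randomly, with arbitrary ranges.

```python
# Random affine warp of an image tensor, standing in for the *learned*
# warp of AC-Former's Adaptive Affine Transformer. Ranges are arbitrary.
import torch
from torchvision.transforms import functional as TF


def random_affine_warp(image: torch.Tensor) -> torch.Tensor:
    angle = float(torch.empty(1).uniform_(-15, 15))  # rotation, degrees
    shear = float(torch.empty(1).uniform_(-5, 5))    # shear, degrees
    scale = float(torch.empty(1).uniform_(0.9, 1.1)) # isotropic zoom
    return TF.affine(image, angle=angle, translate=[0, 0],
                     scale=scale, shear=[shear])
```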
- Advanced Multi-Microscopic Views Cell Semi-supervised Segmentation
Deep learning (DL) shows powerful potential in cell segmentation tasks, but suffers from poor generalization.
In this paper, we introduce a novel semi-supervised cell segmentation method called Multi-Microscopic-view Cell semi-supervised Segmentation (MMCS).
MMCS can train cell segmentation models well using fewer labeled multi-posture cell images acquired with different microscopes.
It achieves an F1-score of 0.8239, and the running time for all cases is within the time tolerance.
arXiv Detail & Related papers (2023-03-21T08:08:13Z)
- Split and Expand: An inference-time improvement for Weakly Supervised Cell Instance Segmentation
We propose a two-step post-processing procedure, Split and Expand, to improve the conversion of segmentation maps to instances.
In the Split step, we split clumps of cells from the segmentation map into individual cell instances with the guidance of cell-center predictions.
In the Expand step, we find missing small cells using the cell-center predictions.
arXiv Detail & Related papers (2020-07-21T14:05:09Z)
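The Split step maps naturally onto marker-controlled watershed, using the predicted cell centers as markers to separate touching cells. A sketch under that assumption, following the standard watershed recipe rather than the paper's exact procedure.

```python
# "Split" sketch: separate clumped cells in a foreground mask by
# flooding a distance map from predicted cell-center markers.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed


def split_clumps(foreground: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """foreground: bool mask; centers: bool map of predicted cell centers."""
    markers, _ = ndi.label(centers)                    # one marker per center
    distance = ndi.distance_transform_edt(foreground)  # depth inside cells
    # Flood from markers over the inverted distance map, within the mask:
    # each center claims its own basin, splitting touching cells.
    return watershed(-distance, markers, mask=foreground)
```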
- Learning to segment clustered amoeboid cells from brightfield microscopy via multi-task learning with adaptive weight selection
We introduce a novel supervised technique for cell segmentation in a multi-task learning paradigm.
A multi-task loss combining region and cell-boundary detection is employed to improve the prediction efficiency of the network.
We observe an overall Dice score of 0.93 on the validation set, which is an improvement of over 15.9% over a recent unsupervised method and outperforms the popular supervised U-Net algorithm by at least 5.8% on average.
arXiv Detail & Related papers (2020-05-19T11:31:53Z)
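The combined region-and-boundary objective can be sketched as a weighted sum of two binary cross-entropy terms, one per task head. The paper selects these weights adaptively during training; the fixed weights below are purely illustrative.

```python
# Two-head multi-task loss sketch: region segmentation plus boundary
# detection, combined with fixed (illustrative) weights.
import torch
import torch.nn.functional as F


def multitask_loss(region_logits, boundary_logits, region_gt, boundary_gt,
                   w_region: float = 1.0, w_boundary: float = 1.0):
    region_loss = F.binary_cross_entropy_with_logits(region_logits, region_gt)
    boundary_loss = F.binary_cross_entropy_with_logits(boundary_logits,
                                                       boundary_gt)
    return w_region * region_loss + w_boundary * boundary_loss
```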