CellMixer: Annotation-free Semantic Cell Segmentation of Heterogeneous
Cell Populations
- URL: http://arxiv.org/abs/2312.00671v1
- Date: Fri, 1 Dec 2023 15:50:20 GMT
- Title: CellMixer: Annotation-free Semantic Cell Segmentation of Heterogeneous
Cell Populations
- Authors: Mehdi Naouar, Gabriel Kalweit, Anusha Klett, Yannick Vogt, Paula
Silvestrini, Diana Laura Infante Ramirez, Roland Mertelsmann, Joschka
Boedecker, Maria Kalweit
- Abstract summary: We present CellMixer, an innovative annotation-free approach for the semantic segmentation of heterogeneous cell populations.
Our results show that CellMixer can achieve competitive segmentation performance across multiple cell types and imaging modalities.
- Score: 9.335273591976648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, several unsupervised cell segmentation methods have been
presented, seeking to remove the need for laborious pixel-level annotations
when training a cell segmentation model. Most, if not all, of these methods
handle the instance segmentation task by focusing on the detection of different
cell instances while ignoring their type. While such models prove adequate for
certain tasks, like cell counting, other applications require the
identification of each cell's type. In this paper, we present CellMixer, an
innovative annotation-free approach for the semantic segmentation of
heterogeneous cell populations. Our augmentation-based method enables the
training of a segmentation model from image-level labels of homogeneous cell
populations. Our results show that CellMixer can achieve competitive
segmentation performance across multiple cell types and imaging modalities,
demonstrating the method's scalability and potential for broader applications
in medical imaging, cellular biology, and diagnostics.
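The abstract describes an augmentation-based method that trains a semantic segmentation model from image-level labels of homogeneous cell populations. As a rough illustration of how such mixing could work in principle (the paper's actual pipeline is not reproduced here; the function name, the intensity-threshold foreground proxy, and all parameters are assumptions), a minimal sketch:

```python
import numpy as np

def mix_populations(images, labels, threshold=0.1, rng=None):
    """Composite single-population images into one synthetic mixed image.

    images: list of (H, W) grayscale arrays, each showing one homogeneous
    cell population; labels: matching list of integer class ids.
    Returns the mixed image and a per-pixel semantic mask (0 = background).
    NOTE: illustrative sketch only, not the method from the paper.
    """
    rng = rng or np.random.default_rng(0)
    h, w = images[0].shape
    mixed = np.zeros((h, w), dtype=float)
    mask = np.zeros((h, w), dtype=int)
    # Paste populations in random order; foreground is approximated by a
    # simple intensity threshold (a stand-in for a real foreground estimate).
    for idx in rng.permutation(len(images)):
        fg = images[idx] > threshold
        mixed[fg] = images[idx][fg]
        mask[fg] = labels[idx]
    return mixed, mask
```

Because the class of every pasted population is known from its image-level label, the composite image comes with a free semantic mask, which is the intuition behind training without pixel-level annotations.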
Related papers
- Interpretable Embeddings for Segmentation-Free Single-Cell Analysis in Multiplex Imaging [1.8687965482996822]
Multiplex Imaging (MI) enables the simultaneous visualization of multiple biological markers in separate imaging channels at subcellular resolution.
We propose a segmentation-free deep learning approach that leverages grouped convolutions to learn interpretable embedded features from each imaging channel.
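The summary above mentions grouped convolutions that learn features from each imaging channel separately. A minimal sketch of the channel-wise grouping idea (groups equal to channels, valid padding; this is a generic illustration, not the paper's architecture):

```python
import numpy as np

def grouped_conv2d(x, kernels):
    """Minimal grouped convolution with groups == channels (valid padding).

    x: (C, H, W) input; kernels: (C, kH, kW), one filter per channel.
    Each channel is convolved only with its own filter, so features
    learned for one imaging channel never mix with another's -- the
    property that keeps per-marker embeddings interpretable.
    """
    c, h, w = x.shape
    _, kh, kw = kernels.shape
    out = np.zeros((c, h - kh + 1, w - kw + 1))
    for ch in range(c):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[ch, i, j] = np.sum(x[ch, i:i + kh, j:j + kw] * kernels[ch])
    return out
```

In a deep-learning framework the same effect is obtained by setting the convolution's `groups` parameter to the channel count.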
arXiv Detail & Related papers (2024-11-02T11:21:33Z)
- UniCell: Universal Cell Nucleus Classification via Prompt Learning [76.11864242047074]
We propose a universal cell nucleus classification framework (UniCell).
It employs a novel prompt learning mechanism to uniformly predict the corresponding categories of pathological images from different dataset domains.
In particular, our framework adopts an end-to-end architecture for nuclei detection and classification, and utilizes flexible prediction heads for adapting various datasets.
arXiv Detail & Related papers (2024-02-20T11:50:27Z)
- Single-Cell Deep Clustering Method Assisted by Exogenous Gene Information: A Novel Approach to Identifying Cell Types [50.55583697209676]
We develop an attention-enhanced graph autoencoder, which is designed to efficiently capture the topological features between cells.
During the clustering process, we integrated both sets of information and reconstructed the features of both cells and genes to generate a discriminative representation.
This research offers enhanced insights into the characteristics and distribution of cells, thereby laying the groundwork for early diagnosis and treatment of diseases.
arXiv Detail & Related papers (2023-11-28T09:14:55Z)
- Single-cell Multi-view Clustering via Community Detection with Unknown Number of Clusters [64.31109141089598]
We introduce scUNC, an innovative multi-view clustering approach tailored for single-cell data.
scUNC seamlessly integrates information from different views without the need for a predefined number of clusters.
We conducted a comprehensive evaluation of scUNC using three distinct single-cell datasets.
arXiv Detail & Related papers (2023-11-28T08:34:58Z)
- Mixed Models with Multiple Instance Learning [51.440557223100164]
We introduce MixMIL, a framework integrating Generalized Linear Mixed Models (GLMM) and Multiple Instance Learning (MIL).
Our empirical results reveal that MixMIL outperforms existing MIL models in single-cell datasets.
arXiv Detail & Related papers (2023-11-04T16:42:42Z)
- Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images [66.79688768141814]
We develop an automatic cell classification pipeline to label microscopy images.
We then train a classification model based on the category labels.
We deploy two types of segmentation models to segment cells with roundish and irregular shapes.
arXiv Detail & Related papers (2023-10-22T08:11:08Z)
- Advanced Multi-Microscopic Views Cell Semi-supervised Segmentation [0.0]
Deep learning (DL) shows powerful potential in cell segmentation tasks, but suffers from poor generalization.
In this paper, we introduce a novel semi-supervised cell segmentation method called Multi-Microscopic-view Cell semi-supervised segmentation (MMCS).
MMCS can effectively train cell segmentation models using fewer labeled multi-posture cell images captured with different microscopes.
It achieves an F1-score of 0.8239 and the running time for all cases is within the time tolerance.
arXiv Detail & Related papers (2023-03-21T08:08:13Z)
- CCRL: Contrastive Cell Representation Learning [0.0]
We propose the Contrastive Cell Representation Learning (CCRL) model for cell identification in H&E slides.
We show that this model can outperform all currently available cell clustering models by a large margin across two datasets from different tissue types.
arXiv Detail & Related papers (2022-08-12T18:12:03Z)
- Split and Expand: An inference-time improvement for Weakly Supervised Cell Instance Segmentation [71.50526869670716]
We propose a two-step post-processing procedure, Split and Expand, to improve the conversion of segmentation maps to instances.
In the Split step, we split clumps of cells from the segmentation map into individual cell instances with the guidance of cell-center predictions.
In the Expand step, we find missing small cells using the cell-center predictions.
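The two steps above can be illustrated with a toy sketch: assign each foreground pixel to its nearest predicted cell center (Split), then seed a small instance around any center that fell on background (Expand). Everything here is a simplified stand-in for the paper's actual procedure; the nearest-center assignment and the disc radius are assumptions.

```python
import numpy as np

def split_and_expand(seg, centers, radius=1):
    """Toy sketch of a Split-and-Expand style post-processing step.

    seg: (H, W) binary segmentation map; centers: list of (row, col)
    predicted cell centers. Returns an instance map: each foreground
    pixel is assigned to its nearest center (Split), and centers on
    background seed a small disc-shaped instance (Expand).
    NOTE: illustrative only, not the paper's exact algorithm.
    """
    h, w = seg.shape
    inst = np.zeros((h, w), dtype=int)
    cs = np.asarray(centers, dtype=float)
    # Split: label every foreground pixel with its nearest center's id.
    for r, c in zip(*np.nonzero(seg)):
        d = np.hypot(cs[:, 0] - r, cs[:, 1] - c)
        inst[r, c] = int(np.argmin(d)) + 1
    # Expand: recover small cells the map missed by stamping a disc
    # around any center that sits on background.
    for i, (r, c) in enumerate(centers):
        if not seg[int(r), int(c)]:
            rr, cc = np.ogrid[:h, :w]
            disc = (rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2
            inst[disc] = i + 1
    return inst
```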
arXiv Detail & Related papers (2020-07-21T14:05:09Z)
- Learning to segment clustered amoeboid cells from brightfield microscopy via multi-task learning with adaptive weight selection [6.836162272841265]
We introduce a novel supervised technique for cell segmentation in a multi-task learning paradigm.
A multi-task loss combining region and cell-boundary detection is employed to improve the prediction efficiency of the network.
We observe an overall Dice score of 0.93 on the validation set, an improvement of over 15.9% on a recent unsupervised method, and outperform the popular supervised U-Net algorithm by at least 5.8% on average.
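The summary mentions a multi-task loss with adaptive weight selection but does not specify the scheme. One common approach is uncertainty-based weighting, where each task loss is scaled by a learned log-variance; the sketch below shows that generic formulation (it is an assumption, not the paper's exact loss, and the variable names are illustrative):

```python
import numpy as np

def combined_multitask_loss(region_loss, boundary_loss, log_vars):
    """Generic uncertainty-weighted multi-task loss (Kendall-style).

    Each task loss is scaled by exp(-log_var) and regularized by log_var,
    so the effective task weights adapt as the log-variances are learned.
    NOTE: a common adaptive-weighting scheme, not the paper's formulation.
    """
    losses = np.array([region_loss, boundary_loss], dtype=float)
    lv = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-lv) * losses + lv))
```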
arXiv Detail & Related papers (2020-05-19T11:31:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.