Benchmarking the Cell Image Segmentation Models Robustness under the Microscope Optical Aberrations
- URL: http://arxiv.org/abs/2404.08549v1
- Date: Fri, 12 Apr 2024 15:45:26 GMT
- Title: Benchmarking the Cell Image Segmentation Models Robustness under the Microscope Optical Aberrations
- Authors: Boyuan Peng, Jiaju Chen, Qihui Ye, Minjiang Chen, Peiwu Qin, Chenggang Yan, Dongmei Yu, Zhenglin Chen
- Abstract summary: This study comprehensively evaluates the performance of cell instance segmentation models under simulated aberration conditions.
Various segmentation models, such as Mask R-CNN with different network heads, were trained and tested under aberrated conditions.
Results indicate that FPN combined with SwinS demonstrates superior robustness in handling simple cell images affected by minor aberrations.
- Score: 15.920475243253765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cell segmentation is essential in biomedical research for analyzing cellular morphology and behavior. Deep learning methods, particularly convolutional neural networks (CNNs), have revolutionized cell segmentation by extracting intricate features from images. However, the robustness of these methods under microscope optical aberrations remains a critical challenge. This study comprehensively evaluates the performance of cell instance segmentation models under simulated aberration conditions using the DynamicNuclearNet (DNN) and LIVECell datasets. Aberrations, including Astigmatism, Coma, Spherical, and Trefoil, were simulated using Zernike polynomial equations. Various segmentation models, such as Mask R-CNN with different network heads (FPN, C3) and backbones (ResNet, VGG19, SwinS), were trained and tested under aberrated conditions. Results indicate that FPN combined with SwinS demonstrates superior robustness in handling simple cell images affected by minor aberrations. Conversely, Cellpose2.0 proves effective for complex cell images under similar conditions. Our findings provide insights into selecting appropriate segmentation models based on cell morphology and aberration severity, enhancing the reliability of cell segmentation in biomedical applications. Further research is warranted to validate these methods with diverse aberration types and emerging segmentation models. Overall, this research aims to guide researchers in effectively utilizing cell segmentation models in the presence of minor optical aberrations.
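The Zernike-based aberration simulation described in the abstract can be illustrated with a short sketch. The snippet below is an assumption-laden illustration rather than the authors' code: it builds astigmatism, coma, spherical, and trefoil phase terms from standard Zernike polynomials, forms an aberrated point spread function (PSF) via a pupil-function model, and convolves it with a cell image. The pupil size, mode normalizations, and coefficient values are all illustrative.

```python
# Minimal sketch (not the paper's code): simulate an aberrated PSF from
# Zernike polynomials and apply it to a cell image.
import numpy as np
from scipy.signal import fftconvolve

def zernike_phase(rho, theta, coeffs):
    """Sum a few low-order Zernike modes (astigmatism, coma, spherical, trefoil).

    coeffs: dict mapping mode name -> coefficient (radians of wavefront error).
    """
    modes = {
        "astigmatism": np.sqrt(6) * rho**2 * np.cos(2 * theta),              # Z(2, 2)
        "coma":        np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),  # Z(3, 1)
        "spherical":   np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),           # Z(4, 0)
        "trefoil":     np.sqrt(8) * rho**3 * np.cos(3 * theta),              # Z(3, 3)
    }
    return sum(c * modes[name] for name, c in coeffs.items())

def aberrated_psf(size, coeffs):
    """Incoherent PSF as |FFT(pupil * exp(i*phase))|^2 over a unit circular pupil."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = (rho <= 1.0).astype(float)
    field = pupil * np.exp(1j * zernike_phase(rho, theta, coeffs))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))) ** 2
    return psf / psf.sum()

def apply_aberration(image, coeffs, size=64):
    """Convolve a grayscale cell image with the simulated aberrated PSF."""
    return fftconvolve(image, aberrated_psf(size, coeffs), mode="same")

# Example: mild coma plus astigmatism (coefficient values are illustrative).
img = np.random.rand(256, 256)  # stand-in for a cell image
blurred = apply_aberration(img, {"coma": 0.5, "astigmatism": 0.3})
```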
Related papers
- PixCell: A generative foundation model for digital histopathology images [49.00921097924924]
We introduce PixCell, the first diffusion-based generative foundation model for histopathology. We train PixCell on PanCan-30M, a vast, diverse dataset derived from 69,184 H&E-stained whole slide images covering various cancer types.
arXiv Detail & Related papers (2025-06-05T15:14:32Z) - DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images [105.46086313858062]
We introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks.
We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
arXiv Detail & Related papers (2024-10-04T00:38:29Z) - Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images [66.79688768141814]
We develop an automatic cell classification pipeline to label microscopy images.
We then train a classification model based on the category labels.
We deploy two types of segmentation models to segment cells with roundish and irregular shapes.
arXiv Detail & Related papers (2023-10-22T08:11:08Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Fluorescent Neuronal Cells v2: Multi-Task, Multi-Format Annotations for Deep Learning in Microscopy [44.62475518267084]
This dataset encompasses three image collections in which rodent neuronal cells' nuclei and cytoplasm are stained with diverse markers.
Alongside the images, we provide ground-truth annotations for several learning tasks, including semantic segmentation, object detection, and counting.
arXiv Detail & Related papers (2023-07-26T15:14:10Z) - Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model, for which we employ a type of artificial neural network: deep learning autoencoders.
arXiv Detail & Related papers (2023-04-19T13:45:28Z) - Advanced Multi-Microscopic Views Cell Semi-supervised Segmentation [0.0]
Deep learning (DL) shows powerful potential in cell segmentation tasks, but suffers from poor generalization.
In this paper, we introduce a novel semi-supervised cell segmentation method called Multi-Microscopic-view Cell semi-supervised (MMCS).
MMCS can effectively train cell segmentation models using fewer labeled multi-posture cell images acquired with different microscopes.
It achieves an F1-score of 0.8239, and the running time for all cases is within the time tolerance.
arXiv Detail & Related papers (2023-03-21T08:08:13Z) - Machine learning based lens-free imaging technique for field-portable cytometry [0.0]
Our proposed method shows an accuracy of over 98% along with a signal enhancement of more than 5 dB for most cell types.
The model adapts to new sample types within a few learning iterations and can successfully classify newly introduced samples.
arXiv Detail & Related papers (2022-03-02T07:09:29Z) - From augmented microscopy to the topological transformer: a new approach in cell image analysis for Alzheimer's research [0.0]
Cell image analysis is crucial in Alzheimer's research to detect the presence of A$\beta$ protein, which inhibits cell function.
By comparing performance on multi-class semantic segmentation, we first found that U-Net is the most suitable model for augmented microscopy.
We develop the augmented microscopy method to capture nuclei in a brightfield image, and the topological transformer, built on the U-Net model, to convert an input image into a sequence of topological information.
arXiv Detail & Related papers (2021-08-03T16:59:33Z) - Enforcing Morphological Information in Fully Convolutional Networks to Improve Cell Instance Segmentation in Fluorescence Microscopy Images [1.408123603417833]
We propose a novel cell instance segmentation approach based on the well-known U-Net architecture.
To enforce the learning of morphological information per pixel, a deep distance transformer (DDT) acts as the backbone model.
The obtained results suggest a performance boost over traditional U-Net architectures.
arXiv Detail & Related papers (2021-06-10T15:54:38Z) - Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations among the following ones: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, and Soft Root Sign.
arXiv Detail & Related papers (2020-11-24T01:53:39Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z) - Learning to segment clustered amoeboid cells from brightfield microscopy via multi-task learning with adaptive weight selection [6.836162272841265]
We introduce a novel supervised technique for cell segmentation in a multi-task learning paradigm.
A multi-task loss, combining region and cell boundary detection, is employed to improve the prediction efficiency of the network (see the loss sketch after this list).
We observe an overall Dice score of 0.93 on the validation set, which is an improvement of over 15.9% on a recent unsupervised method and outperforms the popular supervised U-Net algorithm by at least 5.8% on average.
arXiv Detail & Related papers (2020-05-19T11:31:53Z)
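For the region-plus-boundary multi-task loss referenced in the last entry above, the PyTorch-style sketch below shows one way such a loss with adaptive (learnable) weights could be assembled. The Dice/BCE pairing and the uncertainty-style weighting are assumptions for illustration, not the cited paper's exact formulation.

```python
# Minimal sketch (assumption, not the cited paper's code): a multi-task loss
# combining region (Dice) and cell-boundary (BCE) objectives with learnable,
# uncertainty-style adaptive weights standing in for "adaptive weight selection".
import torch
import torch.nn as nn

def dice_loss(pred, target, eps=1e-6):
    # pred, target: (N, 1, H, W); pred is already sigmoid probabilities
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

class RegionBoundaryLoss(nn.Module):
    """Weights the region and boundary terms by learned log-variances."""
    def __init__(self):
        super().__init__()
        self.log_var_region = nn.Parameter(torch.zeros(()))
        self.log_var_boundary = nn.Parameter(torch.zeros(()))
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, region_logits, boundary_logits, region_gt, boundary_gt):
        l_region = dice_loss(torch.sigmoid(region_logits), region_gt)
        l_boundary = self.bce(boundary_logits, boundary_gt)
        # Tasks with higher learned "noise" are down-weighted; the raw
        # log-variance terms keep the weights from collapsing to zero.
        return (torch.exp(-self.log_var_region) * l_region + self.log_var_region
                + torch.exp(-self.log_var_boundary) * l_boundary + self.log_var_boundary)
```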