High-Throughput Low-Cost Segmentation of Brightfield Microscopy Live Cell Images
- URL: http://arxiv.org/abs/2508.14106v2
- Date: Sat, 23 Aug 2025 10:36:18 GMT
- Title: High-Throughput Low-Cost Segmentation of Brightfield Microscopy Live Cell Images
- Authors: Surajit Das, Gourav Roy, Pavel Zun
- Abstract summary: This study focuses on segmenting unstained live cells imaged with bright-field microscopy. We developed a low-cost CNN-based pipeline incorporating comparative analysis of frozen encoders. The model was validated on a public dataset featuring diverse live cell variants.
- Score: 3.175346985850522
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Live cell culture is crucial in biomedical studies for analyzing cell properties and dynamics in vitro. This study focuses on segmenting unstained live cells imaged with bright-field microscopy. While many segmentation approaches exist for microscopic images, none consistently address the challenges of bright-field live-cell imaging with high throughput, where temporal phenotype changes, low contrast, noise, and motion-induced blur from cellular movement remain major obstacles. We developed a low-cost CNN-based pipeline incorporating comparative analysis of frozen encoders within a unified U-Net architecture enhanced with attention mechanisms, instance-aware systems, adaptive loss functions, hard instance retraining, dynamic learning rates, progressive mechanisms to mitigate overfitting, and an ensemble technique. The model was validated on a public dataset featuring diverse live cell variants, showing consistent competitiveness with state-of-the-art methods, achieving 93% test accuracy and an average F1-score of 89% (std. 0.07) on low-contrast, noisy, and blurry images. Notably, the model was trained primarily on bright-field images with limited exposure to phase-contrast microscopy (<20%), yet it generalized effectively to the phase-contrast LIVECell dataset, demonstrating modality robustness and strong performance. This highlights its potential for real-world laboratory deployment across imaging conditions. The model requires minimal compute power and is adaptable using basic deep learning setups such as Google Colab, making it practical for training on other cell variants. Our pipeline outperforms existing methods in robustness and precision for bright-field microscopy segmentation. The code and dataset are available for reproducibility.
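The central architectural idea in the abstract — a frozen pretrained encoder inside a trainable U-Net-style decoder — can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual architecture: the `TinyUNet` module, its layer sizes, and the single skip-free decoder are assumptions made only to show the freezing mechanism.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Illustrative encoder-decoder; the paper's model is a full U-Net
    with attention and a pretrained backbone as encoder."""
    def __init__(self):
        super().__init__()
        # Encoder (would be a pretrained, comparatively analyzed backbone)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder stays trainable
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # 1-channel mask logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyUNet()
# Freeze the encoder: its weights are excluded from gradient updates
for p in model.encoder.parameters():
    p.requires_grad = False

# The optimizer sees only the trainable (decoder) parameters
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-3)

out = model(torch.randn(1, 1, 64, 64))  # logits, same spatial size as input
```

Freezing the encoder is what keeps the pipeline "low-cost": only the decoder's gradients are computed and stored, which is why a setup like Google Colab suffices for training.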
Related papers
- Diffusion-Based Synthetic Brightfield Microscopy Images for Enhanced Single Cell Detection [0.0]
We investigate the use of unconditional models to generate synthetic brightfield microscopy images. A U-Net based diffusion model was trained and used to create datasets with varying ratios of synthetic and real images. Experiments with YOLOv8, YOLOv9 and RT-DETR reveal that training with synthetic data can achieve improved detection accuracies.
arXiv Detail & Related papers (2025-11-25T08:57:23Z)
- Adaptive Attention Residual U-Net for curvilinear structure segmentation in fluorescence microscopy and biomedical images [0.0]
We create datasets consisting of hundreds of synthetic images of fluorescently labelled microtubules within cells. These datasets are precisely annotated and closely mimic real microscopy images, including realistic noise. The second dataset presents an additional challenge, simulating varying fluorescence intensities along filaments that complicate segmentation. We develop a novel advanced architecture: the Adaptive Squeeze-and-Excitation Residual U-Net.
arXiv Detail & Related papers (2025-07-10T14:26:50Z)
- Self-supervised Representation Learning with Local Aggregation for Image-based Profiling [84.52554180480037]
Image-based cell profiling aims to create informative representations of cell images. Recent developments in non-contrastive Self-Supervised Learning have inspired this paper. We introduce specialized data augmentation and representation post-processing methods tailored to cell images.
arXiv Detail & Related papers (2025-06-17T07:25:57Z)
- PixCell: A generative foundation model for digital histopathology images [49.00921097924924]
We introduce PixCell, the first diffusion-based generative foundation model for histopathology. We train PixCell on PanCan-30M, a vast, diverse dataset derived from 69,184 H&E-stained whole slide images covering various cancer types.
arXiv Detail & Related papers (2025-06-05T15:14:32Z)
- CellCLIP -- Learning Perturbation Effects in Cell Painting via Text-Guided Contrastive Learning [23.521800791670938]
We introduce CellCLIP, a cross-modal contrastive learning framework for HCS data. Our framework outperforms current open-source models, demonstrating the best performance in both cross-modal retrieval and biologically meaningful downstream tasks.
arXiv Detail & Related papers (2025-05-16T23:07:51Z)
- Practical Guidelines for Cell Segmentation Models Under Optical Aberrations in Microscopy [14.042884268397058]
This study evaluates cell image segmentation models under optical aberrations from fluorescence and bright field microscopy.
We train and test several segmentation models, including the Otsu threshold method and Mask R-CNN with different network heads.
In contrast, Cellpose 2.0 proves effective for complex cell images under similar conditions.
arXiv Detail & Related papers (2024-04-12T15:45:26Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Learning multi-scale functional representations of proteins from single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess quality of protein representations across different scales of biological function.
arXiv Detail & Related papers (2022-05-24T00:00:07Z)
- CellCycleGAN: Spatiotemporal Microscopy Image Synthesis of Cell Populations using Statistical Shape Models and Conditional GANs [0.07117593004982078]
We develop a new method for generation of synthetic 2D+t image data of fluorescently labeled cellular nuclei.
We show the effect of the GAN conditioning and create a set of synthetic images that can be readily used for training cell segmentation and tracking approaches.
arXiv Detail & Related papers (2020-10-22T20:02:41Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a neural deep network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Learning to segment clustered amoeboid cells from brightfield microscopy via multi-task learning with adaptive weight selection [6.836162272841265]
We introduce a novel supervised technique for cell segmentation in a multi-task learning paradigm.
A combination of a multi-task loss, based on the region and cell boundary detection, is employed for an improved prediction efficiency of the network.
We observe an overall Dice score of 0.93 on the validation set, which is an improvement of over 15.9% on a recent unsupervised method, and outperforms the popular supervised U-Net algorithm by at least 5.8% on average.
arXiv Detail & Related papers (2020-05-19T11:31:53Z)
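The multi-task region-plus-boundary loss described in the last entry above can be sketched as a weighted sum of a soft Dice term and a boundary cross-entropy term. This is a minimal NumPy sketch under stated assumptions: the fixed weights, the 4-neighbourhood boundary extraction, and all function names here are illustrative, not that paper's adaptive weight selection scheme.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps (region term)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-6):
    """Binary cross-entropy on probability maps (boundary term)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def boundary_map(mask):
    """Crude boundary extraction: pixels whose 4-neighbourhood differs."""
    m = mask.astype(bool)
    b = np.zeros_like(m)
    b[1:, :] |= m[1:, :] != m[:-1, :]
    b[:, 1:] |= m[:, 1:] != m[:, :-1]
    return b.astype(float)

def multitask_loss(pred_region, pred_boundary, gt_mask,
                   w_region=0.7, w_boundary=0.3):
    """Weighted sum of region and boundary losses. The weights are fixed
    here; the cited paper selects them adaptively during training."""
    gt_boundary = boundary_map(gt_mask)
    return (w_region * dice_loss(pred_region, gt_mask.astype(float))
            + w_boundary * bce_loss(pred_boundary, gt_boundary))

# Example: a perfect prediction on a small square mask gives near-zero loss
gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1
loss = multitask_loss(gt.astype(float), boundary_map(gt), gt)
```

Combining a region overlap term with an explicit boundary term penalizes errors at cell contours more heavily than a Dice-only loss, which is what helps separate touching, clustered cells.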
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.