Sequence models for continuous cell cycle stage prediction from brightfield images
- URL: http://arxiv.org/abs/2502.02182v1
- Date: Tue, 04 Feb 2025 09:57:17 GMT
- Title: Sequence models for continuous cell cycle stage prediction from brightfield images
- Authors: Louis-Alexandre Leger, Maxine Leonardi, Andrea Salati, Felix Naef, Martin Weigert
- Abstract summary: We evaluate deep learning methods for predicting continuous Fucci signals using non-fluorescence brightfield imaging.
We show that both causal and transformer-based models significantly outperform single- and fixed frame approaches.
- Score: 0.0
- Abstract: Understanding cell cycle dynamics is crucial for studying biological processes such as growth, development and disease progression. While fluorescent protein reporters like the Fucci system allow live monitoring of cell cycle phases, they require genetic engineering and occupy additional fluorescence channels, limiting broader applicability in complex experiments. In this study, we conduct a comprehensive evaluation of deep learning methods for predicting continuous Fucci signals using non-fluorescence brightfield imaging, a widely available label-free modality. To that end, we generated a large dataset of 1.3 M images of dividing RPE1 cells with full cell cycle trajectories to quantitatively compare the predictive performance of distinct model categories including single time-frame models, causal state space models and bidirectional transformer models. We show that both causal and transformer-based models significantly outperform single- and fixed frame approaches, enabling the prediction of visually imperceptible transitions like G1/S within 1h resolution. Our findings underscore the importance of sequence models for accurate predictions of cell cycle dynamics and highlight their potential for label-free imaging.
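The core comparison in the abstract (per-frame prediction vs causal sequence prediction of a continuous signal) can be illustrated with a minimal sketch. This is not the paper's method: a simple linear model over a window of past-frame features stands in for the state space and transformer architectures, and it assumes per-frame feature vectors have already been extracted from the brightfield images; all names below are hypothetical.

```python
import numpy as np

def single_frame_predict(features, w):
    """Predict a continuous signal (e.g. a Fucci intensity) per frame.

    features: (T, D) per-frame feature vectors; w: (D,) weights.
    Each frame is scored independently, with no temporal context.
    """
    return features @ w

def causal_predict(features, w, k=4):
    """Causal sequence prediction: frame t only sees frames t-k+1..t.

    Stacking the past k feature vectors lets the model use temporal
    cues (gradual drifts around transitions like G1/S) that a single
    frame does not show.
    """
    T, D = features.shape
    # pad the start so the earliest frames still get a full k-frame window
    padded = np.vstack([np.repeat(features[:1], k - 1, axis=0), features])
    windows = np.stack([padded[t:t + k].ravel() for t in range(T)])  # (T, k*D)
    return windows @ w  # w has shape (k*D,)

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))            # 100 frames, 8 features each
y_single = single_frame_predict(feats, rng.normal(size=8))
y_causal = causal_predict(feats, rng.normal(size=4 * 8), k=4)
print(y_single.shape, y_causal.shape)        # (100,) (100,)
```

A bidirectional model in this sketch would simply widen each window to include future frames as well, which is what distinguishes the transformer variant from the causal one.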
Related papers
- Interpretable deep learning illuminates multiple structures fluorescence imaging: a path toward trustworthy artificial intelligence in microscopy [10.395551533758358]
We present the Adaptive Explainable Multi-Structure Network (AEMS-Net), a deep-learning framework that enables simultaneous prediction of two subcellular structures from a single image.
We demonstrate that AEMS-Net allows real-time recording of interactions between mitochondria and microtubules, requiring only half the conventional sequential-channel imaging procedures.
arXiv Detail & Related papers (2025-01-09T07:36:28Z) - Generating Multi-Modal and Multi-Attribute Single-Cell Counts with CFGen [76.02070962797794]
We present Cell Flow for Generation, a flow-based conditional generative model for multi-modal single-cell counts.
Our results suggest improved recovery of crucial biological data characteristics while accounting for novel generative tasks.
arXiv Detail & Related papers (2024-07-16T14:05:03Z) - Practical Guidelines for Cell Segmentation Models Under Optical Aberrations in Microscopy [14.042884268397058]
This study evaluates cell image segmentation models under optical aberrations from fluorescence and bright field microscopy.
We train and test several segmentation models, including the Otsu threshold method and Mask R-CNN with different network heads; in contrast to these, Cellpose 2.0 proves effective for complex cell images under similar conditions.
arXiv Detail & Related papers (2024-04-12T15:45:26Z) - Synthetic location trajectory generation using categorical diffusion models [50.809683239937584]
Diffusion models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - PhagoStat a scalable and interpretable end to end framework for efficient quantification of cell phagocytosis in neurodegenerative disease studies [0.0]
We introduce an end-to-end, scalable, and versatile real-time framework for quantifying and analyzing phagocytic activity.
Our proposed pipeline is able to process large data-sets and includes a data quality verification module.
We apply our pipeline to analyze microglial cell phagocytosis in FTD and obtain statistically reliable results.
arXiv Detail & Related papers (2023-04-26T18:10:35Z) - Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction [51.80191416661064]
We propose a novel vision transformer with latent variables following an informative energy-based prior for salient object detection.
Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation.
With the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model confidence in predicting saliency from the image.
arXiv Detail & Related papers (2021-12-27T06:04:33Z) - Developmental Stage Classification of Embryos Using Two-Stream Neural Network with Linear-Chain Conditional Random Field [74.53314729742966]
We propose a two-stream model for developmental stage classification.
Unlike previous methods, our two-stream model accepts both temporal and image information.
We demonstrate our algorithm on two time-lapse embryo video datasets.
arXiv Detail & Related papers (2021-07-13T19:56:01Z) - Pixel precise unsupervised detection of viral particle proliferation in cellular imaging data [0.0]
We use computer generated images from a study of experimentally obtained cell imaging data representing viral particle proliferation in host cell monolayers.
In this study viral particle increase in time is simulated by a one-by-one increase, across images, in black or gray single pixels representing dead or partially infected cells, and hypothetical remission by a one-by-one increase in white pixels coding for living cells.
Unsupervised classification by SOM-QE of 160 model images, each with more than three million pixels, is shown to provide a statistically reliable, pixel precise, and fast classification model.
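The simulated image series described in this entry (one additional black or gray pixel per successive image) can be sketched at toy scale. This is an illustrative reconstruction, not the study's actual generator: the image size here is far below the paper's three-million-pixel images, and the function name and pixel encoding (255 white = living, 128 gray = partially infected, 0 black = dead) are assumptions.

```python
import numpy as np

def make_model_images(n_images=160, size=64, seed=0):
    """Toy series of model images in the spirit described above.

    Image 0 is all white (living cells); each successive image flips
    one more randomly chosen pixel to black (dead) or gray (partially
    infected), giving a one-by-one increase across the series.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(size * size)   # distinct pixel flip order
    images = []
    img = np.full((size, size), 255, dtype=np.uint8)
    for i in range(n_images):
        if i > 0:
            img.ravel()[order[i - 1]] = rng.choice([0, 128])
        images.append(img.copy())
    return np.stack(images)

imgs = make_model_images()
print(imgs.shape)  # (160, 64, 64)
```

Because the flip order is a permutation, image *i* contains exactly *i* non-white pixels, which is what makes the series suitable for testing pixel-precise classification.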
arXiv Detail & Related papers (2020-11-10T16:06:03Z) - CellCycleGAN: Spatiotemporal Microscopy Image Synthesis of Cell Populations using Statistical Shape Models and Conditional GANs [0.07117593004982078]
We develop a new method for generation of synthetic 2D+t image data of fluorescently labeled cellular nuclei.
We show the effect of the GAN conditioning and create a set of synthetic images that can be readily used for training cell segmentation and tracking approaches.
arXiv Detail & Related papers (2020-10-22T20:02:41Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture that allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.