MONET -- Virtual Cell Painting of Brightfield Images and Time Lapses Using Reference Consistent Diffusion
- URL: http://arxiv.org/abs/2512.11928v1
- Date: Fri, 12 Dec 2025 01:01:34 GMT
- Authors: Alexander Peysakhovich, William Berman, Joseph Rufo, Felix Wong, Maxwell Z. Wilson
- Abstract summary: Cell painting is a popular technique for creating human-interpretable images of cell morphology. There are two major issues with cell paint: it is labor-intensive and it requires chemical fixation. We train a diffusion model on a large dataset to predict cell paint channels from brightfield images.
- Score: 37.62160903348546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cell painting is a popular technique for creating human-interpretable, high-contrast images of cell morphology. There are two major issues with cell paint: (1) it is labor-intensive and (2) it requires chemical fixation, making the study of cell dynamics impossible. We train a diffusion model (Morphological Observation Neural Enhancement Tool, or MONET) on a large dataset to predict cell paint channels from brightfield images. We show that model quality improves with scale. The model uses a consistency architecture to generate time-lapse videos, despite the impossibility of obtaining cell paint video training data. In addition, we show that this architecture enables a form of in-context learning, allowing the model to partially transfer to out-of-distribution cell lines and imaging protocols. Virtual cell painting is not intended to replace physical cell painting completely, but to act as a complementary tool enabling novel workflows in biological research.
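The core idea described in the abstract, sampling fluorescent cell-paint channels from a conditional diffusion model given a brightfield input, can be sketched as a standard DDPM reverse-diffusion loop. This is a toy illustration, not the MONET architecture: the denoiser `eps_theta` is a hypothetical stand-in (a real model would be a trained network that takes the brightfield image as conditioning), and the noise schedule values are arbitrary.

```python
import numpy as np

def make_schedule(T=50, beta_min=1e-4, beta_max=0.02):
    # Linear beta schedule and the cumulative alpha products used by DDPM.
    betas = np.linspace(beta_min, beta_max, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def eps_theta(x_t, brightfield, t):
    # Stand-in denoiser: a trained network would predict the noise in x_t
    # conditioned on the brightfield image. Here we simply nudge the sample
    # toward the brightfield input so the loop is runnable end to end.
    return x_t - brightfield

def sample_channel(brightfield, T=50, seed=0):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise,
    conditioning every step on the same brightfield image."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(brightfield.shape)
    for t in reversed(range(T)):
        eps = eps_theta(x, brightfield, t)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

bf = np.ones((8, 8)) * 0.5        # toy 8x8 "brightfield" image
channel = sample_channel(bf)       # toy virtual "cell paint" channel
print(channel.shape)
```

For time-lapse generation the paper conditions consecutive frames on a reference for consistency; the sketch above covers only the single-frame case.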
Related papers
- MorphGen: Controllable and Morphologically Plausible Generative Cell-Imaging [31.990445585569688]
MorphGen is a state-of-the-art diffusion-based generative model for fluorescent microscopy. It generates the complete set of fluorescent channels jointly, preserving per-organelle structures. MorphGen attains an FID score over 35% lower than the prior state-of-the-art MorphoDiff.
arXiv Detail & Related papers (2025-10-01T13:34:29Z)
- Self-supervised Representation Learning with Local Aggregation for Image-based Profiling [84.52554180480037]
Image-based cell profiling aims to create informative representations of cell images. Recent developments in non-contrastive Self-Supervised Learning have inspired this paper. We introduce specialized data augmentation and representation post-processing methods tailored to cell images.
arXiv Detail & Related papers (2025-06-17T07:25:57Z)
- PixCell: A generative foundation model for digital histopathology images [49.00921097924924]
We introduce PixCell, the first diffusion-based generative foundation model for histopathology. We train PixCell on PanCan-30M, a vast, diverse dataset derived from 69,184 H&E-stained whole slide images covering various cancer types.
arXiv Detail & Related papers (2025-06-05T15:14:32Z)
- Adapting Video Diffusion Models for Time-Lapse Microscopy [45.21395064529522]
We present a domain adaptation of video diffusion models to generate time-lapse microscopy videos of cell division in HeLa cells. We fine-tune a pretrained video diffusion model on microscopy-specific sequences, exploring three conditioning strategies. Results demonstrate the potential of domain-specific fine-tuning of generative video models to produce biologically plausible synthetic microscopy data.
arXiv Detail & Related papers (2025-03-24T11:41:21Z)
- Denoising Diffusion Probabilistic Models for Image Inpainting of Cell Distributions in the Human Brain [0.0]
We propose a denoising diffusion probabilistic model (DDPM) trained on light-microscopic scans of cell-body stained sections.
We show that our trained DDPM is able to generate highly realistic image information for this purpose, generating plausible cell statistics and cytoarchitectonic patterns.
arXiv Detail & Related papers (2023-11-28T14:34:04Z)
- BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys [99.7082441544384]
We present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning.
We use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression.
The resulting triples are then used to train a latent diffusion model for counterfactual biomedical image generation.
arXiv Detail & Related papers (2023-10-16T18:59:31Z)
- Self-supervised pseudo-colorizing of masked cells [18.843372840624077]
We introduce a novel self-supervision objective for the analysis of cells in biomedical microscopy images.
We propose training deep learning models to pseudo-colorize masked cells.
Our experiments reveal that approximating semantic segmentation by pseudo-colorization is beneficial for subsequent fine-tuning on cell detection.
arXiv Detail & Related papers (2023-02-12T18:16:51Z)
- CellCycleGAN: Spatiotemporal Microscopy Image Synthesis of Cell Populations using Statistical Shape Models and Conditional GANs [0.07117593004982078]
We develop a new method for generation of synthetic 2D+t image data of fluorescently labeled cellular nuclei.
We show the effect of the GAN conditioning and create a set of synthetic images that can be readily used for training cell segmentation and tracking approaches.
arXiv Detail & Related papers (2020-10-22T20:02:41Z)
- Neural Cellular Automata Manifold [84.08170531451006]
We show that the neural network architecture of the Neural Cellular Automata can be encapsulated in a larger NN.
This allows us to propose a new model that encodes a manifold of NCA, each of them capable of generating a distinct image.
In biological terms, our approach would play the role of the transcription factors, modulating the mapping of genes into specific proteins that drive cellular differentiation.
arXiv Detail & Related papers (2020-06-22T11:41:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information in it) and is not responsible for any consequences of its use.