Diffusion-Based Synthetic Brightfield Microscopy Images for Enhanced Single Cell Detection
- URL: http://arxiv.org/abs/2512.00078v1
- Date: Tue, 25 Nov 2025 08:57:23 GMT
- Title: Diffusion-Based Synthetic Brightfield Microscopy Images for Enhanced Single Cell Detection
- Authors: Mario de Jesus da Graca, Jörg Dahlkemper, Peer Stelldinger
- Abstract summary: We investigate the use of unconditional models to generate synthetic brightfield microscopy images. A U-Net based diffusion model was trained and used to create datasets with varying ratios of synthetic and real images. Experiments with YOLOv8, YOLOv9 and RT-DETR reveal that training with synthetic data can achieve improved detection accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate single cell detection in brightfield microscopy is crucial for biological research, yet data scarcity and annotation bottlenecks limit the progress of deep learning methods. We investigate the use of unconditional models to generate synthetic brightfield microscopy images and evaluate their impact on object detection performance. A U-Net based diffusion model was trained and used to create datasets with varying ratios of synthetic and real images. Experiments with YOLOv8, YOLOv9 and RT-DETR reveal that training with synthetic data can achieve improved detection accuracy at minimal cost. A human expert survey demonstrates the high realism of generated images, with experts unable to distinguish them from real microscopy images (accuracy 50%). Our findings suggest that diffusion-based synthetic data generation is a promising avenue for augmenting real datasets in microscopy image analysis, reducing the reliance on extensive manual annotation and potentially improving the robustness of cell detection models.
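The abstract describes building training sets with varying ratios of synthetic and real images. A minimal sketch of such dataset mixing is shown below; the function and its parameters are hypothetical illustrations, not the paper's published pipeline.

```python
import random

def mix_dataset(real_paths, synthetic_paths, synthetic_ratio, seed=0):
    """Assemble a training list in which `synthetic_ratio` of the items
    come from synthetic images and the rest from real ones.

    Hypothetical helper: the paper does not publish its sampling code.
    The total size is kept fixed at the size of the real set so that
    only the synthetic/real ratio varies between experiments.
    """
    if not 0.0 <= synthetic_ratio <= 1.0:
        raise ValueError("synthetic_ratio must be in [0, 1]")
    rng = random.Random(seed)
    total = len(real_paths)
    n_syn = round(total * synthetic_ratio)
    n_real = total - n_syn
    mixed = rng.sample(real_paths, n_real) + rng.sample(synthetic_paths, n_syn)
    rng.shuffle(mixed)  # avoid ordering bias during training
    return mixed
```

A detector such as YOLOv8 would then be trained on the resulting file list exactly as on a purely real dataset.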
Related papers
- Compressing Biology: Evaluating the Stable Diffusion VAE for Phenotypic Drug Discovery [0.8594140167290097]
We present the first systematic evaluation of Stable Diffusion's variational autoencoder (SDVAE) for reconstructing Cell Painting images. We find that SDVAE reconstructions preserve phenotypic signals with minimal loss, supporting its use in microscopy. Our findings offer practical guidelines for evaluating generative models on microscopy data and support the use of off-the-shelf models in phenotypic drug discovery.
arXiv Detail & Related papers (2025-10-22T16:16:49Z)
- High-Throughput Low-Cost Segmentation of Brightfield Microscopy Live Cell Images [3.175346985850522]
This study focuses on segmenting unstained live cells imaged with bright-field microscopy. We developed a low-cost CNN-based pipeline incorporating comparative analysis of frozen encoders. The model was validated on a public dataset featuring diverse live cell variants.
arXiv Detail & Related papers (2025-08-17T22:05:58Z)
- Improved Sub-Visible Particle Classification in Flow Imaging Microscopy via Generative AI-Based Image Synthesis [1.172405562070645]
Sub-visible particle analysis using flow imaging microscopy combined with deep learning has proven effective in identifying particle types. However, the scarcity of available data and severe imbalance between particle types within datasets remain substantial hurdles. We develop a state-of-the-art diffusion model to address data imbalance by generating high-fidelity images that can augment training datasets.
arXiv Detail & Related papers (2025-08-08T05:15:02Z)
- Disentangled representations of microscopy images [0.9849635250118911]
This work proposes a Disentangled Representation Learning (DRL) methodology to enhance model interpretability for microscopy image classification. We show how a DRL framework, based on transferring a representation learnt from synthetic data, can provide a good trade-off between accuracy and interpretability in this domain.
arXiv Detail & Related papers (2025-06-25T17:44:37Z)
- MaskTerial: A Foundation Model for Automated 2D Material Flake Detection [48.73213960205105]
We present a deep learning model, called MaskTerial, that uses an instance segmentation network to reliably identify 2D material flakes. The model is extensively pre-trained using a synthetic data generator that generates realistic microscopy images from unlabeled data. We demonstrate significant improvements over existing techniques in the detection of low-contrast materials such as hexagonal boron nitride.
arXiv Detail & Related papers (2024-12-12T15:01:39Z)
- Merging synthetic and real embryo data for advanced AI predictions [69.07284335967019]
We train two generative models using two datasets, one we created and made publicly available and one existing public dataset, to generate synthetic embryo images at various cell stages. These were combined with real images to train classification models for embryo cell stage prediction. Our results demonstrate that incorporating synthetic images alongside real data improved classification performance, with the model achieving 97% accuracy compared to 94.5% when trained solely on real data.
arXiv Detail & Related papers (2024-12-02T08:24:49Z) - Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- LUCYD: A Feature-Driven Richardson-Lucy Deconvolution Network [0.31402652384742363]
This paper proposes LUCYD, a novel method for the restoration of volumetric microscopy images.
LUCYD combines the Richardson-Lucy deconvolution formula and the fusion of deep features obtained by a fully convolutional network.
Our experiments indicate that LUCYD can significantly improve resolution, contrast, and overall quality of microscopy images.
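The LUCYD summary references the Richardson-Lucy deconvolution formula. As a rough illustration only, not the paper's code, the classical multiplicative update it builds on looks like this in one dimension:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50, eps=1e-12):
    """Classical Richardson-Lucy deconvolution, 1-D sketch.

    Iteratively refines the estimate via the multiplicative update
        estimate *= psf_mirrored (*) (observed / (psf (*) estimate)),
    where (*) denotes convolution. LUCYD fuses this update with
    learned deep features; only the classical part is shown here.
    """
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirrored = psf[::-1]  # adjoint of convolution with the PSF
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)  # eps avoids division by zero
        estimate *= np.convolve(ratio, psf_mirrored, mode="same")
    return estimate
```

The update preserves non-negativity, which is why the method is a standard baseline for microscopy image restoration.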
arXiv Detail & Related papers (2023-07-16T10:34:23Z)
- Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model, for which we employ Deep Learning Autoencoders, a type of Artificial Neural Network.
arXiv Detail & Related papers (2023-04-19T13:45:28Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.