CellCycleGAN: Spatiotemporal Microscopy Image Synthesis of Cell
Populations using Statistical Shape Models and Conditional GANs
- URL: http://arxiv.org/abs/2010.12011v2
- Date: Tue, 26 Jan 2021 19:10:28 GMT
- Title: CellCycleGAN: Spatiotemporal Microscopy Image Synthesis of Cell
Populations using Statistical Shape Models and Conditional GANs
- Authors: Dennis Bähr, Dennis Eschweiler, Anuk Bhattacharyya, Daniel
Moreno-Andrés, Wolfram Antonin and Johannes Stegmaier
- Abstract summary: We develop a new method for generating synthetic 2D+t image data of fluorescently labeled cellular nuclei.
We show the effect of the GAN conditioning and create a set of synthetic images that can be readily used for training cell segmentation and tracking approaches.
- Score: 0.07117593004982078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic analysis of spatio-temporal microscopy images is indispensable for
state-of-the-art research in the life sciences. Recent developments in deep
learning provide powerful tools for automatic analyses of such image data, but
heavily depend on the amount and quality of provided training data to perform
well. To this end, we developed a new method for realistic generation of
synthetic 2D+t microscopy image data of fluorescently labeled cellular nuclei.
The method combines spatiotemporal statistical shape models of different cell
cycle stages with a conditional GAN to generate time series of cell populations
and provides instance-level control of cell cycle stage and the fluorescence
intensity of generated cells. We show the effect of the GAN conditioning and
create a set of synthetic images that can be readily used for training and
benchmarking of cell segmentation and tracking approaches.
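The instance-level conditioning described in the abstract (per-cell control of cell cycle stage and fluorescence intensity) can be illustrated with a minimal conditional-generator sketch. This is not the authors' implementation; the stage count, latent dimensionality, patch size, and network layout are all hypothetical placeholders chosen for illustration:

```python
import torch
import torch.nn as nn

NUM_STAGES = 6   # hypothetical number of cell cycle stages
LATENT_DIM = 64  # hypothetical noise dimensionality

class ConditionalGenerator(nn.Module):
    """Minimal conditional generator: noise + per-cell condition -> 32x32 patch."""
    def __init__(self):
        super().__init__()
        # Condition vector = one-hot cell cycle stage + scalar fluorescence intensity.
        cond_dim = NUM_STAGES + 1
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + cond_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 32 * 32),
            nn.Tanh(),  # image intensities scaled to [-1, 1]
        )

    def forward(self, z, stage_onehot, intensity):
        # Concatenate the condition onto the latent code before generation,
        # so each synthetic nucleus can be steered individually.
        cond = torch.cat([stage_onehot, intensity.unsqueeze(1)], dim=1)
        out = self.net(torch.cat([z, cond], dim=1))
        return out.view(-1, 1, 32, 32)

# Generate a batch of 4 synthetic nucleus patches, all conditioned on
# stage index 2 with fluorescence intensity 0.8.
gen = ConditionalGenerator()
z = torch.randn(4, LATENT_DIM)
stage = torch.zeros(4, NUM_STAGES)
stage[:, 2] = 1.0
intensity = torch.full((4,), 0.8)
patches = gen(z, stage, intensity)
print(patches.shape)  # torch.Size([4, 1, 32, 32])
```

In the paper's pipeline, the condition would additionally be tied to spatiotemporal statistical shape models of the different cell cycle stages, so that stage transitions produce consistent shape changes over the time series; the sketch above shows only the conditioning mechanism itself.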
Related papers
- Practical Guidelines for Cell Segmentation Models Under Optical Aberrations in Microscopy [14.042884268397058]
This study evaluates cell image segmentation models under optical aberrations from fluorescence and bright field microscopy.
We train and test several segmentation models, including the Otsu threshold method and Mask R-CNN with different network heads.
Among the tested models, Cellpose 2.0 proves effective for complex cell images under such conditions.
arXiv Detail & Related papers (2024-04-12T15:45:26Z)
- Nondestructive, quantitative viability analysis of 3D tissue cultures using machine learning image segmentation [0.0]
We demonstrate an image processing algorithm for quantifying cellular viability in 3D cultures without the need for assay-based indicators.
We show that our algorithm performs similarly to a pair of human experts in whole-well images over a range of days and culture matrix compositions.
arXiv Detail & Related papers (2023-11-15T20:28:31Z)
- Mixed Models with Multiple Instance Learning [51.440557223100164]
We introduce MixMIL, a framework integrating Generalized Linear Mixed Models (GLMM) and Multiple Instance Learning (MIL).
Our empirical results reveal that MixMIL outperforms existing MIL models in single-cell datasets.
arXiv Detail & Related papers (2023-11-04T16:42:42Z)
- Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images [66.79688768141814]
We develop an automatic cell classification pipeline to label microscopy images.
We then train a classification model based on the category labels.
We deploy two types of segmentation models to segment cells with roundish and irregular shapes.
arXiv Detail & Related papers (2023-10-22T08:11:08Z)
- Learning multi-scale functional representations of proteins from single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess quality of protein representations across different scales of biological function.
arXiv Detail & Related papers (2022-05-24T00:00:07Z)
- Machine learning based lens-free imaging technique for field-portable cytometry [0.0]
Our proposed method achieves accuracy above 98% along with signal enhancement of more than 5 dB for most cell types.
The model adapts to new sample types within a few learning iterations and successfully classifies newly introduced samples.
arXiv Detail & Related papers (2022-03-02T07:09:29Z)
- Search for temporal cell segmentation robustness in phase-contrast microscopy videos [31.92922565397439]
In this work, we present a deep learning-based workflow to segment cancer cells embedded in 3D collagen matrices.
We also propose a geometrical-characterization approach to studying cancer cell morphology.
We introduce a new annotated dataset for 2D cell segmentation and tracking, and an open-source implementation to replicate the experiments or adapt them to new image processing problems.
arXiv Detail & Related papers (2021-12-16T12:03:28Z)
- Multi-StyleGAN: Towards Image-Based Simulation of Time-Lapse Live-Cell Microscopy [23.720106678247888]
We propose Multi-StyleGAN as a descriptive approach to simulate time-lapse fluorescence microscopy imagery of living cells.
This novel generative adversarial network synthesises a multi-domain sequence of consecutive timesteps.
The simulation captures underlying biophysical factors and time dependencies, such as cell morphology, growth, physical interactions, as well as the intensity of a fluorescent reporter protein.
arXiv Detail & Related papers (2021-06-15T16:51:16Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to autonomously correct these setup inconsistencies, improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture that allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
However, labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.