Multi-StyleGAN: Towards Image-Based Simulation of Time-Lapse Live-Cell
Microscopy
- URL: http://arxiv.org/abs/2106.08285v1
- Date: Tue, 15 Jun 2021 16:51:16 GMT
- Title: Multi-StyleGAN: Towards Image-Based Simulation of Time-Lapse Live-Cell
Microscopy
- Authors: Tim Prangemeier, Christoph Reich, Christian Wildner and Heinz Koeppl
- Abstract summary: We propose Multi-StyleGAN as a descriptive approach to simulate time-lapse fluorescence microscopy imagery of living cells.
This novel generative adversarial network synthesises a multi-domain sequence of consecutive timesteps.
The simulation captures underlying biophysical factors and time dependencies, such as cell morphology, growth, physical interactions, as well as the intensity of a fluorescent reporter protein.
- Score: 23.720106678247888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time-lapse fluorescent microscopy (TLFM) combined with predictive
mathematical modelling is a powerful tool to study the inherently dynamic
processes of life on the single-cell level. Such experiments are costly,
complex and labour intensive. A complementary approach, and a step towards
completely in silico experiments, is to synthesise the imagery itself. Here, we
propose Multi-StyleGAN as a descriptive approach to simulate time-lapse
fluorescence microscopy imagery of living cells, based on a past experiment.
This novel generative adversarial network synthesises a multi-domain sequence
of consecutive timesteps. We showcase Multi-StyleGAN on imagery of multiple
live yeast cells in microstructured environments and train on a dataset
recorded in our laboratory. The simulation captures underlying biophysical
factors and time dependencies, such as cell morphology, growth, physical
interactions, as well as the intensity of a fluorescent reporter protein. An
immediate application is to generate additional training and validation data
for feature extraction algorithms or to aid and expedite development of
advanced experimental techniques such as online monitoring or control of cells.
Code and dataset are available at
https://git.rwth-aachen.de/bcs/projects/tp/multi-stylegan.
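The core output structure described in the abstract, one latent code mapped to a multi-domain sequence of consecutive timesteps (e.g. a brightfield and a fluorescence channel per step), can be sketched with a toy generator. This is a minimal illustration of the output tensor layout only, not the authors' StyleGAN-based architecture; all sizes and names are hypothetical.

```python
import numpy as np

# Toy stand-in for a multi-domain sequence generator (hypothetical,
# not Multi-StyleGAN itself): one latent code z produces images for
# every (timestep, domain) pair, here 3 timesteps x 2 domains
# (brightfield, fluorescent reporter) at a tiny 8x8 resolution.
T, DOMAINS, H, W = 3, 2, 8, 8
LATENT_DIM = 16

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(LATENT_DIM, T * DOMAINS * H * W))

def generate(z: np.ndarray) -> np.ndarray:
    """Map a latent code to a (timesteps, domains, height, width) sequence."""
    flat = np.tanh(z @ weights)          # tanh bounds pixel values to [-1, 1]
    return flat.reshape(T, DOMAINS, H, W)

z = rng.normal(size=LATENT_DIM)
seq = generate(z)
print(seq.shape)  # (3, 2, 8, 8): 3 consecutive timesteps, 2 imaging domains
```

Because all timesteps and domains come from the same latent code, the sketch mirrors the key property of the simulated imagery: the channels and timesteps are jointly generated rather than synthesised independently.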
Related papers
- Generating Multi-Modal and Multi-Attribute Single-Cell Counts with CFGen [76.02070962797794]
We present Cell Flow for Generation, a flow-based conditional generative model for multi-modal single-cell counts.
Our results suggest improved recovery of crucial biological data characteristics while accounting for novel generative tasks.
arXiv Detail & Related papers (2024-07-16T14:05:03Z)
- Practical Guidelines for Cell Segmentation Models Under Optical Aberrations in Microscopy [14.042884268397058]
This study evaluates cell image segmentation models under optical aberrations from fluorescence and bright field microscopy.
We train and test several segmentation models, including the Otsu threshold method and Mask R-CNN with different network heads.
In contrast to these, Cellpose 2.0 proves effective for complex cell images under similar conditions.
arXiv Detail & Related papers (2024-04-12T15:45:26Z)
- RigLSTM: Recurrent Independent Grid LSTM for Generalizable Sequence Learning [75.61681328968714]
We propose recurrent independent Grid LSTM (RigLSTM) to exploit the underlying modular structure of the target task.
Our model adopts cell selection, input feature selection, hidden state selection, and soft state updating to achieve a better generalization ability.
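Of the mechanisms listed above, soft state updating is the easiest to make concrete: instead of overwriting a hidden state with its new candidate, a gate blends the two. The sketch below is an illustrative interpretation with hypothetical names and sizes, not RigLSTM's actual implementation.

```python
import numpy as np

def soft_update(h_prev: np.ndarray, h_cand: np.ndarray,
                alpha: np.ndarray) -> np.ndarray:
    """Blend the previous hidden state with a candidate state.

    alpha in (0, 1) acts as a per-unit gate: alpha = 1 fully adopts the
    candidate, alpha = 0 keeps the old state unchanged.
    """
    return alpha * h_cand + (1.0 - alpha) * h_prev

h_prev = np.zeros(4)          # old hidden state
h_cand = np.ones(4)           # candidate computed at this timestep
alpha = np.full(4, 0.25)      # gate values (would be learned in practice)
print(soft_update(h_prev, h_cand, alpha))  # [0.25 0.25 0.25 0.25]
```

The same interpolation pattern underlies the update gates of GRUs and LSTMs; the selection mechanisms then decide which cells participate in an update at all.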
arXiv Detail & Related papers (2023-11-03T07:40:06Z)
- SynopSet: Multiscale Visual Abstraction Set for Explanatory Analysis of DNA Nanotechnology Simulations [60.05887213349294]
We propose a new abstraction set (SynopSet) that has a continuum of visual representations for the explanatory analysis of molecular dynamics simulations (MDS) in the DNA nanotechnology domain.
This set is also designed to be capable of showing all spatial and temporal details, and all structural complexity.
We have shown that our set of representations can be systematically located in a visualization space, dubbed SynopSpace.
arXiv Detail & Related papers (2022-04-18T06:53:52Z)
- Search for temporal cell segmentation robustness in phase-contrast microscopy videos [31.92922565397439]
In this work, we present a deep learning-based workflow to segment cancer cells embedded in 3D collagen matrices.
We also propose a geometrical-characterization approach to studying cancer cell morphology.
We introduce a new annotated dataset for 2D cell segmentation and tracking, and an open-source implementation to replicate the experiments or adapt them to new image processing problems.
arXiv Detail & Related papers (2021-12-16T12:03:28Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep Bayesian active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
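The general idea of a latent-space acquisition function can be sketched as follows. This is an illustrative proxy only, not the paper's LIG formula: assuming the latent predictive distribution at each candidate input is approximately Gaussian, its differential entropy grows with the latent variance, so querying the candidate with the highest latent entropy targets the most informative simulation run. All names and data are hypothetical.

```python
import numpy as np

def latent_entropy(latent_samples: np.ndarray) -> float:
    """Differential entropy of a 1-D Gaussian fitted to latent samples.

    H = 0.5 * log(2 * pi * e * sigma^2); monotone in the sample variance,
    so higher latent spread means a more informative candidate.
    """
    var = latent_samples.var(ddof=1)
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

# Latent samples drawn at two candidate inputs (toy numbers).
candidates = {
    "x1": np.array([0.10, 0.10, 0.10, 0.12]),  # low latent uncertainty
    "x2": np.array([0.00, 0.50, -0.40, 0.90]), # high latent uncertainty
}
best = max(candidates, key=lambda k: latent_entropy(candidates[k]))
print(best)  # x2: the high-uncertainty candidate is queried next
```

Acquisition in latent space, rather than output space, is the design choice the summary highlights: the score is computed on the model's internal representation instead of the simulated observable.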
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Generation and Simulation of Yeast Microscopy Imagery with Deep Learning [0.0]
Time-lapse fluorescence microscopy (TLFM) is an important tool in synthetic biological research.
This thesis is a study towards deep learning-based modeling of TLFM experiments on the image level.
arXiv Detail & Related papers (2021-03-22T13:30:24Z)
- CellCycleGAN: Spatiotemporal Microscopy Image Synthesis of Cell Populations using Statistical Shape Models and Conditional GANs [0.07117593004982078]
We develop a new method for generation of synthetic 2D+t image data of fluorescently labeled cellular nuclei.
We show the effect of the GAN conditioning and create a set of synthetic images that can be readily used for training cell segmentation and tracking approaches.
arXiv Detail & Related papers (2020-10-22T20:02:41Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Data-Driven Discovery of Molecular Photoswitches with Multioutput Gaussian Processes [51.17758371472664]
Photoswitchable molecules display two or more isomeric forms that may be accessed using light.
We present a data-driven discovery pipeline for molecular photoswitches underpinned by dataset curation and multitask learning.
We validate our proposed approach experimentally by screening a library of commercially available photoswitchable molecules.
arXiv Detail & Related papers (2020-06-28T20:59:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.