Generative modeling of living cells with SO(3)-equivariant implicit
neural representations
- URL: http://arxiv.org/abs/2304.08960v2
- Date: Thu, 12 Oct 2023 20:08:52 GMT
- Title: Generative modeling of living cells with SO(3)-equivariant implicit
neural representations
- Authors: David Wiesner, Julian Suk, Sven Dummer, Tereza Nečasová,
Vladimír Ulman, David Svoboda, Jelmer M. Wolterink
- Abstract summary: We propose to represent living cell shapes as level sets of signed distance functions (SDFs) which are estimated by neural networks.
We optimize a fully-connected neural network to provide an implicit representation of the SDF value at any point in a 3D+time domain.
We demonstrate the effectiveness of this approach on cells that exhibit rapid deformations (Platynereis dumerilii), cells that grow and divide (C. elegans), and cells that have growing and branching filopodial protrusions (A549 human lung carcinoma cells).
- Score: 2.146287726016005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven cell tracking and segmentation methods in biomedical imaging
require diverse and information-rich training data. In cases where the number
of training samples is limited, synthetic computer-generated data sets can be
used to improve these methods. This requires the synthesis of cell shapes as
well as corresponding microscopy images using generative models. To synthesize
realistic living cell shapes, the shape representation used by the generative
model should be able to accurately represent fine details and changes in
topology, which are common in cells. These requirements are not met by 3D voxel
masks, which are restricted in resolution, and polygon meshes, which do not
easily model processes like cell growth and mitosis. In this work, we propose
to represent living cell shapes as level sets of signed distance functions
(SDFs) which are estimated by neural networks. We optimize a fully-connected
neural network to provide an implicit representation of the SDF value at any
point in a 3D+time domain, conditioned on a learned latent code that is
disentangled from the rotation of the cell shape. We demonstrate the
effectiveness of this approach on cells that exhibit rapid deformations
(Platynereis dumerilii), cells that grow and divide (C. elegans), and cells
that have growing and branching filopodial protrusions (A549 human lung
carcinoma cells). A quantitative evaluation using shape features and Dice
similarity coefficients of real and synthetic cell shapes shows that our model
can generate topologically plausible complex cell shapes in 3D+time with high
similarity to real living cell shapes. Finally, we show how microscopy images
of living cells that correspond to our generated cell shapes can be synthesized
using an image-to-image model.
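To make the representation concrete, the sketch below shows a DeepSDF-style conditional MLP that maps an (x, y, z, t) coordinate and a latent shape code to an SDF value, followed by zero-level-set meshing with marching cubes. The layer sizes, latent dimension, and grid resolution are illustrative assumptions, not the authors' exact configuration, and the rotation-disentanglement (SO(3)) machinery is omitted.

```python
# Minimal sketch (not the authors' code): an MLP conditioned on a latent
# shape code, returning the signed distance at any 3D+time coordinate.
import torch
import torch.nn as nn

class ConditionalSDF(nn.Module):
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted signed distance
        )

    def forward(self, coords, z):
        # coords: (N, 4) points (x, y, z, t); z: (latent_dim,) shape code.
        z = z.unsqueeze(0).expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, z], dim=-1)).squeeze(-1)

# The cell surface at a given time is the zero level set of the SDF; for a
# trained model it can be meshed by sampling a grid and running marching cubes.
from skimage.measure import marching_cubes

model, z = ConditionalSDF(), torch.randn(64)   # z: one sampled synthetic cell
axes = [torch.linspace(-1, 1, 64)] * 3
grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
t = torch.full((grid.shape[0], 1), 0.5)        # fixed time point
with torch.no_grad():
    sdf = model(torch.cat([grid, t], -1), z).reshape(64, 64, 64).numpy()
verts, faces, _, _ = marching_cubes(sdf, level=0.0)  # needs a zero crossing
```

Because the same network can be queried at arbitrary (x, y, z, t), the resolution of the extracted shape is limited only by the sampling grid, which is what lets this representation sidestep the voxel-resolution and mesh-topology limitations the abstract mentions.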
Related papers
- Practical Guidelines for Cell Segmentation Models Under Optical Aberrations in Microscopy [14.042884268397058]
This study evaluates cell image segmentation models under optical aberrations from fluorescence and bright field microscopy.
We train and test several segmentation models, including the Otsu threshold method and Mask R-CNN with different network heads, and find that Cellpose 2.0 proves effective for complex cell images under these conditions.
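For orientation, the Otsu baseline mentioned above amounts to a single global threshold; a minimal scikit-image version (with a placeholder file name) is:

```python
# Otsu thresholding as a simple cell-segmentation baseline; "cells.tif"
# is a placeholder path, not data from the paper.
from skimage import io
from skimage.filters import threshold_otsu

img = io.imread("cells.tif")       # grayscale microscopy image
mask = img > threshold_otsu(img)   # binary foreground mask
```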
arXiv Detail & Related papers (2024-04-12T15:45:26Z)
- 3D Mitochondria Instance Segmentation with Spatio-Temporal Transformers [101.44668514239959]
We propose a hybrid encoder-decoder framework that efficiently computes spatial and temporal attentions in parallel.
We also introduce a semantic clutter-background adversarial loss during training that aids in delineating the regions of mitochondria instances from the background.
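The parallel spatio-temporal attention idea can be sketched as two self-attention branches over the same feature volume, one across spatial tokens within a frame and one across frames per token, fused residually; this illustrates the concept, not the paper's exact module:

```python
# Sketch of parallel spatial and temporal self-attention on features of
# shape (batch, time, tokens, channels); sizes are illustrative.
import torch
import torch.nn as nn

class ParallelSpatioTemporalAttention(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        b, t, n, c = x.shape
        s = x.reshape(b * t, n, c)                       # tokens within a frame
        s, _ = self.spatial(s, s, s)
        u = x.permute(0, 2, 1, 3).reshape(b * n, t, c)   # frames per token
        u, _ = self.temporal(u, u, u)
        u = u.reshape(b, n, t, c).permute(0, 2, 1, 3)
        return x + s.reshape(b, t, n, c) + u             # fuse both branches
```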
arXiv Detail & Related papers (2023-03-21T17:58:49Z)
- MiShape: 3D Shape Modelling of Mitochondria in Microscopy [65.7909757178576]
We propose an approach to bridge the gap by learning a shape prior for mitochondria termed as MiShape.
MiShape is a generative model learned using implicit representations of mitochondrial shapes.
We demonstrate the representation power of MiShape and its utility for 3D shape reconstruction given a single 2D fluorescence image or a small 3D stack of 2D slices.
arXiv Detail & Related papers (2023-03-02T19:21:21Z)
- Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects [52.46838926521572]
3D-aware generative models have demonstrated their superb performance to generate 3D neural radiance fields (NeRF) from a collection of monocular 2D images.
We propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations.
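Disentangling shape from appearance typically means conditioning density and color on separate latent codes, so that each can be varied independently; a minimal radiance-field sketch under that assumption (not the paper's architecture):

```python
# Sketch: density depends only on a shape code, color on an appearance code.
import torch
import torch.nn as nn

class DisentangledRadianceField(nn.Module):
    def __init__(self, shape_dim=64, app_dim=64, hidden=128):
        super().__init__()
        self.geometry = nn.Sequential(
            nn.Linear(3 + shape_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden + 1))               # features + density
        self.color = nn.Sequential(
            nn.Linear(hidden + app_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())          # RGB in [0, 1]

    def forward(self, xyz, z_shape, z_app):
        # xyz: (N, 3) sample points; z_shape, z_app: latent codes.
        h = self.geometry(torch.cat([xyz, z_shape.expand(xyz.shape[0], -1)], -1))
        sigma, feat = h[:, :1], h[:, 1:]
        rgb = self.color(torch.cat([feat, z_app.expand(xyz.shape[0], -1)], -1))
        return sigma, rgb
```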
arXiv Detail & Related papers (2022-09-09T08:44:06Z)
- Deep Learning Enabled Time-Lapse 3D Cell Analysis [7.094247258573337]
This paper presents a method for time-lapse 3D cell analysis.
We consider the problem of accurately localizing and quantitatively analyzing sub-cellular features.
The code is available on Github and the method is available as a service through the BisQue portal.
arXiv Detail & Related papers (2022-08-17T00:07:25Z)
- Implicit Neural Representations for Generative Modeling of Living Cell Shapes [3.84519093892967]
Deep generative models for cell shape synthesis require a light-weight and flexible representation of the cell shape.
In this work, we propose to use level sets of signed distance functions (SDFs) to represent cell shapes.
We optimize a neural network as an implicit neural representation of the SDF value at any point in a 3D+time domain.
Our results show that shape descriptors of synthetic cells resemble those of real cells, and that our model is able to generate topologically plausible sequences of complex cell shapes in 3D+time.
arXiv Detail & Related papers (2022-07-13T15:28:07Z)
- CellCentroidFormer: Combining Self-attention and Convolution for Cell Detection [4.555723508665994]
We propose a novel hybrid CNN-ViT model for cell detection in microscopy images.
Our centroid-based cell detection method represents cells as ellipses and is end-to-end trainable.
arXiv Detail & Related papers (2022-06-01T09:04:39Z)
- Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z)
- Enforcing Morphological Information in Fully Convolutional Networks to Improve Cell Instance Segmentation in Fluorescence Microscopy Images [1.408123603417833]
We propose a novel cell instance segmentation approach based on the well-known U-Net architecture.
To enforce the learning of morphological information per pixel, a deep distance transformer (DDT) acts as a backbone model.
The obtained results suggest a performance boost over traditional U-Net architectures.
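The per-pixel morphological target that such a model regresses can be built with an exact Euclidean distance transform; a toy example with an assumed square mask:

```python
# Distance-transform regression target for a binary instance mask;
# the mask here is a toy placeholder, not data from the paper.
import numpy as np
from scipy.ndimage import distance_transform_edt

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True               # toy cell instance
dist = distance_transform_edt(mask)     # distance to nearest background pixel
target = dist / dist.max()              # normalized per-pixel target
```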
arXiv Detail & Related papers (2021-06-10T15:54:38Z)
- Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations among the following ones: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
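The stochastic selection itself is simple: each ensemble member draws its activation layers at random from a pool; a sketch using a subset of the listed activations (layer sizes are illustrative):

```python
# Build ensemble members whose activation layers are drawn at random from
# a pool; only activations with built-in PyTorch modules are used here.
import random
import torch.nn as nn

ACTIVATIONS = [nn.ReLU, nn.LeakyReLU, nn.ELU, nn.SiLU, nn.Mish]  # SiLU == Swish

def make_member(in_dim=784, widths=(128, 64, 10)):
    layers = []
    for w in widths[:-1]:
        layers += [nn.Linear(in_dim, w), random.choice(ACTIVATIONS)()]
        in_dim = w
    layers.append(nn.Linear(in_dim, widths[-1]))
    return nn.Sequential(*layers)

ensemble = [make_member() for _ in range(5)]  # members differ in activations
```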
arXiv Detail & Related papers (2020-11-24T01:53:39Z)
- Neural Cellular Automata Manifold [84.08170531451006]
We show that the neural network architecture of the Neural Cellular Automata can be encapsulated in a larger NN.
This allows us to propose a new model that encodes a manifold of NCA, each of them capable of generating a distinct image.
In biological terms, our approach would play the role of the transcription factors, modulating the mapping of genes into specific proteins that drive cellular differentiation.
arXiv Detail & Related papers (2020-06-22T11:41:57Z)
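The Neural Cellular Automata entry above builds on the standard NCA update rule (perceive neighbors with fixed filters, update each cell with a small shared network, fire stochastically); a minimal single-step sketch with illustrative channel counts, not the paper's manifold-encoding architecture:

```python
# One update step of a Neural Cellular Automaton (Mordvintsev et al. style).
import torch
import torch.nn as nn
import torch.nn.functional as F

CH = 16  # state channels per grid cell

ident = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8
kernels = torch.stack([ident, sobel_x, sobel_x.t()])   # identity + gradients
kernels = kernels.repeat(CH, 1, 1).unsqueeze(1)        # (3*CH, 1, 3, 3)

update = nn.Sequential(nn.Conv2d(3 * CH, 128, 1), nn.ReLU(),
                       nn.Conv2d(128, CH, 1))          # per-cell update rule

def nca_step(state, fire_rate=0.5):
    # state: (B, CH, H, W) grid of cell states.
    y = F.conv2d(state, kernels, padding=1, groups=CH)  # depthwise perception
    ds = update(y)                                      # proposed state change
    mask = (torch.rand_like(state[:, :1]) < fire_rate).float()
    return state + ds * mask                            # stochastic update

state = torch.zeros(1, CH, 32, 32)
state[:, :, 16, 16] = 1.0   # seed cell in the middle of the grid
state = nca_step(state)
```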