Implicit Neural Representations for Generative Modeling of Living Cell
Shapes
- URL: http://arxiv.org/abs/2207.06283v1
- Date: Wed, 13 Jul 2022 15:28:07 GMT
- Title: Implicit Neural Representations for Generative Modeling of Living Cell
Shapes
- Authors: David Wiesner, Julian Suk, Sven Dummer, David Svoboda, Jelmer M.
Wolterink
- Abstract summary: Deep generative models for cell shape synthesis require a light-weight and flexible representation of the cell shape.
In this work, we propose to use level sets of signed distance functions (SDFs) to represent cell shapes.
We optimize a neural network as an implicit neural representation of the SDF value at any point in a 3D+time domain.
Our results show that shape descriptors of synthetic cells resemble those of real cells, and that our model is able to generate topologically plausible sequences of complex cell shapes in 3D+time.
- Score: 3.84519093892967
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Methods allowing the synthesis of realistic cell shapes could help generate
training data sets to improve cell tracking and segmentation in biomedical
images. Deep generative models for cell shape synthesis require a light-weight
and flexible representation of the cell shape. However, commonly used
voxel-based representations are unsuitable for high-resolution shape synthesis,
and polygon meshes have limitations when modeling topology changes such as cell
growth or mitosis. In this work, we propose to use level sets of signed
distance functions (SDFs) to represent cell shapes. We optimize a neural
network as an implicit neural representation of the SDF value at any point in a
3D+time domain. The model is conditioned on a latent code, thus allowing the
synthesis of new and unseen shape sequences. We validate our approach
quantitatively and qualitatively on C. elegans cells that grow and divide, and
lung cancer cells with growing complex filopodial protrusions. Our results show
that shape descriptors of synthetic cells resemble those of real cells, and
that our model is able to generate topologically plausible sequences of complex
cell shapes in 3D+time.
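The latent-conditioned implicit representation described in the abstract can be sketched as a small coordinate network that maps a 3D+time point plus a latent shape code to a scalar signed-distance value. The layer sizes, depth, and plain-NumPy forward pass below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

class ImplicitSDF:
    """Minimal sketch of a latent-conditioned implicit neural representation.

    Maps a coordinate (x, y, z, t) concatenated with a latent shape code to
    a signed-distance value. The two-layer MLP and its sizes are assumptions
    for illustration, not the paper's trained model.
    """

    def __init__(self, latent_dim=8, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 4 + latent_dim  # (x, y, z, t) + latent code
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, coords, z):
        # coords: (N, 4) points in the 3D+time domain; z: (latent_dim,) code
        x = np.concatenate([coords, np.tile(z, (coords.shape[0], 1))], axis=1)
        h = np.maximum(x @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        return (h @ self.W2 + self.b2).ravel()      # one SDF value per point

# The zero level set {p : f(p, z) = 0} is the cell surface at time t;
# sampling different latent codes z yields new, unseen shape sequences.
sdf = ImplicitSDF()
points = np.array([[0.1, 0.2, 0.3, 0.0],
                   [0.5, 0.5, 0.5, 1.0]])
values = sdf(points, np.zeros(8))
print(values.shape)  # one signed-distance value per query point
```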
Related papers
- Generative 3D Cardiac Shape Modelling for In-Silico Trials [0.0]
We propose a deep learning method to model and generate synthetic aortic shapes.
The network is trained on a dataset of aortic root meshes reconstructed from CT images.
By sampling from the learned embedding vectors, we can generate novel shapes that resemble real patient anatomies.
arXiv Detail & Related papers (2024-09-24T12:59:18Z)
- An End-to-End Deep Learning Generative Framework for Refinable Shape Matching and Generation [45.820901263103806]

Generative modelling for shapes is a prerequisite for In-Silico Clinical Trials (ISCTs).
We develop a novel unsupervised geometric deep-learning model to establish refinable shape correspondences in a latent space.
We extend our proposed base model to a joint shape generative-clustering multi-atlas framework to incorporate further variability.
arXiv Detail & Related papers (2024-03-10T21:33:53Z)
- The cell signaling structure function [0.16060719742433224]
Live cell microscopy captures 5-D $(x, y, z, channel, time)$ movies that display patterns of cellular motion and signaling dynamics.
We present an approach to finding patterns of cell signaling dynamics in 5-D live cell movies that is unique in requiring no a priori knowledge of expected pattern dynamics and no training data.
arXiv Detail & Related papers (2024-01-04T19:25:00Z)
- Generative modeling of living cells with SO(3)-equivariant implicit neural representations [2.146287726016005]
We propose to represent living cell shapes as level sets of signed distance functions (SDFs) which are estimated by neural networks.
We optimize a fully-connected neural network to provide an implicit representation of the SDF value at any point in a 3D+time domain.
We demonstrate the effectiveness of this approach on cells that exhibit rapid deformations (Platynereis dumerilii), cells that grow and divide (C. elegans), and cells that have growing and branching filopodial protrusions (A549 human lung carcinoma cells).
arXiv Detail & Related papers (2023-04-18T12:51:18Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
We may jointly train an encoder network to learn a latent space for inverting shapes, allowing us to enable a rich variety of whole-shape and region-aware shape manipulations.
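The coarse/detail decomposition of a truncated SDF can be illustrated with a toy sphere TSDF and a single-level separable 3D wavelet transform. The Haar filter below is a simplifying stand-in for the paper's multi-scale biorthogonal wavelets, chosen only to keep the sketch self-contained:

```python
import numpy as np

def tsdf_sphere(n=16, radius=0.5, trunc=0.2):
    # Truncated signed distance function of a sphere sampled on an n^3 grid.
    axis = np.linspace(-1.0, 1.0, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    sdf = np.sqrt(x**2 + y**2 + z**2) - radius
    return np.clip(sdf, -trunc, trunc)

def haar_step(v, axis):
    # One orthonormal Haar analysis step along `axis`:
    # pairwise average (coarse) and pairwise difference (detail).
    lo = v.take(range(0, v.shape[axis], 2), axis=axis)
    hi = v.take(range(1, v.shape[axis], 2), axis=axis)
    return (lo + hi) / np.sqrt(2.0), (lo - hi) / np.sqrt(2.0)

def wavelet3d(volume):
    # Separable single-level 3D transform: yields one coarse ("aaa") volume
    # and seven detail volumes, mirroring the coarse/detail coefficient pair.
    bands = {"": volume}
    for axis in range(3):
        new = {}
        for key, v in bands.items():
            c, d = haar_step(v, axis)
            new[key + "a"] = c
            new[key + "d"] = d
        bands = new
    return bands

vol = tsdf_sphere()
bands = wavelet3d(vol)
coarse = bands["aaa"]                      # low-frequency approximation
details = [v for k, v in bands.items() if k != "aaa"]
print(coarse.shape, len(details))          # half-resolution coarse + 7 details
```

Because the Haar steps are orthonormal, the transform preserves the total energy of the TSDF volume, which makes the decomposition easy to sanity-check.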
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
- Conformal Isometry of Lie Group Representation in Recurrent Network of Grid Cells [52.425628028229156]
We study the properties of grid cells using recurrent network models.
We focus on a simple non-linear recurrent model that underlies the continuous attractor neural networks of grid cells.
arXiv Detail & Related papers (2022-10-06T05:26:49Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
- Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects [52.46838926521572]
3D-aware generative models have demonstrated their superb performance to generate 3D neural radiance fields (NeRF) from a collection of monocular 2D images.
We propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations.
arXiv Detail & Related papers (2022-09-09T08:44:06Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Enforcing Morphological Information in Fully Convolutional Networks to Improve Cell Instance Segmentation in Fluorescence Microscopy Images [1.408123603417833]
We propose a novel cell instance segmentation approach based on the well-known U-Net architecture.
To enforce the learning of morphological information per pixel, a deep distance transformer (DDT) acts as a backbone model.
The obtained results suggest a performance boost over traditional U-Net architectures.
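The per-pixel morphological target that such a distance-based model regresses can be sketched with a brute-force Euclidean distance transform on a toy binary mask. This illustrates only the distance map itself, not the paper's U-Net/DDT network:

```python
import numpy as np

def distance_transform(mask):
    # Brute-force Euclidean distance transform: for every foreground pixel,
    # the distance to the nearest background pixel. O(n^2) and only suitable
    # for tiny toy masks; a stand-in to show the regression target.
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    out = np.zeros(mask.shape)
    for i, j in fg:
        out[i, j] = np.sqrt(((bg - (i, j)) ** 2).sum(axis=1)).min()
    return out

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True            # a 3x3 square "cell"
dist = distance_transform(mask)
print(dist[3, 3])                # center pixel is farthest from background
```

Pixels deep inside a cell get large values and boundary pixels get small ones, which is the per-pixel morphological signal the summary refers to.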
arXiv Detail & Related papers (2021-06-10T15:54:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.