A Diffusion Model Predicts 3D Shapes from 2D Microscopy Images
- URL: http://arxiv.org/abs/2208.14125v2
- Date: Wed, 31 Aug 2022 06:14:31 GMT
- Title: A Diffusion Model Predicts 3D Shapes from 2D Microscopy Images
- Authors: Dominik J. E. Waibel, Ernst Röell, Bastian Rieck, Raja Giryes, Carsten Marr
- Abstract summary: We introduce DISPR, a diffusion-based model for solving the inverse problem of 3D cell shape prediction from 2D microscopy images.
We demonstrate that diffusion models can be applied to inverse problems in 3D, and that they learn to reconstruct 3D shapes with realistic morphological features from 2D microscopy images.
- Score: 36.17295590429516
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models are a class of generative models that show superior
performance compared to other generative models in creating realistic images
when trained on natural image datasets. We introduce DISPR, a diffusion-based
model for solving the inverse problem of three-dimensional (3D) cell shape
prediction from two-dimensional (2D) single cell microscopy images. Using the
2D microscopy image as a prior, DISPR is conditioned to predict realistic 3D
shape reconstructions. To showcase the applicability of DISPR as a data
augmentation tool in a feature-based single cell classification task, we
extract morphological features from the cells grouped into six highly
imbalanced classes. Adding features from predictions of DISPR to the three
minority classes improved the macro F1 score from $F1_\text{macro} = 55.2 \pm
4.6\%$ to $F1_\text{macro} = 72.2 \pm 4.9\%$. With our method being the first
to employ a diffusion-based model in this context, we demonstrate that
diffusion models can be applied to inverse problems in 3D, and that they learn
to reconstruct 3D shapes with realistic morphological features from 2D
microscopy images.
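To make the conditioning mechanism concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' code) of how a diffusion model can be conditioned on a 2D image to denoise a 3D voxel volume: the 2D microscopy image is broadcast along the depth axis, concatenated to the noisy volume as an extra channel, and a toy denoiser predicts the noise for one DDPM-style reverse step. The class name, tensor shapes, conditioning-by-concatenation, and schedule values are all illustrative assumptions; DISPR's actual architecture and conditioning differ in detail.

```python
# Minimal sketch: a 3D denoiser conditioned on a 2D image prior.
# All names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class Tiny3DDenoiser(nn.Module):
    """Toy stand-in for a 3D U-Net: denoises a noisy volume given a 2D image."""

    def __init__(self):
        super().__init__()
        # Two input channels: the noisy 3D volume plus the broadcast 2D prior.
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, noisy_volume: torch.Tensor, image2d: torch.Tensor) -> torch.Tensor:
        # noisy_volume: (B, 1, D, H, W), image2d: (B, 1, H, W)
        depth = noisy_volume.shape[2]
        prior = image2d.unsqueeze(2).expand(-1, -1, depth, -1, -1)   # (B, 1, D, H, W)
        return self.net(torch.cat([noisy_volume, prior], dim=1))     # predicted noise


model = Tiny3DDenoiser()
beta_t = 0.02                          # illustrative noise-schedule value
alpha_t = 1.0 - beta_t
alpha_bar_t = alpha_t                  # single-step toy case; normally a cumulative product
x_t = torch.randn(1, 1, 32, 32, 32)    # current noisy 3D volume estimate
cond = torch.rand(1, 1, 32, 32)        # 2D microscopy image used as the prior
eps_hat = model(x_t, cond)
# One DDPM-style reverse step (posterior mean; noise term omitted for brevity).
x_prev = (x_t - beta_t / (1.0 - alpha_bar_t) ** 0.5 * eps_hat) / alpha_t ** 0.5
```

The reported $F1_\text{macro}$ scores average the per-class F1 values so that minority classes count equally; they can be computed with, for example, sklearn.metrics.f1_score(y_true, y_pred, average='macro').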
Related papers
- Zero-1-to-G: Taming Pretrained 2D Diffusion Model for Direct 3D Generation [66.75243908044538]
We introduce Zero-1-to-G, a novel approach to direct 3D generation on Gaussian splats using pretrained 2D diffusion models.
To incorporate 3D awareness, we introduce cross-view and cross-attribute attention layers, which capture complex correlations and enforce 3D consistency across generated splats.
This makes Zero-1-to-G the first direct image-to-3D generative model to effectively utilize pretrained 2D diffusion priors, enabling efficient training and improved generalization to unseen objects.
arXiv Detail & Related papers (2025-01-09T18:37:35Z)
- Diffusion priors for Bayesian 3D reconstruction from incomplete measurements [0.0]
We explore the use of diffusion models as priors combined with experimental data within a Bayesian framework.
We train diffusion models that generate coarse-grained 3D structures at a medium resolution and integrate these with incomplete and noisy experimental data.
We find that posterior sampling with diffusion model priors allows for 3D reconstruction from very sparse, low-resolution and partial observations.
arXiv Detail & Related papers (2024-12-19T14:28:00Z)
- DSplats: 3D Generation by Denoising Splats-Based Multiview Diffusion Models [67.50989119438508]
We introduce DSplats, a novel method that directly denoises multiview images using Gaussian-based Reconstructors to produce realistic 3D assets.
Our experiments demonstrate that DSplats not only produces high-quality, spatially consistent outputs, but also sets a new standard in single-image to 3D reconstruction.
arXiv Detail & Related papers (2024-12-11T07:32:17Z) - Cascaded Diffusion Models for 2D and 3D Microscopy Image Synthesis to Enhance Cell Segmentation [1.1454121287632515]
We propose a novel framework for synthesizing densely annotated 2D and 3D cell microscopy images.
Our method synthesizes 2D and 3D cell masks from sparse 2D annotations using multi-level diffusion models and NeuS, a 3D surface reconstruction approach.
We show that training a segmentation model with a combination of our synthetic data and real data improves cell segmentation performance by up to 9% across multiple datasets.
arXiv Detail & Related papers (2024-11-18T12:22:37Z) - 3D-VirtFusion: Synthetic 3D Data Augmentation through Generative Diffusion Models and Controllable Editing [52.68314936128752]
We propose a new paradigm to automatically generate 3D labeled training data by harnessing the power of pretrained large foundation models.
For each target semantic class, we first generate 2D images of a single object with varied structure and appearance via diffusion models and ChatGPT-generated text prompts.
We transform these augmented images into 3D objects and construct virtual scenes by random composition.
arXiv Detail & Related papers (2024-08-25T09:31:22Z) - GSD: View-Guided Gaussian Splatting Diffusion for 3D Reconstruction [52.04103235260539]
We present a diffusion model approach based on Gaussian Splatting representation for 3D object reconstruction from a single view.
The model learns to generate 3D objects represented by sets of GS ellipsoids.
The final reconstructed objects explicitly come with high-quality 3D structure and texture, and can be efficiently rendered in arbitrary views.
arXiv Detail & Related papers (2024-07-05T03:43:08Z) - HoloDiffusion: Training a 3D Diffusion Model using 2D Images [71.1144397510333]
We introduce a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision.
We show that our diffusion models are scalable, train robustly, and are competitive with existing approaches to 3D generative modeling in terms of sample quality and fidelity.
arXiv Detail & Related papers (2023-03-29T07:35:56Z) - Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models [52.529394863331326]
We propose a novel approach using two perpendicular pre-trained 2D diffusion models to solve the 3D inverse problem.
Our method is highly effective for 3D medical image reconstruction tasks, including MRI Z-axis super-resolution, compressed sensing MRI, and sparse-view CT.
arXiv Detail & Related papers (2023-03-15T08:28:06Z) - 3D Neural Field Generation using Triplane Diffusion [37.46688195622667]
We present an efficient diffusion-based model for 3D-aware generation of neural fields.
Our approach pre-processes training data, such as ShapeNet meshes, by converting them to continuous occupancy fields.
We demonstrate state-of-the-art results on 3D generation on several object classes from ShapeNet.
arXiv Detail & Related papers (2022-11-30T01:55:52Z) - DiffusionSDF: Conditional Generative Modeling of Signed Distance
Functions [42.015077094731815]
DiffusionSDF is a generative model for shape completion, single-view reconstruction, and reconstruction of real-scanned point clouds.
We use neural signed distance functions (SDFs) as our 3D representation to parameterize the geometry of various signals (e.g., point clouds, 2D images) through neural networks.
arXiv Detail & Related papers (2022-11-24T18:59:01Z) - Solving 3D Inverse Problems using Pre-trained 2D Diffusion Models [33.343489006271255]
Diffusion models have emerged as the new state-of-the-art generative models with high-quality samples.
We propose to augment the 2D diffusion prior with a model-based prior in the remaining direction at test time, such that one can achieve coherent reconstructions across all dimensions.
Our method can be run on a single commodity GPU and establishes a new state of the art.
arXiv Detail & Related papers (2022-11-19T10:32:21Z)