Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models
- URL: http://arxiv.org/abs/2303.08440v2
- Date: Fri, 1 Sep 2023 07:04:30 GMT
- Title: Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models
- Authors: Suhyeon Lee, Hyungjin Chung, Minyoung Park, Jonghyuk Park, Wi-Sun Ryu,
Jong Chul Ye
- Abstract summary: We propose a novel approach using two perpendicular pre-trained 2D diffusion models to solve the 3D inverse problem.
Our method is highly effective for 3D medical image reconstruction tasks, including MRI Z-axis super-resolution, compressed sensing MRI, and sparse-view CT.
- Score: 52.529394863331326
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Diffusion models have become a popular approach for image generation and
reconstruction due to their numerous advantages. However, most diffusion-based
inverse problem-solving methods only deal with 2D images, and even recently
published 3D methods do not fully exploit the 3D distribution prior. To address
this, we propose a novel approach using two perpendicular pre-trained 2D
diffusion models to solve the 3D inverse problem. By modeling the 3D data
distribution as a product of 2D distributions sliced in different directions,
our method effectively addresses the curse of dimensionality. Our experimental
results demonstrate that our method is highly effective for 3D medical image
reconstruction tasks, including MRI Z-axis super-resolution, compressed sensing
MRI, and sparse-view CT. Our method can generate high-quality voxel volumes
suitable for medical applications.
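The key idea above — modeling the 3D prior as a product of 2D distributions sliced in perpendicular directions — can be sketched numerically. In score-based terms, the log-density gradient of the product factorizes into per-slice 2D scores, so the two directional estimates can be combined at each reverse-diffusion step. This is a minimal sketch under assumptions: the function names, the (Z, Y, X) axis layout, and the simple averaging are illustrative, not the paper's exact conditioning scheme.

```python
import numpy as np

def perpendicular_score(volume, score_xy, score_xz):
    """Combine scores from two pre-trained 2D diffusion models applied
    along perpendicular slice directions of a 3D volume (Z, Y, X).

    Modeling the 3D prior as a product of 2D slice distributions makes
    the 3D log-density gradient decompose into per-slice 2D scores;
    averaging the two directional estimates keeps the scale of a
    single 2D model.
    """
    # Axial (XY) direction: apply the 2D model to each z-slice.
    s_xy = np.stack([score_xy(volume[z]) for z in range(volume.shape[0])], axis=0)
    # Perpendicular (XZ) direction: slice along the y-axis instead.
    s_xz = np.stack([score_xz(volume[:, y]) for y in range(volume.shape[1])], axis=1)
    return 0.5 * (s_xy + s_xz)
```

In an actual sampler this combined score would replace the single-model score inside each denoising update; here the two models see every voxel from two orthogonal views, which is what suppresses inter-slice inconsistency.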
Related papers
- Cross-D Conv: Cross-Dimensional Transferable Knowledge Base via Fourier Shifting Operation [3.69758875412828]
Cross-D Conv operation bridges the dimensional gap by learning the phase shifting in the Fourier domain.
Our method enables seamless weight transfer between 2D and 3D convolution operations, effectively facilitating cross-dimensional learning.
arXiv Detail & Related papers (2024-11-02T13:03:44Z)
- DiffusionBlend: Learning 3D Image Prior through Position-aware Diffusion Score Blending for 3D Computed Tomography Reconstruction [12.04892150473192]
We propose a novel framework that enables learning the 3D image prior through position-aware 3D-patch diffusion score blending.
Our algorithm also comes with better or comparable computational efficiency than previous state-of-the-art methods.
arXiv Detail & Related papers (2024-06-14T17:47:50Z)
- RadRotator: 3D Rotation of Radiographs with Diffusion Models [0.0]
We introduce a diffusion model-based technology that can rotate the anatomical content of any input radiograph in 3D space.
Similar to previous studies, we used CT volumes to create Digitally Reconstructed Radiographs (DRRs) as the training data for our model.
arXiv Detail & Related papers (2024-04-19T16:55:12Z)
- Wonder3D: Single Image to 3D using Cross-Domain Diffusion [105.16622018766236]
Wonder3D is a novel method for efficiently generating high-fidelity textured meshes from single-view images.
To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model.
arXiv Detail & Related papers (2023-10-23T15:02:23Z)
- HoloDiffusion: Training a 3D Diffusion Model using 2D Images [71.1144397510333]
We introduce a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision.
We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity to existing approaches for 3D generative modeling.
arXiv Detail & Related papers (2023-03-29T07:35:56Z)
- Solving 3D Inverse Problems using Pre-trained 2D Diffusion Models [33.343489006271255]
Diffusion models have emerged as the new state-of-the-art generative model with high quality samples.
We propose to augment the 2D diffusion prior with a model-based prior in the remaining direction at test time, such that one can achieve coherent reconstructions across all dimensions.
Our method runs on a single commodity GPU and establishes a new state of the art.
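The entry above augments per-slice 2D diffusion with a model-based prior in the remaining direction. The blurb does not name that prior; a common model-based choice is total variation (TV) along the uncovered z-axis, sketched here as an assumption via a few subgradient-descent steps on a fidelity-plus-TV objective.

```python
import numpy as np

def tv_z_prox(volume, lam=0.1, n_iter=10, step=0.1):
    """Approximate proximal step for total variation along the z-axis,
    the direction not covered by the 2D diffusion prior.

    Minimizes 0.5 * ||x - volume||^2 + lam * sum |x[k+1] - x[k]|
    by subgradient descent (a sketch, not an exact prox operator).
    """
    x = volume.copy()
    for _ in range(n_iter):
        dz = np.diff(x, axis=0)            # finite differences in z
        sign = np.sign(dz)
        grad = np.zeros_like(x)
        grad[:-1] -= sign                  # d|x[k+1]-x[k]| / dx[k]
        grad[1:] += sign                   # d|x[k+1]-x[k]| / dx[k+1]
        x -= step * ((x - volume) + lam * grad)  # fidelity + TV terms
    return x
```

Interleaving a step like this with the per-slice diffusion updates encourages smoothness across slices, which is what yields coherent reconstructions in the direction the 2D prior cannot see.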
arXiv Detail & Related papers (2022-11-19T10:32:21Z)
- DreamFusion: Text-to-3D using 2D Diffusion [52.52529213936283]
Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs.
In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis.
Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.
arXiv Detail & Related papers (2022-09-29T17:50:40Z)
- Inflating 2D Convolution Weights for Efficient Generation of 3D Medical Images [35.849240945334]
Two problems prevent effective training of a 3D medical generative model: 3D medical images are expensive to acquire and annotate, and a large number of parameters are involved in 3D convolution.
We propose a novel GAN model called 3D Split&Shuffle-GAN.
We show that our method leads to improved 3D image generation quality with significantly fewer parameters.
arXiv Detail & Related papers (2022-08-08T06:31:00Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
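The last entry combines a 2D slice VAE with a Gaussian model over slices. A minimal sketch of that sampling scheme, assuming (as an illustrative simplification, not the paper's exact model) a first-order autoregressive Gaussian over per-slice latent codes:

```python
import numpy as np

def sample_volume(decode_2d, n_slices, latent_dim, rho=0.9, rng=None):
    """Sample a 3D volume slice-by-slice: draw per-slice latents from a
    simple AR(1) Gaussian (correlation rho between neighboring slices),
    then decode each latent with a pre-trained 2D slice-VAE decoder."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(latent_dim)
    slices = [decode_2d(z)]
    for _ in range(n_slices - 1):
        # AR(1) transition keeps marginals N(0, I) while correlating slices.
        z = rho * z + np.sqrt(1.0 - rho**2) * rng.standard_normal(latent_dim)
        slices.append(decode_2d(z))
    return np.stack(slices, axis=0)
```

Correlated latents are what distinguish this from decoding independent slices: neighboring decoded slices vary smoothly, approximating 3D anatomical continuity without a 3D generator.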
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.