Diffusion Deformable Model for 4D Temporal Medical Image Generation
- URL: http://arxiv.org/abs/2206.13295v1
- Date: Mon, 27 Jun 2022 13:37:57 GMT
- Title: Diffusion Deformable Model for 4D Temporal Medical Image Generation
- Authors: Boah Kim, Jong Chul Ye
- Abstract summary: Temporal volume images with 3D+t (4D) information are often used in medical imaging to statistically analyze temporal dynamics or capture disease progression.
We present a novel deep learning model that generates intermediate temporal volumes between source and target volumes.
- Score: 47.03842361418344
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Temporal volume images with 3D+t (4D) information are often used in medical
imaging to statistically analyze temporal dynamics or capture disease
progression. Although deep-learning-based generative models for natural images
have been extensively studied, approaches for temporal medical image generation
such as 4D cardiac volume data are limited. In this work, we present a novel
deep learning model that generates intermediate temporal volumes between source
and target volumes. Specifically, we propose a diffusion deformable model (DDM)
by adapting the denoising diffusion probabilistic model that has recently been
widely investigated for realistic image generation. Our proposed DDM is
composed of the diffusion and the deformation modules so that DDM can learn
spatial deformation information between the source and target volumes and
provide a latent code for generating intermediate frames along a geodesic path.
Once our model is trained, the latent code estimated from the diffusion module
is simply interpolated and fed into the deformation module, which enables DDM
to generate temporal frames along the continuous trajectory while preserving
the topology of the source image. We demonstrate the proposed method with the
4D cardiac MR image generation between the diastolic and systolic phases for
each subject. Compared to the existing deformation methods, our DDM achieves
high performance on temporal volume generation.
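The inference path described in the abstract — interpolate the latent deformation code between source and target, then warp the source — can be sketched as follows. This is an illustrative stand-in that scales a plain dense displacement field by a factor gamma; it is not the paper's trained diffusion and deformation modules, and all function names here are hypothetical:

```python
import numpy as np

def warp_2d(image, flow):
    """Warp an image by a dense displacement field using bilinear interpolation.
    image: (H, W) array; flow: (2, H, W) per-pixel (dy, dx) displacements."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[0], 0, H - 1)  # sampling coordinates, clamped to the image
    sx = np.clip(xs + flow[1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0  # fractional weights for bilinear blending
    return ((1 - wy) * (1 - wx) * image[y0, x0] + (1 - wy) * wx * image[y0, x1]
            + wy * (1 - wx) * image[y1, x0] + wy * wx * image[y1, x1])

def intermediate_frames(source, flow, gammas):
    """Scale the full source-to-target displacement by gamma in [0, 1] to obtain
    frames along a continuous trajectory -- a toy stand-in for interpolating the
    latent code produced by the trained diffusion module."""
    return [warp_2d(source, g * flow) for g in gammas]
```

At gamma = 0 the frame is the source itself and at gamma = 1 it is the fully deformed (target-aligned) image; intermediate values trace the in-between frames, which is the behavior the abstract attributes to interpolating the latent code.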
Related papers
- Deformation-Recovery Diffusion Model (DRDM): Instance Deformation for Image Manipulation and Synthesis [13.629617915974531]
Deformation-Recovery Diffusion Model (DRDM) is a diffusion-based generative model built on deformation diffusion and deformation recovery.
DRDM is trained to learn to recover unreasonable deformation components, thereby restoring each randomly deformed image to a realistic distribution.
Experimental results in cardiac MRI and pulmonary CT show DRDM is capable of creating diverse, large (over 10% image size deformation scale) deformations.
arXiv Detail & Related papers (2024-07-10T01:26:48Z)
- 4Diffusion: Multi-view Video Diffusion Model for 4D Generation [55.82208863521353]
Current 4D generation methods have achieved noteworthy efficacy with the aid of advanced diffusion generative models.
We propose a novel 4D generation pipeline, namely 4Diffusion, aimed at generating spatial-temporally consistent 4D content from a monocular video.
arXiv Detail & Related papers (2024-05-31T08:18:39Z)
- TC-DiffRecon: Texture coordination MRI reconstruction method based on diffusion model and modified MF-UNet method [2.626378252978696]
We propose a novel diffusion model-based MRI reconstruction method, named TC-DiffRecon, which does not rely on a specific acceleration factor for training.
We also suggest the incorporation of the MF-UNet module, designed to enhance the quality of MRI images generated by the model.
arXiv Detail & Related papers (2024-02-17T13:09:00Z)
- Introducing Shape Prior Module in Diffusion Model for Medical Image Segmentation [7.7545714516743045]
We propose an end-to-end framework called VerseDiff-UNet, which leverages the denoising diffusion probabilistic model (DDPM).
Our approach integrates the diffusion model into a standard U-shaped architecture.
We evaluate our method on a single dataset of spine images acquired through X-ray imaging.
arXiv Detail & Related papers (2023-09-12T03:05:00Z)
- Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z)
- Dynamic Mode Decomposition in Adaptive Mesh Refinement and Coarsening Simulations [58.720142291102135]
Dynamic Mode Decomposition (DMD) is a powerful data-driven method used to extract coherent spatiotemporal modes.
This paper proposes a strategy that enables DMD to extract such modes from observations with different mesh topologies and dimensions.
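A minimal sketch of exact DMD on snapshot pairs, assuming the observations have already been brought onto a common representation (the function name and toy rank below are illustrative, not from the paper):

```python
import numpy as np

def dmd_modes(X, Y, rank):
    """Exact DMD: given snapshots X (states at time t) and Y (states at t+1),
    fit a rank-truncated linear operator A with Y ~= A X and return the
    eigenvalues (temporal dynamics) and modes (spatial structures)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]          # rank truncation
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)  # projected operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W       # lift eigenvectors back
    return eigvals, modes
```

Each eigenvalue's magnitude and phase encode the growth rate and oscillation frequency of the corresponding mode, which is what makes DMD useful for extracting coherent dynamics from simulation or imaging time series.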
arXiv Detail & Related papers (2021-04-28T22:14:25Z)
- Latent linear dynamics in spatiotemporal medical data [0.0]
We present an unsupervised model that identifies the underlying dynamics of the system, only based on the sequential images.
The model maps the input to a low-dimensional latent space wherein a linear relationship holds between a hidden state process and the observed latent process.
Knowledge of the system dynamics enables denoising, imputation of missing values and extrapolation of future image frames.
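The linear latent model described above — a hidden state evolving linearly, emitting an observed latent process — supports extrapolation of future frames by simply rolling the state forward. A toy sketch under that assumption (matrices A and C here are illustrative, not fitted by the paper's model):

```python
import numpy as np

def extrapolate_latents(A, C, z0, steps):
    """Roll a linear hidden-state process forward and emit observed latents:
    z_{t+1} = A @ z_t (hidden state), x_t = C @ z_t (observed latent)."""
    z, xs = z0, []
    for _ in range(steps):
        xs.append(C @ z)
        z = A @ z
    return np.stack(xs)
```

With a rotation matrix as A, the hidden state traces a periodic cycle, a crude analogue of periodic cardiac motion; decoding each extrapolated latent back to image space would then yield predicted future frames.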
arXiv Detail & Related papers (2021-03-01T11:42:21Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
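The key ingredient in the summary above is a Gaussian over per-slice latent codes that keeps adjacent slices consistent. A hypothetical sketch of that idea, using an AR(1)-style covariance as a stand-in for the fitted inter-slice Gaussian (the covariance form and names are assumptions, not the paper's):

```python
import numpy as np

def sample_slice_latents(n_slices, latent_dim, rho, rng):
    """Sample per-slice latent codes whose slice-to-slice correlation decays
    as rho**|i - j|, via the Cholesky factor of the covariance matrix."""
    idx = np.arange(n_slices)
    cov = rho ** np.abs(idx[:, None] - idx[None, :])  # AR(1)-style covariance
    L = np.linalg.cholesky(cov)                       # correlate independent noise
    return L @ rng.standard_normal((n_slices, latent_dim))
```

Decoding each correlated latent with the 2D slice VAE would then produce a stack of slices that vary smoothly along the through-plane axis instead of being sampled independently.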
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
- Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.