Conditional Diffusion Model for Longitudinal Medical Image Generation
- URL: http://arxiv.org/abs/2411.05860v1
- Date: Thu, 07 Nov 2024 06:44:47 GMT
- Title: Conditional Diffusion Model for Longitudinal Medical Image Generation
- Authors: Duy-Phuong Dao, Hyung-Jeong Yang, Jahae Kim,
- Abstract summary: Alzheimers disease progresses slowly and involves complex interaction between biological factors.
Longitudinal medical imaging data can capture this progression over time.
However, longitudinal data frequently encounter issues such as missing data due to patient dropouts, irregular follow-up intervals, and varying lengths of observation periods.
- Score: 4.649475179575047
- License:
- Abstract: Alzheimer's disease progresses slowly and involves complex interactions among various biological factors. Longitudinal medical imaging data can capture this progression over time. However, longitudinal data frequently encounter issues such as missing data due to patient dropouts, irregular follow-up intervals, and varying lengths of observation periods. To address these issues, we designed a diffusion-based model for 3D longitudinal medical image generation from a single magnetic resonance imaging (MRI) scan. This involves injecting a conditioning MRI and a time-visit encoding into the model, enabling control over the change between source and target images. The experimental results indicate that the proposed method generates higher-quality images than other competing methods.
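The abstract names two conditioning signals: a source (conditioning) MRI and a time-visit encoding that together steer the change from source to target. The paper's architecture is not detailed in this listing, so the PyTorch-style sketch below is only a rough illustration under assumed names and sizes: the source scan is concatenated channel-wise with the noisy target, and sinusoidal embeddings of the diffusion step and the inter-visit interval are added as a bias inside the denoiser.

```python
import torch
import torch.nn as nn


def sinusoidal_encoding(t: torch.Tensor, dim: int = 64) -> torch.Tensor:
    """Standard sinusoidal embedding, reused for both the diffusion timestep
    and the visit interval (e.g. months between source and target scans)."""
    half = dim // 2
    freqs = torch.exp(
        -torch.arange(half, dtype=torch.float32) * torch.log(torch.tensor(10000.0)) / half
    )
    angles = t.float()[:, None] * freqs[None, :]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


class ConditionalDenoiser3D(nn.Module):
    """Toy 3D denoiser: the noisy target MRI is concatenated with the source
    (conditioning) MRI along the channel axis, and embeddings of the diffusion
    step and the visit interval are injected as an additive bias. Names and
    sizes are illustrative assumptions, not taken from the paper."""

    def __init__(self, emb_dim: int = 64, width: int = 32):
        super().__init__()
        self.cond_mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, width), nn.SiLU(), nn.Linear(width, width)
        )
        self.stem = nn.Conv3d(2, width, kernel_size=3, padding=1)  # noisy target + source MRI
        self.body = nn.Sequential(
            nn.SiLU(),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv3d(width, 1, kernel_size=3, padding=1),  # predicted noise
        )

    def forward(self, x_noisy, x_source, diff_step, visit_interval):
        emb = torch.cat(
            [sinusoidal_encoding(diff_step), sinusoidal_encoding(visit_interval)], dim=-1
        )
        bias = self.cond_mlp(emb)[:, :, None, None, None]      # (B, width, 1, 1, 1)
        h = self.stem(torch.cat([x_noisy, x_source], dim=1))   # image-level conditioning
        return self.body(h + bias)                             # step/visit conditioning


# Example: one 64^3 volume, baseline scan as condition, 12-month target visit
model = ConditionalDenoiser3D()
eps_hat = model(
    torch.randn(1, 1, 64, 64, 64), torch.randn(1, 1, 64, 64, 64),
    diff_step=torch.tensor([500]), visit_interval=torch.tensor([12]),
)
print(eps_hat.shape)  # torch.Size([1, 1, 64, 64, 64])
```

Channel-wise concatenation plus an additive embedding is just one plausible way to inject these signals; feature-wise modulation or cross-attention would fit the same description, and the paper may use a different mechanism.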
Related papers
- Counterfactual MRI Data Augmentation using Conditional Denoising Diffusion Generative Models [0.0]
Deep learning models in medical imaging face challenges in generalizability and robustness due to variations in image acquisition parameters (IAP).
We introduce a novel method using conditional denoising diffusion generative models (cDDGMs) to generate counterfactual magnetic resonance (MR) images that simulate different IAP without altering patient anatomy.
arXiv Detail & Related papers (2024-10-31T11:29:41Z) - Individualized multi-horizon MRI trajectory prediction for Alzheimer's Disease [0.0]
We train a novel architecture to build a latent space distribution that can be sampled to generate predictions of future anatomical changes.
By comparing to several alternatives, we show that our model produces more individualized images with higher resolution.
arXiv Detail & Related papers (2024-08-04T13:09:06Z) - Temporal Evolution of Knee Osteoarthritis: A Diffusion-based Morphing Model for X-ray Medical Image Synthesis [6.014316825270666]
We introduce a novel deep-learning model designed to synthesize intermediate X-ray images between a patient's healthy knee and severe KOA stages.
During the testing phase, based on a healthy knee X-ray, the proposed model can produce a continuous and effective sequence of KOA X-ray images.
Our approach integrates diffusion and morphing modules, enabling the model to capture spatial morphing details between source and target knee X-ray images.
arXiv Detail & Related papers (2024-08-01T20:00:18Z) - Harnessing the power of longitudinal medical imaging for eye disease prognosis using Transformer-based sequence modeling [49.52787013516891]
Our proposed Longitudinal Transformer for Survival Analysis (LTSA) enables dynamic disease prognosis from longitudinal medical imaging.
A temporal attention analysis also suggested that, while the most recent image is typically the most influential, prior imaging still provides additional prognostic value.
arXiv Detail & Related papers (2024-05-14T17:15:28Z) - NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z) - Volumetric Reconstruction Resolves Off-Resonance Artifacts in Static and Dynamic PROPELLER MRI [76.60362295758596]
Off-resonance artifacts in magnetic resonance imaging (MRI) are visual distortions that occur when the actual resonant frequencies of spins within the imaging volume differ from the expected frequencies used to encode spatial information.
We propose to resolve these artifacts by lifting the 2D MRI reconstruction problem to 3D, introducing an additional "spectral" dimension to model this off-resonance.
arXiv Detail & Related papers (2023-11-22T05:44:51Z) - On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
However, DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - Explicit Temporal Embedding in Deep Generative Latent Models for Longitudinal Medical Image Synthesis [1.1339580074756188]
We propose a novel joint learning scheme to embed temporal dependencies in the latent space of GANs.
This allows us to synthesize continuous, smooth, and high-quality longitudinal volumetric data with limited supervision.
We show the effectiveness of our approach on three datasets containing different longitudinal dependencies.
arXiv Detail & Related papers (2023-01-13T10:31:27Z) - SADM: Sequence-Aware Diffusion Model for Longitudinal Medical Image Generation [23.174391810184602]
We propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images.
Our method extends the diffusion framework by introducing a sequence-aware transformer as the conditional module (a minimal sketch of this conditioning idea appears after this list).
Our experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods.
arXiv Detail & Related papers (2022-12-16T01:35:27Z) - Diffusion Deformable Model for 4D Temporal Medical Image Generation [47.03842361418344]
Temporal volume images with 3D+t (4D) information are often used in medical imaging to statistically analyze temporal dynamics or capture disease progression.
We present a novel deep learning model that generates intermediate temporal volumes between source and target volumes.
arXiv Detail & Related papers (2022-06-27T13:37:57Z) - Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)
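The SADM entry above mentions a sequence-aware transformer as the conditional module of a diffusion model; this is the sketch promised in that entry. The minimal, hypothetical example below is not the authors' code. It only illustrates the general idea: embed each prior scan, let a small transformer attend over the visit sequence while masking missing visits, and pool the result into one conditioning vector that a denoiser (such as the toy one sketched after the abstract) could consume.

```python
import torch
import torch.nn as nn


class SequenceConditioner(nn.Module):
    """Hypothetical sequence-aware conditioner: prior scans are embedded,
    processed by a small transformer encoder, and pooled into a single
    conditioning vector. Illustrative only, not the SADM implementation."""

    def __init__(self, in_channels: int = 1, d_model: int = 128, n_layers: int = 2):
        super().__init__()
        # Per-scan embedding: a crude strided 3D conv followed by global pooling
        self.embed = nn.Sequential(
            nn.Conv3d(in_channels, d_model, kernel_size=3, stride=2, padding=1),
            nn.SiLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, scans: torch.Tensor, missing: torch.Tensor) -> torch.Tensor:
        # scans: (B, T, C, D, H, W); missing: (B, T) bool, True where a visit is absent
        b, t = scans.shape[:2]
        feats = self.embed(scans.flatten(0, 1)).flatten(1).view(b, t, -1)  # (B, T, d_model)
        feats = self.transformer(feats, src_key_padding_mask=missing)
        keep = (~missing).float().unsqueeze(-1)
        # Average over observed visits -> one conditioning vector per subject
        return (feats * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)


# Example: 2 subjects, up to 3 prior 32^3 scans; subject 2 is missing visit 3
cond = SequenceConditioner()
scans = torch.randn(2, 3, 1, 32, 32, 32)
missing = torch.tensor([[False, False, False], [False, False, True]])
print(cond(scans, missing).shape)  # torch.Size([2, 128])
```

Masking absent visits in the attention step is one simple way to cope with patient dropouts and irregular follow-up; how SADM actually handles incomplete sequences is described in the paper itself, not here.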
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences arising from its use.