SADM: Sequence-Aware Diffusion Model for Longitudinal Medical Image
Generation
- URL: http://arxiv.org/abs/2212.08228v1
- Date: Fri, 16 Dec 2022 01:35:27 GMT
- Title: SADM: Sequence-Aware Diffusion Model for Longitudinal Medical Image
Generation
- Authors: Jee Seok Yoon, Chenghao Zhang, Heung-Il Suk, Jia Guo, Xiaoxiao Li
- Abstract summary: We propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images.
Our method extends diffusion models by introducing a sequence-aware transformer as the conditional module.
Our experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods.
- Score: 23.174391810184602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human organs constantly undergo anatomical changes due to a complex mix of
short-term (e.g., heartbeat) and long-term (e.g., aging) factors. Evidently,
prior knowledge of these factors will be beneficial when modeling their future
state, i.e., via image generation. However, most medical image generation
tasks rely only on a single input image, ignoring sequential dependencies even
when longitudinal data are available. Sequence-aware deep generative models,
whose input is a sequence of ordered, timestamped images, remain underexplored
in the medical imaging domain, which poses several unique challenges: 1)
sequences of varying lengths; 2) missing data or frames; and 3) high
dimensionality. To this end, we propose a
sequence-aware diffusion model (SADM) for the generation of longitudinal
medical images. Recently, diffusion models have shown promising results on
high-fidelity image generation. Our method extends this new technique by
introducing a sequence-aware transformer as the conditional module in a
diffusion model. The novel design enables learning longitudinal dependency even
with missing data during training and allows autoregressive generation of a
sequence of images during inference. Our extensive experiments on 3D
longitudinal medical images demonstrate the effectiveness of SADM compared with
baselines and alternative methods.
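The listing carries no code, but the central mechanism the abstract describes, a transformer that encodes an ordered and possibly incomplete scan sequence into a conditioning signal for a diffusion denoiser, can be illustrated. The following is a minimal sketch under our own assumptions; the module names, shapes, 2D images (the paper works with 3D volumes), and the key-padding-mask treatment of missing visits are ours, not the authors':

```python
import torch
import torch.nn as nn

class SequenceAwareCondition(nn.Module):
    """Encode an ordered, possibly incomplete image sequence into a single
    conditioning vector for a diffusion denoiser (illustrative only)."""

    def __init__(self, img_size=32, dim=128, heads=4, depth=2, max_len=8):
        super().__init__()
        self.frame_enc = nn.Sequential(            # shared per-frame encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.GELU(),
            nn.Flatten(),
            nn.Linear(32 * (img_size // 4) ** 2, dim),
        )
        self.order_emb = nn.Embedding(max_len, dim)  # visit-order embedding
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, depth)

    def forward(self, seq, missing):
        # seq: (B, T, 1, H, W) ordered scans; missing: (B, T) bool mask,
        # True where a visit is absent (zero-filled in `seq`).
        B, T = seq.shape[:2]
        tokens = self.frame_enc(seq.flatten(0, 1)).view(B, T, -1)
        tokens = tokens + self.order_emb(torch.arange(T, device=seq.device))
        # The padding mask lets training proceed despite missing frames.
        tokens = self.transformer(tokens, src_key_padding_mask=missing)
        keep = (~missing).float().unsqueeze(-1)      # pool observed frames
        return (tokens * keep).sum(1) / keep.sum(1).clamp(min=1.0)

# Hypothetical use inside a DDPM training step:
#   cond = SequenceAwareCondition()(past_scans, missing_mask)  # (B, dim)
#   eps_hat = denoiser(noisy_scan, timestep, cond)             # predict noise
```

The autoregressive inference the abstract mentions would, in this sketch, amount to appending each newly sampled scan to `seq`, clearing its entry in `missing`, and re-encoding before sampling the next timepoint.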
Related papers
- Conditional Diffusion Model for Longitudinal Medical Image Generation [4.649475179575047]
Alzheimer's disease progresses slowly and involves complex interactions between biological factors.
Longitudinal medical imaging data can capture this progression over time.
However, longitudinal data frequently encounter issues such as missing data due to patient dropouts, irregular follow-up intervals, and varying lengths of observation periods.
arXiv Detail & Related papers (2024-11-07T06:44:47Z)
- Optimizing Resource Consumption in Diffusion Models through Hallucination Early Detection [87.22082662250999]
We introduce HEaD (Hallucination Early Detection), a new paradigm designed to swiftly detect incorrect generations at the beginning of the diffusion process.
We demonstrate that using HEaD saves computational resources and accelerates generation of a complete image.
Our findings reveal that HEaD can save up to 12% of the generation time in a two-object scenario.
arXiv Detail & Related papers (2024-09-16T18:00:00Z)
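The HEaD abstract names the goal but not the detector; one plausible reading is a cheap check on the intermediate clean-image estimate early in the reverse process, restarting from fresh noise when it fails. The loop below is our own guess at that control flow, with a simplified DDIM-style update and a placeholder `looks_wrong` predicate; it is not the paper's algorithm:

```python
import torch

def sample_with_early_detection(denoiser, looks_wrong, shape, steps=50,
                                check_at=10, max_restarts=3):
    """Reverse-diffusion loop that aborts bad generations early.

    `denoiser(x, t)` predicts noise; `looks_wrong(x0_hat)` is any cheap
    detector (e.g., a small classifier). Both are placeholders.
    """
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    for _ in range(max_restarts):
        x = torch.randn(shape)
        aborted = False
        for i in reversed(range(steps)):
            eps = denoiser(x, i)
            ab = alphas_bar[i]
            x0_hat = (x - (1 - ab).sqrt() * eps) / ab.sqrt()
            # Early check: after only a few steps, decide whether to go on.
            if i == steps - 1 - check_at and looks_wrong(x0_hat):
                aborted = True
                break  # restart from fresh noise instead of finishing
            # Simplified deterministic (DDIM-style) step toward i - 1.
            ab_prev = alphas_bar[i - 1] if i > 0 else torch.tensor(1.0)
            x = ab_prev.sqrt() * x0_hat + (1 - ab_prev).sqrt() * eps
        if not aborted:
            return x
    return x  # give up and return the last (possibly aborted) attempt
```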
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys [99.7082441544384]
We present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning.
We use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression.
The resulting triples are then used to train a latent diffusion model for counterfactual biomedical image generation.
arXiv Detail & Related papers (2023-10-16T18:59:31Z)
- Explicit Temporal Embedding in Deep Generative Latent Models for Longitudinal Medical Image Synthesis [1.1339580074756188]
We propose a novel joint learning scheme to embed temporal dependencies in the latent space of GANs.
This allows us to synthesize continuous, smooth, and high-quality longitudinal volumetric data with limited supervision.
We show the effectiveness of our approach on three datasets containing different longitudinal dependencies.
arXiv Detail & Related papers (2023-01-13T10:31:27Z)
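The entry above names the idea (temporal dependencies embedded in the GAN latent space) without the mechanics. A common realization, and purely our assumption here, is to concatenate a learned embedding of the acquisition time to the subject's latent code, so that holding the code fixed and sweeping time yields a smooth longitudinal series:

```python
import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    """GAN generator conditioned on a continuous timepoint t in [0, 1]."""

    def __init__(self, z_dim=64, t_dim=16, img_size=32):
        super().__init__()
        self.t_embed = nn.Sequential(nn.Linear(1, t_dim), nn.Tanh())
        self.net = nn.Sequential(
            nn.Linear(z_dim + t_dim, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size), nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, z, t):
        # z: (B, z_dim) subject-specific code; t: (B, 1) scan time.
        h = torch.cat([z, self.t_embed(t)], dim=-1)
        return self.net(h).view(-1, 1, self.img_size, self.img_size)

# Holding z fixed while sweeping t synthesizes a smooth longitudinal
# series for one subject, the property such a scheme targets:
#   g = TemporalGenerator()
#   z = torch.randn(1, 64)
#   frames = [g(z, torch.tensor([[t]])) for t in torch.linspace(0, 1, 5)]
```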
- Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z)
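A "CNN-Transformer hybrid multi-image encoder" plausibly means a shared CNN backbone extracting one feature vector per study, with a transformer attending across the current and prior studies. The composition below is a rough sketch with sizes and names we chose for illustration; the actual BioViL-T architecture and its joint training with the text model are more involved:

```python
import torch
import torch.nn as nn

class MultiImageEncoder(nn.Module):
    """CNN features per study, transformer attention across studies."""

    def __init__(self, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                    # shared 2D backbone
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        layer = nn.TransformerEncoderLayer(dim, 4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, 1)

    def forward(self, current, prior):
        # current, prior: (B, 1, H, W); stack as a two-token sequence.
        feats = torch.stack([self.cnn(current), self.cnn(prior)], dim=1)
        fused = self.temporal(feats)                 # cross-study attention
        return fused[:, 0]                           # current-image embedding
```

In training, the resulting image embedding would be aligned with a report embedding from the text model, e.g., via a CLIP-style contrastive objective.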
- Multitask Brain Tumor Inpainting with Diffusion Models: A Methodological Report [0.0]
Inpainting algorithms are a subset of DL generative models that can alter one or more regions of an input image.
The performance of these algorithms is frequently suboptimal due to their limited output variety.
Denoising diffusion probabilistic models (DDPMs) are a recently introduced family of generative networks that can generate results of comparable quality to GANs.
arXiv Detail & Related papers (2022-10-21T17:13:14Z)
- Diffusion Deformable Model for 4D Temporal Medical Image Generation [47.03842361418344]
Temporal volume images with 3D+t (4D) information are often used in medical imaging to statistically analyze temporal dynamics or capture disease progression.
We present a novel deep learning model that generates intermediate temporal volumes between source and target volumes.
arXiv Detail & Related papers (2022-06-27T13:37:57Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- RADIOGAN: Deep Convolutional Conditional Generative Adversarial Network To Generate PET Images [3.947298454012977]
We propose a deep convolutional conditional generative adversarial network to generate MIP positron emission tomography (PET) images.
The advantage of our method is that a single model, trained on a small sample size per class, can generate different classes of lesions.
In addition, we show that a walk through the latent space can be used as a tool to evaluate the generated images.
arXiv Detail & Related papers (2020-03-19T10:14:40Z)
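The "walk through the latent space" used for evaluation in the RADIOGAN entry is, in its generic form, latent interpolation: decode evenly spaced points between two codes and check that the outputs change smoothly rather than jumping abruptly, which would hint at memorization. A generic sketch (any trained generator works; linear interpolation is our choice):

```python
import torch

def latent_walk(generator, z_start, z_end, steps=8):
    """Decode evenly spaced points between two latent codes.

    Smooth, semantically gradual outputs suggest the generator learned a
    well-structured latent space; abrupt jumps hint at memorization.
    """
    images = []
    for a in torch.linspace(0.0, 1.0, steps):
        z = (1 - a) * z_start + a * z_end   # linear interpolation
        with torch.no_grad():
            images.append(generator(z))
    return torch.stack(images)
```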
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.