Shape-guided Conditional Latent Diffusion Models for Synthesising Brain Vasculature
- URL: http://arxiv.org/abs/2308.06781v1
- Date: Sun, 13 Aug 2023 14:27:28 GMT
- Title: Shape-guided Conditional Latent Diffusion Models for Synthesising Brain Vasculature
- Authors: Yash Deo, Haoran Dou, Nishant Ravikumar, Alejandro F. Frangi, Toni Lassila
- Abstract summary: The Circle of Willis (CoW) is the part of cerebral vasculature responsible for delivering blood to the brain.
We propose a novel generative approach utilising a conditional latent diffusion model with shape and anatomical guidance to generate realistic 3D CoW segmentations.
- Score: 47.59734181424857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Circle of Willis (CoW) is the part of cerebral vasculature responsible
for delivering blood to the brain. Understanding the diverse anatomical
variations and configurations of the CoW is paramount to advance research on
cerebrovascular diseases and refine clinical interventions. However,
comprehensive investigation of less prevalent CoW variations remains
challenging because of the dominance of a few commonly occurring
configurations. We propose a novel generative approach utilising a conditional
latent diffusion model with shape and anatomical guidance to generate realistic
3D CoW segmentations, including different phenotypical variations. Our
conditional latent diffusion model incorporates shape guidance to better
preserve vessel continuity and demonstrates superior performance when compared
to alternative generative models, including conditional variants of 3D GAN and
3D VAE. Our model generates CoW variants that are more realistic and show
higher visual fidelity than competing approaches, achieving an FID score 53%
better than the best-performing GAN-based model.
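The paper's actual architecture is not reproduced in this listing. As an illustration only, conditional latent diffusion sampling of the kind the abstract describes can be sketched as a guided reverse-diffusion loop over a latent volume. Everything below is hypothetical: the function names, the linear noise schedule, the classifier-free-style mixing of conditional and unconditional noise predictions, and the latent shape are assumptions, not the authors' method, which may implement shape guidance differently.

```python
import numpy as np

def sample_cow_latent(eps_model, shape_cond, T=50, latent_shape=(4, 8, 8, 8),
                      guidance_scale=2.0, seed=None):
    """Sketch of guided reverse diffusion in latent space.

    eps_model(z, t, cond) is a stand-in for a trained noise-prediction
    network; passing cond=None requests an unconditional prediction.
    """
    rng = np.random.default_rng(seed)
    # Hypothetical linear noise schedule (the listing specifies none).
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    z = rng.standard_normal(latent_shape)  # start from pure latent noise
    for t in reversed(range(T)):
        # Classifier-free-style guidance: blend conditional and
        # unconditional noise estimates to strengthen shape conditioning.
        eps_c = eps_model(z, t, shape_cond)
        eps_u = eps_model(z, t, None)
        eps = eps_u + guidance_scale * (eps_c - eps_u)
        # DDPM posterior mean for the previous latent.
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add noise except at the final step
            z = z + np.sqrt(betas[t]) * rng.standard_normal(latent_shape)
    return z  # a latent decoder would then map z to a 3D CoW segmentation
```

In a latent diffusion pipeline, the returned latent would be passed through the decoder of a pretrained autoencoder to produce the 3D segmentation volume.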
Related papers
- Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images [5.395912799904941]
Variations in CT scanner models and acquisition protocols introduce significant variability into the extracted radiomic features.
LTDiff++ is a multiscale latent diffusion model designed to enhance feature extraction in medical imaging.
arXiv Detail & Related papers (2024-10-05T02:13:57Z)
- Similarity-aware Syncretic Latent Diffusion Model for Medical Image Translation with Representation Learning [15.234393268111845]
Non-contrast CT (NCCT) imaging may reduce image contrast and anatomical visibility, potentially increasing diagnostic uncertainty.
We propose a novel Syncretic generative model based on the latent diffusion model for medical image translation (S$2$LDM)
S$2$LDM enhances the similarity in distinct modal images via syncretic encoding and diffusing, promoting amalgamated information in the latent space and generating medical images with more details in contrast-enhanced regions.
arXiv Detail & Related papers (2024-06-20T03:54:41Z)
- Diffusion Models in Low-Level Vision: A Survey [82.77962165415153]
Diffusion model-based solutions are widely acclaimed for their ability to produce samples of superior quality and diversity.
We present three generic diffusion modeling frameworks and explore their correlations with other deep generative models.
We summarize extended diffusion models applied in other tasks, including medical, remote sensing, and video scenarios.
arXiv Detail & Related papers (2024-06-17T01:49:27Z)
- Bridging Generative and Discriminative Models for Unified Visual Perception with Diffusion Priors [56.82596340418697]
We propose a simple yet effective framework comprising a pre-trained Stable Diffusion (SD) model containing rich generative priors, a unified head (U-head) capable of integrating hierarchical representations, and an adapted expert providing discriminative priors.
Comprehensive investigations unveil characteristics of the resulting framework, Vermouth, such as the varying granularity of perception concealed in latent variables at distinct time steps and U-Net stages.
The promising results demonstrate the potential of diffusion models as formidable learners, establishing their significance in furnishing informative and robust visual representations.
arXiv Detail & Related papers (2024-01-29T10:36:57Z)
- DiffHPE: Robust, Coherent 3D Human Pose Lifting with Diffusion [54.0238087499699]
We show that diffusion models enhance the accuracy, robustness, and coherence of human pose estimations.
We introduce DiffHPE, a novel strategy for harnessing diffusion models in 3D-HPE.
Our findings indicate that while standalone diffusion models provide commendable performance, their accuracy is even better in combination with supervised models.
arXiv Detail & Related papers (2023-09-04T12:54:10Z)
- Classification of lung cancer subtypes on CT images with synthetic pathological priors [41.75054301525535]
Cross-scale associations exist in the image patterns between the same case's CT images and its pathological images.
We propose self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on CT images.
arXiv Detail & Related papers (2023-08-09T02:04:05Z)
- Multiscale Metamorphic VAE for 3D Brain MRI Synthesis [5.060516201839319]
Generative modeling of 3D brain MRIs presents difficulties in achieving high visual fidelity while ensuring sufficient coverage of the data distribution.
In this work, we propose to address this challenge with composable, multiscale morphological transformations in a variational autoencoder framework.
We show substantial performance improvements in FID while retaining comparable, or superior, reconstruction quality compared to prior work based on VAEs and generative adversarial networks (GANs).
arXiv Detail & Related papers (2023-01-09T09:15:30Z)
- Unsupervised ensemble-based phenotyping helps enhance the discoverability of genes related to heart morphology [57.25098075813054]
We propose a new framework for gene discovery entitled Unsupervised Phenotype Ensembles.
It builds a redundant yet highly expressive representation by pooling a set of phenotypes learned in an unsupervised manner.
These phenotypes are then analyzed via genome-wide association studies (GWAS), retaining only highly confident and stable associations.
arXiv Detail & Related papers (2023-01-07T18:36:44Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
- Diffusion Models: A Comprehensive Survey of Methods and Applications [10.557289965753437]
Diffusion models are a class of deep generative models that have shown impressive results on various tasks with a solid theoretical foundation.
Recent studies have shown great enthusiasm for improving the performance of diffusion models.
arXiv Detail & Related papers (2022-09-02T02:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.