Cortical Surface Diffusion Generative Models
- URL: http://arxiv.org/abs/2402.04753v1
- Date: Wed, 7 Feb 2024 11:12:09 GMT
- Title: Cortical Surface Diffusion Generative Models
- Authors: Zhenshan Xie, Simon Dahan, Logan Z. J. Williams, M. Jorge Cardoso,
Emma C. Robinson
- Abstract summary: Cortical surface analysis has gained increased prominence, given its potential implications for neurological and developmental disorders.
Traditional vision diffusion models, while effective in generating natural images, are limited in capturing intricate developmental patterns in neuroimaging due to small datasets.
In this work, we propose a novel diffusion model for the generation of cortical surface metrics, using modified surface vision transformers as the principal architecture.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cortical surface analysis has gained increased prominence, given its
potential implications for neurological and developmental disorders.
Traditional vision diffusion models, while effective in generating natural
images, are limited in capturing intricate developmental patterns in
neuroimaging due to small datasets. This is particularly true for generating
cortical surfaces, where individual variability in cortical morphology is high,
creating an urgent need for better methods to model brain development and the
diverse variability inherent across different individuals. In this work, we
propose a novel diffusion model for the generation of cortical surface
metrics, using modified surface vision transformers as the principal
architecture. We validate our method on the developing Human Connectome Project
(dHCP) dataset; the results suggest that our model demonstrates superior
performance in capturing the intricate details of evolving cortical surfaces.
Furthermore, our model can generate high-quality, realistic samples of cortical
surfaces conditioned on postmenstrual age (PMA) at scan.
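The abstract does not detail the training procedure, but the core idea it describes, a denoising diffusion model conditioned on a scalar covariate such as PMA, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: a standard linear beta schedule, a placeholder `toy_denoiser` standing in for the paper's modified surface vision transformer, and a hypothetical 642-vertex icosphere metric map; none of these specifics come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule over T diffusion steps (standard DDPM choice;
# the paper's actual schedule is not specified in the abstract).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cum = np.cumprod(1.0 - betas)

def q_sample(x0, t, noise):
    """Forward process: sample x_t ~ q(x_t | x_0) in closed form."""
    a = alphas_cum[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise

def toy_denoiser(x_t, t, pma):
    """Placeholder for the conditional noise predictor eps_theta(x_t, t, pma).
    In the paper this role is played by a modified surface vision transformer
    operating on cortical meshes; here it just returns zeros of the right shape."""
    return np.zeros_like(x_t)

# One training step on a fake cortical metric map (values on mesh vertices),
# conditioned on postmenstrual age (PMA) in weeks.
x0 = rng.standard_normal(642)   # hypothetical metric values on an icosphere
pma = 40.0                      # the scalar condition: scan age in weeks
t = int(rng.integers(0, T))
noise = rng.standard_normal(x0.shape)
x_t = q_sample(x0, t, noise)
loss = np.mean((noise - toy_denoiser(x_t, t, pma)) ** 2)  # epsilon-MSE objective
print(round(float(loss), 3))
```

At sampling time, the same conditional denoiser would be applied iteratively from pure noise with a fixed PMA value, which is what allows the model to generate surfaces for a chosen scan age.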
Related papers
- Geometric Trajectory Diffusion Models
Generative models have shown great promise in generating 3D geometric systems.
Existing approaches only operate on static structures, neglecting the fact that physical systems are always dynamic in nature.
We propose geometric trajectory diffusion models (GeoTDM), the first diffusion model for modeling the temporal distribution of 3D geometric trajectories.
arXiv Detail & Related papers (2024-10-16T20:36:41Z)
- Physics-Informed Latent Diffusion for Multimodal Brain MRI Synthesis
We present a physics-informed generative model capable of synthesizing a variable number of brain MRI modalities.
Our approach utilizes latent diffusion models and a two-step generative process.
Experiments demonstrate the efficacy of this approach in generating unseen MR contrasts and preserving physical plausibility.
arXiv Detail & Related papers (2024-09-20T14:21:34Z)
- Deformation-Recovery Diffusion Model (DRDM): Instance Deformation for Image Manipulation and Synthesis
Deformation-Recovery Diffusion Model (DRDM) is a diffusion-based generative model based on deformation diffusion and recovery.
DRDM is trained to learn to recover unreasonable deformation components, thereby restoring each randomly deformed image to a realistic distribution.
Experimental results in cardiac MRI and pulmonary CT show DRDM is capable of creating diverse, large (over 10% image size deformation scale) deformations.
arXiv Detail & Related papers (2024-07-10T01:26:48Z)
- Inpainting Pathology in Lumbar Spine MRI with Latent Diffusion
We propose an efficient method for inpainting pathological features onto healthy anatomy in MRI.
We evaluate the method's ability to insert disc herniation and central canal stenosis in lumbar spine sagittal T2 MRI.
arXiv Detail & Related papers (2024-06-04T16:47:47Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE)
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Synthetic location trajectory generation using categorical diffusion models
Diffusion probabilistic models (DPMs) have rapidly evolved to become one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z)
- Spatio-Temporal Encoding of Brain Dynamics with Surface Masked Autoencoders
The paper introduces the surface Masked AutoEncoder (sMAE) and its spatio-temporal extension (vsMAE).
These models are trained to reconstruct cortical feature maps from masked versions of the input, learning strong latent representations of cortical development and structure-function relationships.
Results show that (v)sMAE pre-trained models improve phenotype prediction performance on multiple tasks by >= 26%, and offer faster convergence relative to models trained from scratch.
arXiv Detail & Related papers (2023-08-10T10:01:56Z)
- A Survey on Generative Diffusion Model
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
- Latent Diffusion Energy-Based Model for Interpretable Text Modeling
We introduce a novel symbiosis between the diffusion models and latent space EBMs in a variational learning framework.
We develop a geometric clustering-based regularization jointly with the information bottleneck to further improve the quality of the learned latent space.
arXiv Detail & Related papers (2022-06-13T03:41:31Z)
- Sparse Flows: Pruning Continuous-depth Models
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% less parameters compared to the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Neural Pharmacodynamic State Space Modeling
We propose a deep generative model that makes use of a novel attention-based neural architecture inspired by the physics of how treatments affect disease state.
Our proposed model yields significant improvements in generalization and, on real-world clinical data, provides interpretable insights into the dynamics of cancer progression.
arXiv Detail & Related papers (2021-02-22T17:51:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.