Manifold-aware Synthesis of High-resolution Diffusion from Structural
Imaging
- URL: http://arxiv.org/abs/2108.04135v2
- Date: Wed, 11 Aug 2021 21:30:21 GMT
- Title: Manifold-aware Synthesis of High-resolution Diffusion from Structural
Imaging
- Authors: Benoit Anctil-Robitaille and Antoine Th\'eberge and Pierre-Marc Jodoin
and Maxime Descoteaux and Christian Desrosiers and Herv\'e Lombaert
- Abstract summary: We propose a network architecture for the direct generation of diffusion tensors (DT) and diffusion orientation distribution functions (dODFs) from high-resolution T1w images.
Our approach improves the fractional anisotropy mean squared error (FA MSE) between the synthesized diffusion and the ground-truth by more than 23%.
While our method is able to generate high-resolution diffusion images from structural inputs in less than 15 seconds, we acknowledge and discuss the limits of diffusion inference solely relying on T1w images.
- Score: 12.96280888284293
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The physical and clinical constraints surrounding diffusion-weighted imaging
(DWI) often limit the spatial resolution of the produced images to voxels up to
8 times larger than those of T1w images. Thus, the detailed information
contained in T1w images could help in the synthesis of diffusion images at
higher resolution. However, the non-Euclidean nature of diffusion imaging
hinders current deep generative models from synthesizing physically plausible
images. In this work, we propose the first Riemannian network architecture for
the direct generation of diffusion tensors (DT) and diffusion orientation
distribution functions (dODFs) from high-resolution T1w images. Our integration
of the Log-Euclidean Metric into a learning objective guarantees, unlike
standard Euclidean networks, the mathematically valid synthesis of diffusion.
Furthermore, our approach improves the fractional anisotropy mean squared error
(FA MSE) between the synthesized diffusion and the ground-truth by more than
23% and the cosine similarity between principal directions by almost 5% when
compared to our baselines. We validate our generated diffusion by comparing the
resulting tractograms to those obtained from real data. We observe similar fiber
bundles with streamlines having less than 3% difference in length, less than 1%
difference in volume, and a visually close shape. While our method is able to
generate high-resolution diffusion images from structural inputs in less than
15 seconds, we acknowledge and discuss the limits of diffusion inference solely
relying on T1w images. Our results nonetheless suggest a relationship between
the high-level geometry of the brain and the overall white matter architecture.
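To make the two quantities the abstract relies on more concrete, here is a minimal NumPy sketch (not the authors' implementation; the function names and toy tensors are illustrative assumptions) of the Log-Euclidean distance used as a manifold-aware loss between diffusion tensors, and of the fractional anisotropy (FA) used for evaluation.

```python
# Minimal sketch, assuming 3x3 symmetric positive-definite (SPD) diffusion tensors.
# Not the paper's code; names, shapes, and toy values are illustrative only.
import numpy as np

def matrix_log_spd(D):
    """Matrix logarithm of an SPD tensor via eigendecomposition."""
    eigvals, eigvecs = np.linalg.eigh(D)
    eigvals = np.clip(eigvals, 1e-12, None)  # guard against numerical non-positivity
    return eigvecs @ np.diag(np.log(eigvals)) @ eigvecs.T

def log_euclidean_mse(D_pred, D_true):
    """Squared Log-Euclidean distance ||log(D_pred) - log(D_true)||_F^2.
    Measuring error in log-space respects the SPD manifold, unlike a plain
    Euclidean MSE on raw tensor coefficients."""
    diff = matrix_log_spd(D_pred) - matrix_log_spd(D_true)
    return float(np.sum(diff ** 2))

def fractional_anisotropy(D):
    """FA from the eigenvalues of a diffusion tensor (standard DTI formula)."""
    lam = np.clip(np.linalg.eigvalsh(D), 1e-12, None)
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return float(np.sqrt(1.5) * num / den)

# Toy usage: compare a hypothetical synthesized tensor against a ground-truth one.
D_true = np.diag([1.7e-3, 0.4e-3, 0.3e-3])   # prolate, white-matter-like tensor
D_pred = np.diag([1.5e-3, 0.5e-3, 0.35e-3])  # hypothetical network output
print("log-Euclidean MSE:", log_euclidean_mse(D_pred, D_true))
print("FA error:", abs(fractional_anisotropy(D_pred) - fractional_anisotropy(D_true)))
```

In practice the paper integrates the Log-Euclidean metric into the learning objective itself; the sketch above only illustrates the geometry of that distance and how an FA-based error would be computed voxel-wise for evaluation.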
Related papers
- Edge-preserving noise for diffusion models [4.435514696080208]
We present a novel edge-preserving diffusion model that is a generalization of denoising diffusion probabilistic models (DDPM).
In particular, we introduce an edge-aware noise scheduler that varies between edge-preserving and isotropic Gaussian noise.
We show that our model's generative process converges faster to results that more closely match the target distribution.
arXiv Detail & Related papers (2024-10-02T13:29:52Z) - Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced
Hierarchical Diffusion Model [60.27825196999742]
We propose a novel Basic-to-Advanced Hierarchical Diffusion Model, named B2A-HDM, to collaboratively exploit low-dimensional and high-dimensional diffusion models for detailed motion synthesis.
Specifically, the basic diffusion model in low-dimensional latent space provides the intermediate denoising result that is consistent with the textual description.
The advanced diffusion model in high-dimensional latent space focuses on the following detail-enhancing denoising process.
arXiv Detail & Related papers (2023-12-18T06:30:39Z) - SAR-to-Optical Image Translation via Thermodynamics-inspired Network [68.71771171637677]
A Thermodynamics-inspired Network for SAR-to-Optical Image Translation (S2O-TDN) is proposed in this paper.
S2O-TDN follows an explicit design principle derived from thermodynamic theory and enjoys the advantage of explainability.
Experiments on the public SEN1-2 dataset show the advantages of the proposed S2O-TDN over current methods, with finer textures and better quantitative results.
arXiv Detail & Related papers (2023-05-23T09:02:33Z) - High-resolution tomographic reconstruction of optical absorbance through
scattering media using neural fields [25.647287240640356]
We propose NeuDOT, a novel diffuse optical tomography (DOT) scheme based on neural fields (NF).
NeuDOT achieves submillimetre lateral resolution and resolves complex 3D objects at 14 mm-depth, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2023-04-04T10:13:13Z) - Diffusion Models Generate Images Like Painters: an Analytical Theory of Outline First, Details Later [1.8416014644193066]
We observe that the reverse diffusion process that underlies image generation has the following properties.
Individual trajectories tend to be low-dimensional and resemble 2D rotations.
We find that this solution accurately describes the initial phase of image generation for pretrained models.
arXiv Detail & Related papers (2023-03-04T20:08:57Z) - Diffusion Models are Minimax Optimal Distribution Estimators [49.47503258639454]
We provide the first rigorous analysis on approximation and generalization abilities of diffusion modeling.
We show that when the true density function belongs to the Besov space and the empirical score matching loss is properly minimized, the generated data distribution achieves the nearly minimax optimal estimation rates.
arXiv Detail & Related papers (2023-03-03T11:31:55Z) - Dimensionality-Varying Diffusion Process [52.52681373641533]
Diffusion models learn to reverse a signal destruction process to generate new data.
We make a theoretical generalization of the forward diffusion process via signal decomposition.
We show that our strategy facilitates high-resolution image synthesis and improves the FID of a diffusion model trained on FFHQ at $1024\times1024$ resolution from 52.40 to 10.46.
arXiv Detail & Related papers (2022-11-29T09:05:55Z) - SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture the internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z) - Unifying Diffusion Models' Latent Space, with Applications to
CycleDiffusion and Guidance [95.12230117950232]
We show that a common latent space emerges from two diffusion models trained independently on related domains.
Applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors.
arXiv Detail & Related papers (2022-10-11T15:53:52Z) - Manifold-Aware CycleGAN for High-Resolution Structural-to-DTI Synthesis [8.829738147738222]
We propose a manifold-aware CycleGAN that learns the generation of high-resolution DTI from unpaired T1w images.
Our method is able to generate realistic high-resolution DTI that can be used to compute diffusion-based metrics and potentially run fiber tractography algorithms.
arXiv Detail & Related papers (2020-04-01T00:08:14Z)