3D Vessel Reconstruction from Sparse-View Dynamic DSA Images via Vessel Probability Guided Attenuation Learning
- URL: http://arxiv.org/abs/2405.10705v1
- Date: Fri, 17 May 2024 11:23:33 GMT
- Title: 3D Vessel Reconstruction from Sparse-View Dynamic DSA Images via Vessel Probability Guided Attenuation Learning
- Authors: Zhentao Liu, Huangxuan Zhao, Wenhui Qin, Zhenghong Zhou, Xinggang Wang, Wenping Wang, Xiaochun Lai, Chuansheng Zheng, Dinggang Shen, Zhiming Cui
- Abstract summary: Current commercial Digital Subtraction Angiography (DSA) systems typically demand hundreds of scanning views to perform reconstruction.
The dynamic blood flow and insufficient input of sparse-view DSA images present significant challenges to the 3D vessel reconstruction task.
We propose to use a time-agnostic vessel probability field to solve this problem effectively.
- Score: 79.60829508459753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digital Subtraction Angiography (DSA) is one of the gold standards for vascular disease diagnosis. With the help of a contrast agent, time-resolved 2D DSA images deliver comprehensive insights into blood flow information and can be utilized to reconstruct 3D vessel structures. Current commercial DSA systems typically demand hundreds of scanning views to perform reconstruction, resulting in substantial radiation exposure. However, sparse-view DSA reconstruction, aimed at reducing radiation dosage, is still underexplored in the research community. The dynamic blood flow and insufficient input of sparse-view DSA images present significant challenges to the 3D vessel reconstruction task. In this study, we propose to use a time-agnostic vessel probability field to solve this problem effectively. Our approach, termed vessel probability guided attenuation learning, represents the DSA imaging as a complementary weighted combination of static and dynamic attenuation fields, with the weights derived from the vessel probability field. Functioning as a dynamic mask, the vessel probability provides proper gradients for both static and dynamic fields, adaptive to different scene types. This mechanism facilitates a self-supervised decomposition between static backgrounds and dynamic contrast agent flow, and significantly improves the reconstruction quality. Our model is trained by minimizing the disparity between synthesized projections and real captured DSA images. We further employ two training strategies to improve our reconstruction quality: (1) coarse-to-fine progressive training to achieve better geometry and (2) a temporally perturbed rendering loss to enforce temporal consistency. Experimental results have demonstrated superior quality on both 3D vessel reconstruction and 2D view synthesis.
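The abstract's core mechanism, compositing a static and a dynamic attenuation field with a time-agnostic vessel probability as the weight and rendering a projection by a line integral, can be sketched as follows. All names (`composite_attenuation`, `p_vessel`, `render_projection`) are illustrative assumptions, not the authors' code, and the rendering is reduced to a plain Riemann sum over ray samples.

```python
import numpy as np

def composite_attenuation(p_vessel, mu_static, mu_dynamic):
    """Complementary weighted combination of the two fields.

    p_vessel   : time-agnostic vessel probability at ray samples, shape (N,)
    mu_static  : static attenuation samples, shape (N,)
    mu_dynamic : dynamic attenuation samples at one time step, shape (N,)

    Where p_vessel is near 1 (inside a vessel), the dynamic field dominates;
    where it is near 0 (background), the static field dominates.
    """
    return p_vessel * mu_dynamic + (1.0 - p_vessel) * mu_static

def render_projection(mu_samples, step):
    """Line-integral projection along one ray: sum of attenuation times the
    sample spacing. DSA-style training would compare this synthesized value
    against the captured log-attenuation projection."""
    return float(np.sum(mu_samples * step))
```

Training as described would then minimize a loss between `render_projection` outputs and the real DSA measurements, letting gradients flow into both fields through the probability weights.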
Related papers
- Physics-Informed Learning for Time-Resolved Angiographic Contrast Agent
Concentration Reconstruction [3.3359894496511053]
We present a neural network-based model that is trained on a dataset of image-based blood flow simulations.
The model predicts the spatially averaged contrast agent concentration for each centerline point of the vasculature over time.
Our approach demonstrates the potential of the integration of machine learning and blood flow simulations in time-resolved angiographic flow reconstruction.
arXiv Detail & Related papers (2024-03-04T12:37:52Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography
Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - Deep Cardiac MRI Reconstruction with ADMM [7.694990352622926]
We present a deep learning (DL)-based method for accelerated cine and multi-contrast reconstruction in the context of cardiac imaging.
Our method optimizes in both the image and k-space domains, allowing for high reconstruction fidelity.
arXiv Detail & Related papers (2023-10-10T13:46:11Z) - TiAVox: Time-aware Attenuation Voxels for Sparse-view 4D DSA
Reconstruction [34.1903749611458]
We propose a Time-aware Attenuation Voxel (TiAVox) approach for sparse-view 4D DSA reconstruction.
TiAVox introduces 4D attenuation voxel grids, which reflect attenuation properties from both spatial and temporal dimensions.
We validated the TiAVox approach on both clinical and simulated datasets.
arXiv Detail & Related papers (2023-09-05T15:34:37Z) - Two-and-a-half Order Score-based Model for Solving 3D Ill-posed Inverse
Problems [7.074380879971194]
We propose a novel two-and-a-half order score-based model (TOSM) for 3D volumetric reconstruction.
During the training phase, our TOSM learns data distributions in 2D space, which reduces the complexity of training.
In the reconstruction phase, the TOSM updates the data distribution in 3D space, utilizing complementary scores along three directions.
arXiv Detail & Related papers (2023-08-16T17:07:40Z) - RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects in a 2D self-occlusion map.
We show that our representation map not only allows us to enhance the image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z) - Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people from an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z) - Stabilizing Deep Tomographic Reconstruction [25.179542326326896]
We propose an Analytic Compressed Iterative Deep (ACID) framework to address this challenge.
ACID synergizes a deep reconstruction network trained on big data, kernel awareness from CS-inspired processing, and iterative refinement.
Our study demonstrates that the deep reconstruction using ACID is accurate and stable, and sheds light on the converging mechanism of the ACID iteration.
arXiv Detail & Related papers (2020-08-04T21:35:32Z) - Limited-angle tomographic reconstruction of dense layered objects by
dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.