TiAVox: Time-aware Attenuation Voxels for Sparse-view 4D DSA
Reconstruction
- URL: http://arxiv.org/abs/2309.02318v2
- Date: Tue, 19 Dec 2023 08:20:43 GMT
- Title: TiAVox: Time-aware Attenuation Voxels for Sparse-view 4D DSA
Reconstruction
- Authors: Zhenghong Zhou, Huangxuan Zhao, Jiemin Fang, Dongqiao Xiang, Lei Chen,
Lingxia Wu, Feihong Wu, Wenyu Liu, Chuansheng Zheng and Xinggang Wang
- Abstract summary: We propose a Time-aware Attenuation Voxel (TiAVox) approach for sparse-view 4D DSA reconstruction.
TiAVox introduces 4D attenuation voxel grids, which reflect attenuation properties from both spatial and temporal dimensions.
We validated the TiAVox approach on both clinical and simulated datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Four-dimensional Digital Subtraction Angiography (4D DSA) plays a critical
role in the diagnosis of many medical diseases, such as Arteriovenous
Malformations (AVM) and Arteriovenous Fistulas (AVF). Despite its significant
application value, the reconstruction of 4D DSA demands numerous views to
effectively model the intricate vessels and radiocontrast flow, thereby
implying a significant radiation dose. To address this high radiation issue, we
propose a Time-aware Attenuation Voxel (TiAVox) approach for sparse-view 4D DSA
reconstruction, which paves the way for high-quality 4D imaging. Additionally,
2D and 3D DSA imaging results can be generated from the reconstructed 4D DSA
images. TiAVox introduces 4D attenuation voxel grids, which reflect attenuation
properties from both spatial and temporal dimensions. It is optimized by
minimizing discrepancies between the rendered images and sparse 2D DSA images.
Without any neural network involved, TiAVox enjoys specific physical
interpretability. The parameters of each learnable voxel represent the
attenuation coefficients. We validated the TiAVox approach on both clinical and
simulated datasets, achieving a Peak Signal-to-Noise Ratio (PSNR) of 31.23 for
novel view synthesis using only 30 views on the clinically sourced dataset,
whereas traditional Feldkamp-Davis-Kress methods required 133 views. Similarly,
with merely 10 views from the synthetic dataset, TiAVox yielded a PSNR of 34.32
for novel view synthesis and 41.40 for 3D reconstruction. We also conducted
ablation studies to corroborate the essential components of TiAVox. The code
will be publicly available.
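The core idea in the abstract — fit a learnable 4D attenuation voxel grid by minimizing the discrepancy between rendered projections and sparse 2D DSA images, with no neural network involved — can be illustrated with a toy optimization loop. This is a hypothetical sketch, not the authors' implementation: it uses a simple parallel-beam projection (summing along one axis) as a stand-in for the paper's cone-beam DSA geometry, and a hand-derived gradient step in place of a full differentiable renderer.

```python
import numpy as np

# Illustrative TiAVox-style fit (assumptions: parallel-beam geometry,
# synthetic data): a 4D grid V[t, x, y, z] of attenuation coefficients
# is optimized so its rendered projections match per-frame 2D targets.
rng = np.random.default_rng(0)
T, X, Y, Z = 4, 8, 8, 8           # time frames and spatial resolution

# Hypothetical ground-truth attenuation volume and its 2D projections.
gt = rng.random((T, X, Y, Z))
targets = gt.sum(axis=-1)         # one projection image per time frame

V = np.zeros((T, X, Y, Z))        # learnable attenuation voxels
lr = 0.05
for step in range(200):
    rendered = V.sum(axis=-1)                  # line integral along z
    residual = rendered - targets              # per-pixel error
    # Gradient of the squared error is shared by every voxel on a ray,
    # so the residual is broadcast back along the projection axis.
    V -= lr * 2.0 * residual[..., None] / Z
    V = np.clip(V, 0.0, None)                  # attenuation is nonnegative

final_loss = float(((V.sum(axis=-1) - targets) ** 2).mean())
print(f"mean squared projection error: {final_loss:.2e}")
```

Because every voxel parameter directly represents an attenuation coefficient, the optimized grid is physically interpretable in the way the abstract describes; a real system would replace the sum along `z` with a differentiable cone-beam projector and use many views at different angles.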
Related papers
- 4DRGS: 4D Radiative Gaussian Splatting for Efficient 3D Vessel Reconstruction from Sparse-View Dynamic DSA Images
Existing methods often produce suboptimal results or require excessive computation time.
We propose 4D radiative Gaussian splatting (4DRGS) to achieve high-quality reconstruction efficiently.
4DRGS achieves impressive results in 5 minutes of training, which is 32x faster than the state-of-the-art method.
arXiv Detail & Related papers (2024-12-17T13:51:56Z)
- TomoGRAF: A Robust and Generalizable Reconstruction Network for Single-View Computed Tomography
Traditional analytical/iterative CT reconstruction algorithms require hundreds of angular data samplings.
We develop a novel TomoGRAF framework incorporating the unique X-ray transportation physics to reconstruct high-quality 3D volumes.
arXiv Detail & Related papers (2024-11-12T20:07:59Z)
- FCDM: Sparse-view Sinogram Inpainting with Frequency Domain Convolution Enhanced Diffusion Models
We introduce a novel diffusion-based inpainting framework tailored for sinogram data.
FCDM significantly outperforms existing methods, achieving SSIM over 0.95 and PSNR above 30 dB, with improvements of up to 33% in SSIM and 29% in PSNR compared to baselines.
arXiv Detail & Related papers (2024-08-26T12:31:38Z)
- EAR: Edge-Aware Reconstruction of 3-D vertebrae structures from bi-planar X-ray images
We propose a new Edge-Aware Reconstruction network (EAR) to focus on the performance improvement of the edge information and vertebrae shapes.
By using the auto-encoder architecture as the backbone, the edge attention module and frequency enhancement module are proposed to strengthen the perception of the edge reconstruction.
The proposed method is evaluated using three publicly accessible datasets and compared with four state-of-the-art models.
arXiv Detail & Related papers (2024-07-30T16:19:14Z)
- 3D Vessel Reconstruction from Sparse-View Dynamic DSA Images via Vessel Probability Guided Attenuation Learning
Current commercial Digital Subtraction Angiography (DSA) systems typically demand hundreds of scanning views to perform reconstruction.
The dynamic blood flow and insufficient input of sparse-view DSA images present significant challenges to the 3D vessel reconstruction task.
We propose to use a time-agnostic vessel probability field to solve this problem effectively.
arXiv Detail & Related papers (2024-05-17T11:23:33Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- REGAS: REspiratory-GAted Synthesis of Views for Multi-Phase CBCT Reconstruction from a single 3D CBCT Acquisition
REGAS proposes a self-supervised method to synthesize the undersampled tomographic views and mitigate aliasing artifacts in reconstructed images.
To address the large memory cost of deep neural networks on high resolution 4D data, REGAS introduces a novel Ray Path Transformation (RPT) that allows for distributed, differentiable forward projections.
arXiv Detail & Related papers (2022-08-17T03:42:19Z)
- 4D Spatio-Temporal Convolutional Networks for Object Position Estimation in OCT Volumes
3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single OCT images.
We extend 3D CNNs to 4D-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking.
arXiv Detail & Related papers (2020-07-02T12:02:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.