Reconstructing Quantitative Cerebral Perfusion Images Directly From Measured Sinogram Data Acquired Using C-arm Cone-Beam CT
- URL: http://arxiv.org/abs/2412.05084v2
- Date: Wed, 25 Dec 2024 02:12:02 GMT
- Title: Reconstructing Quantitative Cerebral Perfusion Images Directly From Measured Sinogram Data Acquired Using C-arm Cone-Beam CT
- Authors: Haotian Zhao, Ruifeng Chen, Jing Yan, Juan Feng, Jun Xiang, Yang Chen, Dong Liang, Yinsheng Li
- Abstract summary: Current quantitative perfusion imaging comprises two cascaded steps: time-resolved image reconstruction and perfusion parametric estimation.
Each step poses a technical challenge, and together these challenges prevent obtaining quantitatively accurate perfusion images using C-arm CBCT.
In the developed direct cerebral perfusion parametric image reconstruction technique, TRAINER for short, the quantitative perfusion images are represented as a subject-specific conditional generative model trained under the constraint of the time-resolved CT forward model.
- Score: 11.97193686553776
- Abstract: To shorten the door-to-puncture time for better treating patients with acute ischemic stroke, it is highly desired to obtain quantitative cerebral perfusion images using the C-arm cone-beam computed tomography (CBCT) installed in the interventional suite. However, limited by the slow gantry rotation speed, the temporal resolution and temporal sampling density of typical C-arm CBCT are much poorer than those of multi-detector-row CT in the diagnostic imaging suite. Current quantitative perfusion imaging comprises two cascaded steps: time-resolved image reconstruction and perfusion parametric estimation. For time-resolved image reconstruction, the technical challenge imposed by poor temporal resolution and poor sampling density causes inaccurate quantification of the temporal variation of cerebral artery and tissue attenuation values. For perfusion parametric estimation, it remains a technical challenge to appropriately design the handcrafted regularization for better solving the associated deconvolution problem. These two challenges together prevent obtaining quantitatively accurate perfusion images using C-arm CBCT. The purpose of this work is to simultaneously address both challenges by combining the two cascaded steps into a single joint optimization problem and reconstructing quantitative perfusion images directly from the measured sinogram data. In the developed direct cerebral perfusion parametric image reconstruction technique, TRAINER for short, the quantitative perfusion images are represented as a subject-specific conditional generative model trained under the constraints of the time-resolved CT forward model, the perfusion convolutional model, and the subject's own measured sinogram data. Results shown in this paper demonstrate that, using TRAINER, quantitative cerebral perfusion images can be accurately obtained with C-arm CBCT in the interventional suite.
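The perfusion convolutional model referenced in the abstract relates each voxel's tissue time-attenuation curve to the arterial input function (AIF) through a flow-scaled residue function. A minimal sketch of this forward model, assuming a simplified boxcar AIF and an exponential residue function (both hypothetical illustration choices, not the paper's actual implementation):

```python
import numpy as np

def tissue_tac(aif, cbf, residue, dt):
    """Tissue time-attenuation curve from the perfusion convolutional model:
    C_t(t) = CBF * (AIF conv R)(t), discretized with time step dt."""
    return cbf * np.convolve(aif, residue)[: len(aif)] * dt

dt = 1.0                                       # seconds per sample
t = np.arange(0, 60, dt)                       # 60 s acquisition window
aif = np.where((t >= 5) & (t < 15), 1.0, 0.0)  # simplified boxcar arterial input
mtt = 4.0                                      # mean transit time in s (assumed)
residue = np.exp(-t / mtt)                     # exponential residue model
cbf = 0.01                                     # flow scale, arbitrary units
ct = tissue_tac(aif, cbf, residue, dt)         # simulated tissue curve
```

By the central volume theorem, cerebral blood volume then follows as CBV = CBF * MTT; parametric maps are typically derived this way once deconvolution (or, in TRAINER, the joint optimization) recovers the flow-scaled residue function.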
Related papers
- Improving Cone-Beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings [6.157230849293829]
Daily cone-beam CT (CBCT) imaging, pivotal for therapy adjustment, falls short in tissue density accuracy.
We make maximal use of CBCT data acquired during therapy, complemented by sparse paired fan-beam CTs.
Our approach shows promise in generating high-quality synthetic CT images from CBCT scans in radiotherapy (RT).
arXiv Detail & Related papers (2024-09-19T07:56:06Z) - Sequential-Scanning Dual-Energy CT Imaging Using High Temporal Resolution Image Reconstruction and Error-Compensated Material Basis Image Generation [6.361772490498643]
We developed sequential-scanning imaging using high temporal resolution image reconstruction and error-compensated material basis image generation.
Results demonstrated the improvement of quantification accuracy and image quality using ACCELERATION.
arXiv Detail & Related papers (2024-08-27T03:09:39Z) - Prior Frequency Guided Diffusion Model for Limited Angle (LA)-CBCT Reconstruction [2.960150120524893]
Cone-beam computed tomography (CBCT) is widely used in image-guided radiotherapy.
LA-CBCT reconstruction suffers from severe under-sampling artifacts, making it a highly ill-posed inverse problem.
We developed a diffusion model-based framework, prior frequency-guided diffusion model (PFGDM) for robust and structure-preserving LA-CBCT reconstruction.
arXiv Detail & Related papers (2024-04-01T19:41:33Z) - Step-Calibrated Diffusion for Biomedical Optical Image Restoration [47.191704042917394]
Restorative Step-Calibrated Diffusion (RSCD) is an unpaired diffusion-based image restoration method.
RSCD outperforms other widely used unpaired image restoration methods on both image quality and perceptual evaluation.
RSCD improves performance on downstream clinical imaging tasks, including automated brain tumor diagnosis and deep tissue imaging.
arXiv Detail & Related papers (2024-03-20T15:38:53Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography
Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN)
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - Parallel Diffusion Model-based Sparse-view Cone-beam Breast CT [7.712142153700843]
We transform the cutting-edge Denoising Diffusion Probabilistic Model (DDPM) into a parallel framework for sub-volume-based sparse-view breast CT image reconstruction.
Our experimental findings reveal that this method delivers competitive reconstruction performance at half to one-third of the standard radiation doses.
arXiv Detail & Related papers (2023-03-22T18:55:43Z) - Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper proposes a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - Efficient Learning and Decoding of the Continuous-Time Hidden Markov Model for Disease Progression Modeling [119.50438407358862]
We present the first complete characterization of efficient EM-based learning methods for CT-HMM models.
We show that EM-based learning consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics.
We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer's disease dataset.
arXiv Detail & Related papers (2021-10-26T20:06:05Z) - CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z) - A Learning-based Method for Online Adjustment of C-arm Cone-Beam CT Source Trajectories for Artifact Avoidance [47.345403652324514]
The reconstruction quality attainable with commercial CBCT devices is insufficient due to metal artifacts in the presence of pedicle screws.
We propose to adjust the C-arm CBCT source trajectory during the scan to optimize reconstruction quality with respect to a certain task.
We demonstrate that convolutional neural networks trained on realistically simulated data are capable of predicting quality metrics that enable scene-specific adjustments of the CBCT source trajectory.
arXiv Detail & Related papers (2020-08-14T09:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.