Geometry-Aware Attenuation Field Learning for Sparse-View CBCT
Reconstruction
- URL: http://arxiv.org/abs/2303.14739v1
- Date: Sun, 26 Mar 2023 14:38:42 GMT
- Title: Geometry-Aware Attenuation Field Learning for Sparse-View CBCT
Reconstruction
- Authors: Zhentao Liu, Yu Fang, Changjian Li, Han Wu, Yuan Liu, Zhiming Cui,
Dinggang Shen
- Abstract summary: Cone Beam Computed Tomography (CBCT) is the most widely used imaging method in dentistry.
Sparse-view CBCT reconstruction has become a main focus for reducing radiation dose.
This paper proposes a novel attenuation field encoder-decoder framework that first encodes a volumetric feature from multi-view X-ray projections.
- Score: 61.48254686722434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cone Beam Computed Tomography (CBCT) is the most widely used imaging method
in dentistry. As hundreds of X-ray projections are needed to reconstruct a
high-quality CBCT image (i.e., the attenuation field) in traditional
algorithms, sparse-view CBCT reconstruction has become a main focus to reduce
radiation dose. Several attempts have been made to solve this problem, but
they still suffer from insufficient data or poor generalization to novel
patients. This paper proposes a novel attenuation field encoder-decoder
framework by first encoding the volumetric feature from multi-view X-ray
projections, then decoding it into the desired attenuation field. The key
insight is that, when building the volumetric feature, we follow the multi-view
nature of CBCT reconstruction and enforce the view consistency property through
geometry-aware spatial feature querying and adaptive feature fusing. Moreover,
prior knowledge learned from the data population ensures generalization when
dealing with sparse-view input. Comprehensive evaluations have demonstrated the
superiority of our method in reconstruction quality, and a downstream
application further validates its feasibility in real-world clinics.
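The geometry-aware spatial feature querying described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `project_points`, `sample_bilinear`, and `query_volumetric_feature` are hypothetical names, and simple mean pooling stands in for the paper's learned adaptive feature fusing.

```python
import numpy as np

def project_points(points, P):
    """Project 3D points (N, 3) into a view via a 3x4 projection matrix."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    uvw = homog @ P.T                      # (N, 3) homogeneous pixel coords
    return uvw[:, :2] / uvw[:, 2:3]        # (N, 2) pixel coordinates

def sample_bilinear(feat, uv):
    """Bilinearly sample an (H, W, C) feature map at pixel coords uv (N, 2)."""
    H, W, _ = feat.shape
    u = np.clip(uv[:, 0], 0, W - 1.001)
    v = np.clip(uv[:, 1], 0, H - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    return (feat[v0, u0] * (1 - du) * (1 - dv)
            + feat[v0, u0 + 1] * du * (1 - dv)
            + feat[v0 + 1, u0] * (1 - du) * dv
            + feat[v0 + 1, u0 + 1] * du * dv)

def query_volumetric_feature(points, view_feats, view_projs):
    """Query each 3D point in every view's 2D feature map, then fuse.
    Mean fusion is a stand-in for the paper's adaptive feature fusing."""
    per_view = np.stack([sample_bilinear(f, project_points(points, P))
                         for f, P in zip(view_feats, view_projs)])  # (V, N, C)
    return per_view.mean(axis=0)                                    # (N, C)
```

The point of the geometry-aware step is that the same 3D location is sampled consistently across all views using the known scanner geometry, which is what enforces view consistency before fusion.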
Related papers
- CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
arXiv Detail & Related papers (2024-06-21T08:38:30Z)
- C^2RV: Cross-Regional and Cross-View Learning for Sparse-View CBCT Reconstruction [17.54830070112685]
Cone beam computed tomography (CBCT) is an important imaging technology widely used in medical scenarios.
Compared with conventional CT, CBCT reconstruction is more challenging due to the increased dimensionality of the measurement process, which uses cone-shaped X-ray beams.
We propose C2RV by leveraging explicit multi-scale volumetric representations to enable cross-regional learning in the 3D space.
arXiv Detail & Related papers (2024-06-06T09:37:56Z)
- Fast and accurate sparse-view CBCT reconstruction using meta-learned neural attenuation field and hash-encoding regularization [13.01191568245715]
Cone beam computed tomography (CBCT) is an emerging medical imaging technique to visualize the internal anatomical structures of patients.
Reducing the number of projections in a CBCT scan while preserving the quality of the reconstructed image is challenging.
We propose a fast and accurate sparse-view CBCT reconstruction (FACT) method to provide better reconstruction quality and faster optimization speed.
arXiv Detail & Related papers (2023-12-04T07:23:44Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- SNAF: Sparse-view CBCT Reconstruction with Neural Attenuation Fields [71.84366290195487]
We propose SNAF for sparse-view CBCT reconstruction by learning the neural attenuation fields.
Our approach achieves superior performance in terms of high reconstruction quality (30+ PSNR) with only 20 input views.
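For reference, the PSNR figure quoted above is derived from the mean squared error between the reference volume and the reconstruction. A minimal sketch (`psnr` is an illustrative helper, not code from the paper):

```python
import numpy as np

def psnr(reference, reconstruction, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means the reconstruction
    is closer to the reference volume."""
    mse = np.mean((np.asarray(reference) - np.asarray(reconstruction)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

On a unit-range volume, an MSE of 1e-3 corresponds to 30 dB, the regime reported here for 20 input views.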
arXiv Detail & Related papers (2022-11-30T14:51:14Z)
- Computed Tomography Reconstruction using Generative Energy-Based Priors [13.634603375405744]
We learn a parametric regularizer with a global receptive field by maximizing its likelihood on reference CT data.
We apply the regularizer to limited-angle and few-view CT reconstruction problems, where it outperforms traditional reconstruction algorithms by a large margin.
arXiv Detail & Related papers (2022-03-23T18:26:23Z)
- DuDoTrans: Dual-Domain Transformer Provides More Attention for Sinogram Restoration in Sparse-View CT Reconstruction [13.358197688568463]
Ionizing radiation in the imaging process induces irreversible injury.
Iterative models have been proposed to alleviate the artifacts that appear in sparse-view CT images, but their computational cost is too high.
We propose the Dual-Domain Transformer (DuDoTrans) to reconstruct CT images with both the enhanced and raw sinograms.
arXiv Detail & Related papers (2021-11-21T10:41:07Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.