C^2RV: Cross-Regional and Cross-View Learning for Sparse-View CBCT Reconstruction
- URL: http://arxiv.org/abs/2406.03902v1
- Date: Thu, 6 Jun 2024 09:37:56 GMT
- Title: C^2RV: Cross-Regional and Cross-View Learning for Sparse-View CBCT Reconstruction
- Authors: Yiqun Lin, Jiewen Yang, Hualiang Wang, Xinpeng Ding, Wei Zhao, Xiaomeng Li
- Abstract summary: Cone beam computed tomography (CBCT) is an important imaging technology widely used in medical scenarios.
CBCT reconstruction is more challenging due to the increased dimensionality caused by the measurement process based on cone-shaped X-ray beams.
We propose C2RV by leveraging explicit multi-scale volumetric representations to enable cross-regional learning in the 3D space.
- Score: 17.54830070112685
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cone beam computed tomography (CBCT) is an important imaging technology widely used in medical scenarios, such as diagnosis and preoperative planning. Using fewer projection views to reconstruct CT, also known as sparse-view reconstruction, can reduce ionizing radiation and further benefit interventional radiology. Compared with sparse-view reconstruction for traditional parallel/fan-beam CT, CBCT reconstruction is more challenging due to the increased dimensionality caused by the measurement process based on cone-shaped X-ray beams. As a 2D-to-3D reconstruction problem, although implicit neural representations have been introduced to enable efficient training, only local features are considered and different views are processed equally in previous works, resulting in spatial inconsistency and poor performance on complicated anatomies. To this end, we propose C^2RV by leveraging explicit multi-scale volumetric representations to enable cross-regional learning in the 3D space. Additionally, the scale-view cross-attention module is introduced to adaptively aggregate multi-scale and multi-view features. Extensive experiments demonstrate that our C^2RV achieves consistent and significant improvement over previous state-of-the-art methods on datasets with diverse anatomy.
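The abstract describes a scale-view cross-attention module that adaptively aggregates multi-scale and multi-view features for each 3D point. As a rough illustration of what such an aggregation step can look like (a generic dot-product attention sketch, not code from the paper; the shapes, scale count, and view count are assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scale_view_cross_attention(query, feats):
    """query: (d,) feature of one 3D point; feats: (S*V, d) features of the
    same point sampled at S scales from V views. Returns an adaptively
    weighted (d,) aggregation instead of treating all views equally."""
    d = query.shape[-1]
    scores = feats @ query / np.sqrt(d)   # (S*V,) similarity scores
    weights = softmax(scores)             # attention over scales and views
    return weights @ feats                # weighted feature aggregation

rng = np.random.default_rng(0)
q = rng.standard_normal(64)               # per-point query feature
f = rng.standard_normal((3 * 10, 64))     # e.g. 3 scales x 10 views
out = scale_view_cross_attention(q, f)
print(out.shape)                          # (64,)
```

In a trained model the query, key, and value features would come from learned projections; this sketch only shows the aggregation mechanism the abstract refers to.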
Related papers
- Reconstruct Spine CT from Biplanar X-Rays via Diffusion Learning [26.866131691476255]
Intraoperative CT imaging serves as a crucial resource for surgical guidance; however, it may not always be readily accessible or practical to implement.
In this paper, we introduce an innovative method for 3D CT reconstruction utilizing biplanar X-rays.
arXiv Detail & Related papers (2024-08-19T06:34:01Z)
- Learning 3D Gaussians for Extremely Sparse-View Cone-Beam CT Reconstruction [9.848266253196307]
Cone-Beam Computed Tomography (CBCT) is an indispensable technique in medical imaging, yet the associated radiation exposure raises concerns in clinical practice.
We propose a novel reconstruction framework, namely DIF-Gaussian, which leverages 3D Gaussians to represent the feature distribution in the 3D space.
We evaluate DIF-Gaussian on two public datasets, showing significantly superior reconstruction performance than previous state-of-the-art methods.
arXiv Detail & Related papers (2024-07-01T08:48:04Z)
- CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
arXiv Detail & Related papers (2024-06-21T08:38:30Z)
- Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view approach to vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method can learn the multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z)
- XTransCT: Ultra-Fast Volumetric CT Reconstruction using Two Orthogonal X-Ray Projections for Image-guided Radiation Therapy via a Transformer Network [8.966238080182263]
We introduce a novel Transformer architecture, termed XTransCT, to facilitate real-time reconstruction of CT images from two-dimensional X-ray images.
Our findings indicate that our algorithm surpasses other methods in image quality, structural precision, and generalizability.
In comparison to previous 3D convolution-based approaches, we note a substantial speed increase of approximately 300%, achieving 44 ms per 3D image reconstruction.
arXiv Detail & Related papers (2023-05-31T07:41:10Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- Perspective Projection-Based 3D CT Reconstruction from Biplanar X-rays [32.98966469644061]
We propose PerX2CT, a novel framework for CT reconstruction from X-rays.
Our proposed method provides a different combination of features for each coordinate which implicitly allows the model to obtain information about the 3D location.
arXiv Detail & Related papers (2023-03-09T14:45:25Z)
- SNAF: Sparse-view CBCT Reconstruction with Neural Attenuation Fields [71.84366290195487]
We propose SNAF for sparse-view CBCT reconstruction by learning the neural attenuation fields.
Our approach achieves superior performance in terms of high reconstruction quality (30+ PSNR) with only 20 input views.
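The "30+ PSNR" figure above is peak signal-to-noise ratio, the standard image-quality metric in these papers. Under its usual definition it follows directly from the mean squared error between a reference volume and its reconstruction (a generic sketch, not code from any of the listed papers):

```python
import numpy as np

def psnr(reference, reconstruction, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference volume and a
    reconstruction, both scaled to the same intensity range."""
    mse = np.mean((reference - reconstruction) ** 2)
    if mse == 0:
        return float("inf")               # identical volumes
    return 10.0 * np.log10(data_range ** 2 / mse)

gt = np.zeros((4, 4, 4))
noisy = gt + 0.01                          # uniform error of 0.01
print(round(psnr(gt, noisy), 1))           # → 40.0
```

Higher is better: on a unit intensity range, 30 dB corresponds to a root-mean-square error of about 3% of the full range.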
arXiv Detail & Related papers (2022-11-30T14:51:14Z)
- REGAS: REspiratory-GAted Synthesis of Views for Multi-Phase CBCT Reconstruction from a Single 3D CBCT Acquisition [75.64791080418162]
REGAS proposes a self-supervised method to synthesize the undersampled tomographic views and mitigate aliasing artifacts in reconstructed images.
To address the large memory cost of deep neural networks on high resolution 4D data, REGAS introduces a novel Ray Path Transformation (RPT) that allows for distributed, differentiable forward projections.
arXiv Detail & Related papers (2022-08-17T03:42:19Z)
- A Unified 3D Framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
The current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray [14.10611608681131]
Excessive ionising radiation can lead to deterministic and harmful effects on the body.
This paper proposes a Deep Learning model that learns to reconstruct CT projections from a few or even a single-view X-ray.
arXiv Detail & Related papers (2022-02-02T13:25:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.