EAR: Edge-Aware Reconstruction of 3-D vertebrae structures from bi-planar X-ray images
- URL: http://arxiv.org/abs/2407.20937v2
- Date: Mon, 5 Aug 2024 01:27:39 GMT
- Title: EAR: Edge-Aware Reconstruction of 3-D vertebrae structures from bi-planar X-ray images
- Authors: Lixing Tan, Shuang Song, Yaofeng He, Kangneng Zhou, Tong Lu, Ruoxiu Xiao
- Abstract summary: We propose a new Edge-Aware Reconstruction network (EAR) that focuses on improving the reconstruction of edge information and vertebrae shapes.
Using an auto-encoder architecture as the backbone, an edge attention module and a frequency enhancement module are proposed to strengthen the perception of edges during reconstruction.
The proposed method is evaluated using three publicly accessible datasets and compared with four state-of-the-art models.
- Score: 19.902946440205966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: X-ray images ease the diagnosis and treatment process due to their rapid imaging speed and high resolution. However, much spatial information is lost during the projection process of X-ray imaging. To provide accurate spinal morphological and structural information, reconstructing the 3-D structure of the spine from 2-D X-ray images is essential. Current reconstruction methods struggle to preserve the edge information and local shapes of the asymmetrical vertebrae structures. In this study, we propose a new Edge-Aware Reconstruction network (EAR) that focuses on improving the reconstruction of edge information and vertebrae shapes. Using an auto-encoder architecture as the backbone, an edge attention module and a frequency enhancement module are proposed to strengthen the perception of edges during reconstruction. We also combine four loss terms: reconstruction loss, edge loss, frequency loss and projection loss. The proposed method is evaluated on three publicly accessible datasets and compared with four state-of-the-art models. It outperforms the other methods, achieving 25.32% (MSE), 15.32% (MAE), 86.44% (Dice), 80.13% (SSIM), 23.7612 (PSNR) and 0.3014 (frequency distance). Due to the end-to-end and accurate reconstruction process, EAR can provide sufficient 3-D spatial information and precise preoperative surgical planning guidance.
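The abstract names the four loss terms but not their formulations or weights. The snippet below is a minimal PyTorch sketch of how such a combined objective could be assembled; the finite-difference edge term, Fourier-magnitude frequency term, axis-aligned mean-projection term, and unit weights are illustrative assumptions rather than the paper's actual definitions.

```python
import torch
import torch.nn.functional as F

def ear_style_loss(pred_vol, gt_vol, weights=(1.0, 1.0, 1.0, 1.0)):
    """Illustrative combination of the four loss terms named in the abstract.
    pred_vol, gt_vol: tensors of shape (N, 1, D, H, W). All formulations and
    weights here are assumptions, not the paper's definitions."""
    w_rec, w_edge, w_freq, w_proj = weights

    # Reconstruction loss: voxel-wise MSE between predicted and ground-truth volumes.
    loss_rec = F.mse_loss(pred_vol, gt_vol)

    # Edge loss: compare finite-difference gradients along each axis as a simple edge proxy.
    def grads(v):
        return (v[..., 1:, :, :] - v[..., :-1, :, :],
                v[..., :, 1:, :] - v[..., :, :-1, :],
                v[..., :, :, 1:] - v[..., :, :, :-1])
    loss_edge = sum(F.l1_loss(gp, gg) for gp, gg in zip(grads(pred_vol), grads(gt_vol)))

    # Frequency loss: L1 distance between 3-D Fourier magnitudes.
    loss_freq = F.l1_loss(torch.fft.fftn(pred_vol, dim=(-3, -2, -1)).abs(),
                          torch.fft.fftn(gt_vol, dim=(-3, -2, -1)).abs())

    # Projection loss: compare mean-intensity projections along two orthogonal axes,
    # mimicking the two X-ray views.
    loss_proj = (F.mse_loss(pred_vol.mean(dim=-1), gt_vol.mean(dim=-1)) +
                 F.mse_loss(pred_vol.mean(dim=-2), gt_vol.mean(dim=-2)))

    return w_rec * loss_rec + w_edge * loss_edge + w_freq * loss_freq + w_proj * loss_proj
```

In the actual network the projection term would presumably use the same bi-planar geometry as the input X-rays; the axis-aligned mean projections above are only a stand-in.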
Related papers
- 4DRGS: 4D Radiative Gaussian Splatting for Efficient 3D Vessel Reconstruction from Sparse-View Dynamic DSA Images [49.170407434313475]
Existing methods often produce suboptimal results or require excessive computation time.
We propose 4D radiative Gaussian splatting (4DRGS) to achieve high-quality reconstruction efficiently.
4DRGS achieves impressive results within 5 minutes of training, which is 32x faster than the state-of-the-art method.
arXiv Detail & Related papers (2024-12-17T13:51:56Z)
- TomoGRAF: A Robust and Generalizable Reconstruction Network for Single-View Computed Tomography [3.1209855614927275]
Traditional analytical/iterative CT reconstruction algorithms require hundreds of angular data samplings.
We develop a novel TomoGRAF framework incorporating the unique X-ray transportation physics to reconstruct high-quality 3D volumes.
arXiv Detail & Related papers (2024-11-12T20:07:59Z)
- SdCT-GAN: Reconstructing CT from Biplanar X-Rays with Self-driven Generative Adversarial Networks [6.624839896733912]
This paper presents a new self-driven generative adversarial network model (SdCT-GAN) for reconstruction of 3D CT images.
The model is encouraged to pay more attention to image details by introducing a novel auto-encoder structure in the discriminator.
The LPIPS evaluation metric is adopted, which quantifies the fine contours and textures of reconstructed images better than existing metrics.
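For readers unfamiliar with the metric, here is a minimal usage sketch of the publicly available `lpips` package; the tensor shapes and grayscale-to-RGB handling are illustrative and not taken from the SdCT-GAN paper.

```python
# Minimal LPIPS usage sketch (requires the `lpips` package,
# https://github.com/richzhang/PerceptualSimilarity).
import lpips
import torch

loss_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, the package default

# LPIPS expects RGB tensors of shape (N, 3, H, W) scaled to [-1, 1];
# grayscale CT slices can be repeated across the channel dimension.
recon_slice = torch.rand(1, 1, 256, 256) * 2 - 1  # placeholder reconstructed slice
gt_slice = torch.rand(1, 1, 256, 256) * 2 - 1     # placeholder ground-truth slice
distance = loss_fn(recon_slice.repeat(1, 3, 1, 1), gt_slice.repeat(1, 3, 1, 1))
print(distance.item())  # lower means perceptually closer
```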
arXiv Detail & Related papers (2023-09-10T08:16:02Z)
- XTransCT: Ultra-Fast Volumetric CT Reconstruction using Two Orthogonal X-Ray Projections for Image-guided Radiation Therapy via a Transformer Network [8.966238080182263]
We introduce a novel Transformer architecture, termed XTransCT, to facilitate real-time reconstruction of CT images from two-dimensional X-ray images.
Our findings indicate that our algorithm surpasses other methods in image quality, structural precision, and generalizability.
In comparison to previous 3D convolution-based approaches, we note a substantial speed increase of approximately 300%, achieving 44 ms per 3D image reconstruction.
arXiv Detail & Related papers (2023-05-31T07:41:10Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- Perspective Projection-Based 3D CT Reconstruction from Biplanar X-rays [32.98966469644061]
We propose PerX2CT, a novel framework for CT reconstruction from X-rays.
Our proposed method provides a different combination of features for each coordinate, which implicitly allows the model to obtain information about its 3-D location.
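The summary only hints at how these per-coordinate features are formed. As a hypothetical illustration, the sketch below samples features for each 3-D query point from two view feature maps and concatenates them with the coordinate itself; it uses a simple orthographic projection as a stand-in for the perspective projection the paper is built around, and the function and variable names are not from PerX2CT.

```python
import torch
import torch.nn.functional as F

def query_biplanar_features(coords, feat_frontal, feat_lateral):
    """Hypothetical per-coordinate feature lookup for a biplanar setup.
    coords: (N, 3) points in a normalized volume, values in [-1, 1].
    feat_frontal, feat_lateral: (1, C, H, W) feature maps from the two views.
    Returns a (N, 2C + 3) feature vector per coordinate."""
    # Orthographic fallback: drop the depth axis of each view to get 2-D sample
    # locations (the paper itself uses perspective projection with known geometry).
    xy_frontal = coords[:, [0, 1]].view(1, -1, 1, 2)   # frontal view: keep x, y
    xy_lateral = coords[:, [2, 1]].view(1, -1, 1, 2)   # lateral view: keep z, y
    f_f = F.grid_sample(feat_frontal, xy_frontal, align_corners=True).squeeze(-1).squeeze(0).T
    f_l = F.grid_sample(feat_lateral, xy_lateral, align_corners=True).squeeze(-1).squeeze(0).T
    return torch.cat([f_f, f_l, coords], dim=-1)
```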
arXiv Detail & Related papers (2023-03-09T14:45:25Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
- End-To-End Convolutional Neural Network for 3D Reconstruction of Knee Bones From Bi-Planar X-Ray Images [6.645111950779666]
We present an end-to-end Convolutional Neural Network (CNN) approach for 3D reconstruction of knee bones directly from two bi-planar X-ray images.
arXiv Detail & Related papers (2020-04-02T08:37:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.