Perspective Projection-Based 3D CT Reconstruction from Biplanar X-rays
- URL: http://arxiv.org/abs/2303.05297v1
- Date: Thu, 9 Mar 2023 14:45:25 GMT
- Title: Perspective Projection-Based 3D CT Reconstruction from Biplanar X-rays
- Authors: Daeun Kyung, Kyungmin Jo, Jaegul Choo, Joonseok Lee, Edward Choi
- Abstract summary: We propose PerX2CT, a novel CT reconstruction framework from X-ray.
Our proposed method provides a different combination of features for each coordinate which implicitly allows the model to obtain information about the 3D location.
- Score: 32.98966469644061
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: X-ray computed tomography (CT) is one of the most common imaging techniques
used to diagnose various diseases in the medical field. Its high contrast
sensitivity and spatial resolution allow the physician to observe details of
body parts such as bones, soft tissue, blood vessels, etc. As it involves
potentially harmful radiation exposure to patients and surgeons, however,
reconstructing 3D CT volume from perpendicular 2D X-ray images is considered a
promising alternative, thanks to its lower radiation risk and better
accessibility. This is highly challenging though, since it requires
reconstruction of 3D anatomical information from 2D images with limited views,
where all the information is overlapped. In this paper, we propose PerX2CT, a
novel framework for CT reconstruction from X-rays that reflects the perspective
projection scheme. Our proposed method provides a different combination of
features for each coordinate, which implicitly allows the model to obtain
information about the 3D location. We reveal the potential to reconstruct the
selected part of CT with high resolution by properly using the coordinate-wise
local and global features. Our approach shows potential for use in clinical
applications with low computational complexity and fast inference time,
outperforming baselines on multiple evaluation metrics.
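The abstract's core idea, that under a perspective projection each 3D coordinate maps to a distinct location on each X-ray detector plane, so each voxel receives its own combination of view features, can be sketched as follows. This is a minimal illustration under a pinhole-camera assumption; the source positions, distances, and function names are hypothetical and not taken from the paper's implementation.

```python
def dot(a, b):
    """Dot product of two 3-vectors given as tuples/lists."""
    return sum(x * y for x, y in zip(a, b))

def perspective_project(point, source, detector_dist, axes):
    """Project a 3D point onto a detector plane with a pinhole model.

    `source` is the X-ray source position, `detector_dist` the
    source-to-detector distance along the viewing axis, and
    `axes` = (u, v, w) an orthonormal basis where w points from the
    source toward the detector. Returns (u, v) detector coordinates.
    """
    u, v, w = axes
    d = [p - s for p, s in zip(point, source)]  # ray from source to point
    depth = dot(d, w)                  # distance along the viewing axis
    scale = detector_dist / depth      # perspective scaling: farther -> smaller
    return (dot(d, u) * scale, dot(d, v) * scale)

# Two perpendicular views, as in a biplanar X-ray setup (positions in mm,
# chosen for illustration only).
src_ap = (0.0, -500.0, 0.0)                                  # frontal source
axes_ap = ((1, 0, 0), (0, 0, 1), (0, 1, 0))
src_lat = (-500.0, 0.0, 0.0)                                 # lateral source
axes_lat = ((0, 1, 0), (0, 0, 1), (1, 0, 0))

voxel = (10.0, 20.0, 30.0)            # one CT voxel coordinate
uv_ap = perspective_project(voxel, src_ap, 1000.0, axes_ap)
uv_lat = perspective_project(voxel, src_lat, 1000.0, axes_lat)
```

Because the same voxel lands at different (u, v) positions on the two planes, sampling 2D feature maps at those projected positions yields a coordinate-specific feature combination, which is the mechanism by which a model can implicitly recover 3D location from biplanar inputs.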
Related papers
- DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays [41.393567374399524]
We propose DiffuX2CT, which models CT reconstruction from ultra-sparse X-rays as a conditional diffusion process.
By doing so, DiffuX2CT achieves structure-controllable reconstruction, which enables 3D structural information to be recovered from 2D X-rays.
As an extra contribution, we collect a real-world lumbar CT dataset, called LumbarV, as a new benchmark to verify the clinical significance and performance of CT reconstruction from X-rays.
arXiv Detail & Related papers (2024-07-18T14:20:04Z)
- X-ray2CTPA: Generating 3D CTPA scans from 2D X-ray conditioning [24.233484690096898]
Chest X-rays or chest radiography (CXR) enables limited imaging compared to computed tomography (CT) scans.
CT scans entail higher costs, greater radiation exposure, and are less accessible than CXRs.
In this work, we explore cross-modal translation from a 2D low contrast-resolution X-ray input to a 3D high contrast- and spatial-resolution CTPA scan.
arXiv Detail & Related papers (2024-06-23T13:53:35Z)
- SdCT-GAN: Reconstructing CT from Biplanar X-Rays with Self-driven Generative Adversarial Networks [6.624839896733912]
This paper presents a new self-driven generative adversarial network model (SdCT-GAN) for reconstruction of 3D CT images.
It is motivated to pay more attention to image details by introducing a novel auto-encoder structure in the discriminator.
The LPIPS evaluation metric is adopted, which can quantitatively evaluate the fine contours and textures of reconstructed images better than existing metrics.
arXiv Detail & Related papers (2023-09-10T08:16:02Z)
- On the Localization of Ultrasound Image Slices within Point Distribution Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z)
- Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view approach to vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method can learn the multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z)
- XTransCT: Ultra-Fast Volumetric CT Reconstruction using Two Orthogonal X-Ray Projections for Image-guided Radiation Therapy via a Transformer Network [8.966238080182263]
We introduce a novel Transformer architecture, termed XTransCT, to facilitate real-time reconstruction of CT images from two-dimensional X-ray images.
Our findings indicate that our algorithm surpasses other methods in image quality, structural precision, and generalizability.
In comparison to previous 3D convolution-based approaches, we note a substantial speed increase of approximately 300%, achieving 44 ms per 3D image reconstruction.
arXiv Detail & Related papers (2023-05-31T07:41:10Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- X-Ray2EM: Uncertainty-Aware Cross-Modality Image Reconstruction from X-Ray to Electron Microscopy in Connectomics [55.6985304397137]
We propose an uncertainty-aware 3D reconstruction model that translates X-ray images to EM-like images with enhanced membrane segmentation quality.
This shows its potential for developing simpler, faster, and more accurate X-ray based connectomics pipelines.
arXiv Detail & Related papers (2023-03-02T00:52:41Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray [14.10611608681131]
Excessive ionising radiation can lead to deterministic and harmful effects on the body.
This paper proposes a Deep Learning model that learns to reconstruct CT projections from a few or even a single-view X-ray.
arXiv Detail & Related papers (2022-02-02T13:25:23Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without groundtruth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.