Improving Computed Tomography (CT) Reconstruction via 3D Shape Induction
- URL: http://arxiv.org/abs/2208.10937v1
- Date: Tue, 23 Aug 2022 13:06:02 GMT
- Title: Improving Computed Tomography (CT) Reconstruction via 3D Shape Induction
- Authors: Elena Sizikova, Xu Cao, Ashia Lewis, Kenny Moise, Megan Coffee
- Abstract summary: We propose shape induction, that is, learning the shape of 3D CT from X-ray without CT supervision, as a novel technique to incorporate realistic X-ray distributions during training of a reconstruction model.
Our experiments demonstrate that this process improves both the perceptual quality of generated CT and the accuracy of downstream classification of pulmonary infectious diseases.
- Score: 3.1498833540989413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chest computed tomography (CT) imaging adds valuable insight in the diagnosis and management of pulmonary infectious diseases, like tuberculosis (TB).
However, due to cost and resource limitations, only X-ray images may be available for initial diagnosis or follow-up comparison imaging during treatment.
Because of their projective nature, X-ray images may be more difficult for clinicians to interpret.
The lack of publicly available paired X-ray and CT image datasets makes it challenging to train a 3D reconstruction model.
In addition, chest X-ray imaging may rely on different device modalities with varying image quality, and variation in the underlying population disease spectrum creates diversity in the inputs.
We propose shape induction, that is, learning the shape of 3D CT from X-ray without CT supervision, as a novel technique to incorporate realistic X-ray distributions during training of a reconstruction model.
Our experiments demonstrate that this process improves both the perceptual quality of generated CT and the accuracy of downstream classification of pulmonary infectious diseases.
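The abstract does not spell out the training objective, so the snippet below is only a minimal sketch of how an X-ray-only shape term could enter a reconstruction training loop: a supervised CT loss on scarce paired data is combined with a reprojection-consistency loss on unpaired X-rays. The toy network, the mean-intensity projector, and the loss weight `lam` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyX2CT(nn.Module):
    """Toy X-ray (B, 1, H, W) -> CT volume (B, 1, D, H, W) reconstructor (hypothetical)."""
    def __init__(self, depth=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, depth, 3, padding=1), nn.ReLU(),
            nn.Conv2d(depth, depth, 3, padding=1),
        )

    def forward(self, xray):
        feat = self.net(xray)      # (B, D, H, W)
        return feat.unsqueeze(1)   # reinterpret channels as depth -> (B, 1, D, H, W)

def project_to_xray(ct_volume):
    """Crude parallel-beam surrogate: average intensity along the depth axis."""
    return ct_volume.mean(dim=2)   # (B, 1, H, W)

def train_step(model, optimizer, paired_xray, paired_ct, unpaired_xray, lam=0.1):
    """One step mixing a supervised CT loss with an X-ray-only consistency term."""
    optimizer.zero_grad()
    # Supervised term on the (scarce) paired X-ray/CT data.
    sup_loss = F.l1_loss(model(paired_xray), paired_ct)
    # Shape-induction-style term: reconstruct from an unpaired X-ray, reproject it,
    # and require the reprojection to match the input X-ray (no CT supervision).
    reproj = project_to_xray(model(unpaired_xray))
    shape_loss = F.l1_loss(reproj, unpaired_xray)
    loss = sup_loss + lam * shape_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

A toy usage would draw paired and unpaired batches from separate loaders, e.g. `model = TinyX2CT()` with `torch.optim.Adam(model.parameters(), lr=1e-4)`; in practice the projector would presumably model realistic X-ray geometry (e.g., a digitally reconstructed radiograph) rather than a simple mean along depth.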
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays [41.393567374399524]
We propose DiffuX2CT, which models CT reconstruction from ultra-sparse X-rays as a conditional diffusion process.
By doing so, DiffuX2CT achieves structure-controllable reconstruction, which enables 3D structural information to be recovered from 2D X-rays.
As an extra contribution, we collect a real-world lumbar CT dataset, called LumbarV, as a new benchmark to verify the clinical significance and performance of CT reconstruction from X-rays.
arXiv Detail & Related papers (2024-07-18T14:20:04Z)
- XProspeCT: CT Volume Generation from Paired X-Rays [0.0]
We build on previous research to convert X-ray images into simulated CT volumes.
Model variations include UNet architectures, custom connections, activation functions, loss functions, and a novel back projection approach (a generic back-projection sketch appears after this list).
arXiv Detail & Related papers (2024-02-11T21:57:49Z)
- UMedNeRF: Uncertainty-aware Single View Volumetric Rendering for Medical Neural Radiance Fields [38.62191342903111]
We propose an Uncertainty-aware MedNeRF (UMedNeRF) network based on generated radiation fields.
We show the results of CT projection rendering with a single X-ray and compare our method with other methods based on generated radiation fields.
arXiv Detail & Related papers (2023-11-10T02:47:15Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep-learning-based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental results show that the proposed method can effectively correct motion artifacts and achieves smaller errors than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Image Synthesis with Disentangled Attributes for Chest X-Ray Nodule Augmentation and Detection [52.93342510469636]
Lung nodule detection in chest X-ray (CXR) images is commonly used for early screening of lung cancer.
Deep-learning-based Computer-Assisted Diagnosis (CAD) systems can support radiologists in nodule screening on CXR.
To alleviate the limited availability of such datasets, lung nodule synthesis methods have been proposed for data augmentation.
arXiv Detail & Related papers (2022-07-19T16:38:48Z)
- MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray [14.10611608681131]
Excessive ionising radiation can lead to deterministic and harmful effects on the body.
This paper proposes a Deep Learning model that learns to reconstruct CT projections from a few or even a single X-ray view.
arXiv Detail & Related papers (2022-02-02T13:25:23Z)
- Improving Tuberculosis (TB) Prediction using Synthetically Generated Computed Tomography (CT) Images [0.17499351967216337]
Pulmonary infections can often be best imaged and evaluated through computed tomography (CT) scans.
X-ray, a different type of imaging procedure, is inexpensive, more widely available, and often accessible at the bedside, but offers only a simpler, two-dimensional image.
We show that by relying on a model that learns to generate CT images from X-rays synthetically, we can improve the automatic disease classification accuracy.
arXiv Detail & Related papers (2021-09-23T16:35:15Z)
- In-Line Image Transformations for Imbalanced, Multiclass Computer Vision Classification of Lung Chest X-Rays [91.3755431537592]
This study leverages a body of literature to apply image transformations that help balance the lack of COVID-19 lung chest X-ray (LCXR) data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - XraySyn: Realistic View Synthesis From a Single Radiograph Through CT
Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without ground-truth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
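For context on the back-projection idea mentioned in the XProspeCT entry above, the NumPy sketch below shows a generic unfiltered back projection of two orthogonal X-rays into a coarse volume under a parallel-beam assumption; the axis conventions and simple averaging are illustrative assumptions, not the paper's novel variant.

```python
import numpy as np

def backproject_biplanar(frontal, lateral, depth):
    """Unfiltered back projection of two orthogonal X-rays into a coarse volume.

    frontal: (H, W) projection integrated along the depth (anterior-posterior) axis.
    lateral: (H, D) projection integrated along the width (left-right) axis.
    Returns a (D, H, W) volume; the axis conventions are illustrative assumptions.
    """
    H, W = frontal.shape
    assert lateral.shape == (H, depth)
    # Smear each 2D projection back along the axis it was integrated over, then average.
    vol_frontal = np.broadcast_to(frontal[None, :, :], (depth, H, W))    # repeat over D
    vol_lateral = np.broadcast_to(lateral.T[:, :, None], (depth, H, W))  # repeat over W
    return 0.5 * (vol_frontal + vol_lateral)

# Example: two random 64x64 "projections" -> a 64x64x64 coarse volume.
frontal = np.random.rand(64, 64)
lateral = np.random.rand(64, 64)
volume = backproject_biplanar(frontal, lateral, depth=64)
print(volume.shape)  # (64, 64, 64)
```

Learned approaches such as those surveyed above typically refine this kind of coarse, streaky volume with neural networks rather than using it directly.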