XProspeCT: CT Volume Generation from Paired X-Rays
- URL: http://arxiv.org/abs/2403.00771v1
- Date: Sun, 11 Feb 2024 21:57:49 GMT
- Title: XProspeCT: CT Volume Generation from Paired X-Rays
- Authors: Benjamin Paulson, Joshua Goldshteyn, Sydney Balboni, John Cisler,
Andrew Crisler, Natalia Bukowski, Julia Kalish, Theodore Colwell
- Abstract summary: We build on previous research to convert X-ray images into simulated CT volumes.
Model variations include UNet architectures, custom connections, activation functions, loss functions, and a novel back projection approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computed tomography (CT) is a beneficial imaging tool for diagnostic
purposes. CT scans provide detailed information concerning the internal
anatomic structures of a patient, but present higher radiation dose and costs
compared to X-ray imaging. In this paper, we build on previous research to
convert orthogonal X-ray images into simulated CT volumes by exploring larger
datasets and various model structures. Significant model variations include
UNet architectures, custom connections, activation functions, loss functions,
optimizers, and a novel back projection approach.
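The paper does not detail its back projection approach here; as a point of reference, the classical unprojection idea it builds on can be sketched minimally. In this hypothetical example (function name, shapes, and the averaging rule are assumptions, not the authors' method), each orthogonal X-ray is smeared back along the axis it was projected over, and the two resulting volumes are averaged:

```python
import numpy as np

def back_project(frontal: np.ndarray, lateral: np.ndarray) -> np.ndarray:
    """Naive back projection of two orthogonal X-rays into a volume.

    frontal: (H, W) image, projected along the depth axis
    lateral: (H, D) image, projected along the width axis
    Returns an (H, W, D) volume where each voxel averages the two
    ray sums that pass through it.
    """
    assert frontal.shape[0] == lateral.shape[0], "views must share the vertical axis"
    # Broadcast each 2D view along the axis it was projected over.
    vol_frontal = frontal[:, :, None]   # (H, W, 1), repeated over depth
    vol_lateral = lateral[:, None, :]   # (H, 1, D), repeated over width
    return (vol_frontal + vol_lateral) / 2.0  # (H, W, D)

vol = back_project(np.ones((4, 3)), np.zeros((4, 5)))
print(vol.shape)  # (4, 3, 5)
```

A learned model such as the UNet variants described above would then refine this coarse volume rather than use it directly.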
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays [41.393567374399524]
We propose DiffuX2CT, which models CT reconstruction from ultra-sparse X-rays as a conditional diffusion process.
By doing so, DiffuX2CT achieves structure-controllable reconstruction, which enables 3D structural information to be recovered from 2D X-rays.
As an extra contribution, we collect a real-world lumbar CT dataset, called LumbarV, as a new benchmark to verify the clinical significance and performance of CT reconstruction from X-rays.
arXiv Detail & Related papers (2024-07-18T14:20:04Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- UMedNeRF: Uncertainty-aware Single View Volumetric Rendering for Medical Neural Radiance Fields [38.62191342903111]
We propose an Uncertainty-aware MedNeRF (UMedNeRF) network based on generated radiation fields.
We show the results of CT projection rendering with a single X-ray and compare our method with other methods based on generated radiation fields.
arXiv Detail & Related papers (2023-11-10T02:47:15Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, the Show-Attend-Tell and the GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- Improving Computed Tomography (CT) Reconstruction via 3D Shape Induction [3.1498833540989413]
We propose shape induction, that is, learning the shape of 3D CT from X-ray without CT supervision, as a novel technique to incorporate realistic X-ray distributions during training of a reconstruction model.
Our experiments demonstrate that this process improves both the perceptual quality of generated CT and the accuracy of down-stream classification of pulmonary infectious diseases.
arXiv Detail & Related papers (2022-08-23T13:06:02Z)
- Image Synthesis with Disentangled Attributes for Chest X-Ray Nodule Augmentation and Detection [52.93342510469636]
Lung nodule detection in chest X-ray (CXR) images is common in the early screening of lung cancers.
Deep-learning-based Computer-Assisted Diagnosis (CAD) systems can support radiologists for nodule screening in CXR.
To alleviate the limited availability of such datasets, lung nodule synthesis methods are proposed for the sake of data augmentation.
arXiv Detail & Related papers (2022-07-19T16:38:48Z)
- Context-Aware Transformers For Spinal Cancer Detection and Radiological Grading [70.04389979779195]
This paper proposes a novel transformer-based model architecture for medical imaging problems involving analysis of vertebrae.
It considers two applications of such models in MR images: (a) detection of spinal metastases and the related conditions of vertebral fractures and metastatic cord compression.
We show that by considering the context of vertebral bodies in the image, SCT improves the accuracy of several gradings compared to previously published models.
arXiv Detail & Related papers (2022-06-27T10:31:03Z)
- MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray [14.10611608681131]
Excessive ionising radiation can lead to deterministic and harmful effects on the body.
This paper proposes a Deep Learning model that learns to reconstruct CT projections from a few or even a single-view X-ray.
arXiv Detail & Related papers (2022-02-02T13:25:23Z)
- X-ray Dissectography Enables Stereotography to Improve Diagnostic Performance [5.357314252311141]
We propose "x-ray dissectography" to extract a target organ/tissue digitally from few radiographic projections.
Experiments show that x-ray stereography of an isolated organ, such as the lungs, can be achieved.
X-ray dissectography promises to be a new x-ray imaging modality for CT-grade diagnosis at a radiation dose and system cost comparable to those of radiographic or tomosynthetic imaging.
arXiv Detail & Related papers (2021-11-30T00:31:59Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without groundtruth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.