Transformers for CT Reconstruction From Monoplanar and Biplanar
Radiographs
- URL: http://arxiv.org/abs/2305.06965v1
- Date: Thu, 11 May 2023 16:43:39 GMT
- Authors: Firas Khader, Gustav Müller-Franzes, Tianyu Han, Sven Nebelung,
Christiane Kuhl, Johannes Stegmaier, Daniel Truhn
- Abstract summary: We tackle the problem of reconstructing CT images from biplanar x-rays only.
X-rays are widely available, and even though a CT reconstructed from these radiographs is not a replacement for a complete CT in the diagnostic setting, it might spare patients from radiation.
We propose a novel method based on the transformer architecture that frames the underlying task as a language translation problem.
- Score: 0.11219061154635457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computed Tomography (CT) scans provide detailed and accurate information of
internal structures in the body. They are constructed by sending x-rays through
the body from different directions and combining this information into a
three-dimensional volume. Such volumes can then be used to diagnose a wide
range of conditions and allow for volumetric measurements of organs. In this
work, we tackle the problem of reconstructing CT images from biplanar x-rays
only. X-rays are widely available, and even though a CT reconstructed from these
radiographs is not a replacement for a complete CT in the diagnostic setting, it
might spare patients from radiation in cases where a CT is acquired only for
rough measurements such as determining organ size. We propose a novel method
based on the transformer architecture that frames the underlying task as a
language translation problem. Radiographs and CT images are first embedded
into latent quantized codebook vectors using two different autoencoder
networks. We then train a GPT model to reconstruct the codebook vectors of the
CT image, conditioned on the codebook vectors of the x-rays, and show that this
approach yields realistic-looking images. To encourage further research in
this direction, we make our code publicly available on GitHub: XXX.
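The core of the pipeline described above is vector quantization: each continuous latent vector from the autoencoder is snapped to its nearest codebook entry, and the resulting integer indices become the "words" a GPT can model. The following is an illustrative NumPy sketch under assumed shapes and names (codebook size, latent dimension, and the `quantize` function are not taken from the paper's code):

```python
import numpy as np

# Minimal sketch of the quantization step: encoder outputs are mapped to
# the indices of their nearest codebook vectors. Shapes are illustrative.
rng = np.random.default_rng(0)

codebook = rng.normal(size=(512, 64))          # 512 learnable codes, dim 64
encoder_out = rng.normal(size=(16, 64))        # 16 latent vectors for one x-ray

def quantize(z, codebook):
    """Map each latent vector to the index of its nearest codebook entry."""
    # squared Euclidean distance between every latent and every code
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

xray_tokens = quantize(encoder_out, codebook)  # discrete token sequence, shape (16,)
```

In the full method, two such autoencoders are trained (one for radiographs, one for CT), and the GPT is trained autoregressively to predict the CT token sequence conditioned on the x-ray tokens; decoding the predicted tokens through the CT decoder yields the reconstructed volume.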
Related papers
- Coarse-Fine View Attention Alignment-Based GAN for CT Reconstruction from Biplanar X-Rays [22.136553745483305]
We propose a novel attention-informed coarse-to-fine cross-view fusion method to combine the features extracted from the biplanar views.
Experiments demonstrate the superiority of the proposed method over state-of-the-art methods.
arXiv Detail & Related papers (2024-08-19T06:57:07Z)
- DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays [41.393567374399524]
We propose DiffuX2CT, which models CT reconstruction from ultra-sparse X-rays as a conditional diffusion process.
By doing so, DiffuX2CT achieves structure-controllable reconstruction, which enables 3D structural information to be recovered from 2D X-rays.
As an extra contribution, we collect a real-world lumbar CT dataset, called LumbarV, as a new benchmark to verify the clinical significance and performance of CT reconstruction from X-rays.
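DiffuX2CT's conditioning mechanism is not detailed here, but the diffusion process such methods build on can be illustrated with the standard DDPM forward step: a clean volume is noised in closed form, and a network (conditioned on x-ray features) is trained to invert this. A generic sketch, not the paper's actual code:

```python
import numpy as np

# Standard DDPM forward diffusion: x_t is sampled from q(x_t | x_0) in
# closed form using the cumulative noise schedule. Illustrative only.
rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)    # cumulative product \bar{alpha}_t

def noisy_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(ab_t) x0, (1 - ab_t) I)."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

x0 = rng.normal(size=(8, 8, 8))         # toy stand-in for a CT volume
xt, eps = noisy_sample(x0, t=T - 1)     # near-pure noise at the final step
```

The reverse (generative) process then denoises step by step, with the x-ray-derived condition steering each denoising step toward the corresponding anatomy.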
arXiv Detail & Related papers (2024-07-18T14:20:04Z)
- CT Reconstruction from Few Planar X-rays with Application towards Low-resource Radiotherapy [20.353246282326943]
We propose a method to generate CT volumes from few (5) planar X-ray observations using a prior data distribution.
To focus the generation task on clinically-relevant features, our model can also leverage anatomical guidance during training.
Our method outperforms recent sparse-view CT reconstruction baselines on standard pixel- and structure-level metrics.
arXiv Detail & Related papers (2023-08-04T01:17:57Z)
- Radiomics-Guided Global-Local Transformer for Weakly Supervised Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z)
- Context-Aware Transformers For Spinal Cancer Detection and Radiological Grading [70.04389979779195]
This paper proposes a novel transformer-based model architecture for medical imaging problems involving analysis of vertebrae.
It considers two applications of such models in MR images: (a) detection of spinal metastases and the related conditions of vertebral fractures and metastatic cord compression.
We show that by considering the context of vertebral bodies in the image, SCT improves the accuracy of several gradings compared to a previously published model.
arXiv Detail & Related papers (2022-06-27T10:31:03Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray [14.10611608681131]
Excessive ionising radiation can lead to deterministic and harmful effects on the body.
This paper proposes a Deep Learning model that learns to reconstruct CT projections from a few X-ray views, or even a single one.
arXiv Detail & Related papers (2022-02-02T13:25:23Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
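The backpropagation-free idea above can be illustrated by estimating a kernel directly from the image patches around user-drawn markers. This is a hypothetical NumPy sketch: the function names and the simple mean-patch estimator are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Estimate a conv kernel from patches around user-drawn markers, with no
# backpropagation: the kernel is the normalized mean of the marked patches,
# so it responds strongly to regions resembling the marked ones.
rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32))
markers = [(10, 10), (20, 5), (15, 25)]        # user-clicked (row, col) points

def kernel_from_markers(image, markers, k=3):
    """Build one k x k kernel from the mean of patches around markers."""
    r = k // 2
    patches = [image[y - r:y + r + 1, x - r:x + r + 1] for y, x in markers]
    kern = np.mean(patches, axis=0)
    kern -= kern.mean()                        # zero-mean, like a learned filter
    return kern / (np.linalg.norm(kern) + 1e-8)

def convolve2d_valid(img, kern):
    """Plain 'valid' 2-D correlation with the estimated kernel."""
    k = kern.shape[0]
    out = np.zeros((img.shape[0] - k + 1, img.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + k, j:j + k] * kern).sum()
    return out

kern = kernel_from_markers(image, markers)
response = convolve2d_valid(image, kern)       # enhanced map, shape (30, 30)
```

Stacking several such layers, each with kernels estimated from markers at its input resolution, gives a feature extractor without any gradient-based training.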
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without groundtruth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
- Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.