X-ray Dissectography Enables Stereotography to Improve Diagnostic
Performance
- URL: http://arxiv.org/abs/2111.15040v1
- Date: Tue, 30 Nov 2021 00:31:59 GMT
- Title: X-ray Dissectography Enables Stereotography to Improve Diagnostic
Performance
- Authors: Chuang Niu and Ge Wang
- Abstract summary: We propose "x-ray dissectography" to extract a target organ/tissue digitally from a few radiographic projections.
Experiments show that x-ray stereography of an isolated organ, such as the lungs, can be achieved.
X-ray dissectography promises to be a new x-ray imaging modality for CT-grade diagnosis at a radiation dose and system cost comparable to those of radiographic or tomosynthetic imaging.
- Score: 5.357314252311141
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: X-ray imaging is the most popular medical imaging technology. While x-ray
radiography is rather cost-effective, tissue structures are superimposed along
the x-ray paths. On the other hand, computed tomography (CT) reconstructs
internal structures, but it increases radiation dose and is complicated and
expensive. Here we propose "x-ray dissectography" to extract a target
organ/tissue digitally from a few radiographic projections for stereographic and
tomographic analysis in the deep learning framework. As an exemplary
embodiment, we propose a general X-ray dissectography network, a dedicated
X-ray stereotography network, and the X-ray imaging systems to implement these
functionalities. Our experiments show that x-ray stereography of an isolated
organ, such as the lungs in this case, can be achieved, suggesting the
feasibility of transforming conventional radiographic reading into
stereographic examination of the isolated organ, which potentially allows
higher sensitivity and specificity, and even tomographic visualization of the
target. With further improvements, x-ray dissectography promises to be a new
x-ray imaging modality for CT-grade diagnosis at a radiation dose and system
cost comparable to those of radiographic or tomosynthetic imaging.
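The stereographic reading proposed above rests on a simple geometric idea: two projections acquired from slightly offset source positions give each feature a disparity, from which its depth can be triangulated. A minimal toy sketch of that principle follows; the pinhole-style geometry, function name, and all parameter values are illustrative assumptions, not the paper's actual imaging model.

```python
# Toy illustration of the stereographic principle behind x-ray stereotography:
# two views from shifted source positions shift each feature laterally, and
# that disparity encodes depth. Geometry and numbers are invented for
# illustration only.

def depth_from_disparity(disparity_px, baseline_mm, source_detector_mm, pixel_mm):
    """Triangulate a feature's depth from its disparity between two views.

    disparity_px       : feature shift between the two views, in pixels
    baseline_mm        : separation of the two x-ray source positions
    source_detector_mm : source-to-detector distance
    pixel_mm           : detector pixel pitch
    """
    disparity_mm = disparity_px * pixel_mm
    # Similar triangles: depth = baseline * focal length / disparity
    return baseline_mm * source_detector_mm / disparity_mm

# A feature shifted 100 px between views, with a 25 mm source baseline,
# 1000 mm source-detector distance, and 0.5 mm pixels:
z = depth_from_disparity(100, 25.0, 1000.0, 0.5)
print(z)  # → 500.0 (mm from the source plane)
```

Larger disparities correspond to features closer to the sources, which is how two flat projections yield a depth-resolved impression of an isolated organ.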
Related papers
- Multi-view X-ray Image Synthesis with Multiple Domain Disentanglement from CT Scans [10.72672892416061]
Excessive X-ray doses pose potential risks to human health.
Data-driven algorithms that map volume scans to X-ray images are restricted by the scarcity of paired X-ray and volume data.
We propose CT2X-GAN to synthesize X-ray images in an end-to-end manner using content and style disentanglement from three different image domains.
arXiv Detail & Related papers (2024-04-18T04:25:56Z)
- XProspeCT: CT Volume Generation from Paired X-Rays [0.0]
We build on previous research to convert X-ray images into simulated CT volumes.
Model variations include UNet architectures, custom connections, activation functions, loss functions, and a novel back projection approach.
arXiv Detail & Related papers (2024-02-11T21:57:49Z)
- UMedNeRF: Uncertainty-aware Single View Volumetric Rendering for Medical Neural Radiance Fields [38.62191342903111]
We propose an Uncertainty-aware MedNeRF (UMedNeRF) network based on generated radiation fields.
We show the results of CT projection rendering with a single X-ray and compare our method with other methods based on generated radiation fields.
arXiv Detail & Related papers (2023-11-10T02:47:15Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Act Like a Radiologist: Radiology Report Generation across Anatomical Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework across six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking it down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
arXiv Detail & Related papers (2023-05-26T07:12:35Z)
- X-Ray2EM: Uncertainty-Aware Cross-Modality Image Reconstruction from X-Ray to Electron Microscopy in Connectomics [55.6985304397137]
We propose an uncertainty-aware 3D reconstruction model that translates X-ray images to EM-like images with enhanced membrane segmentation quality.
This shows its potential for developing simpler, faster, and more accurate X-ray based connectomics pipelines.
arXiv Detail & Related papers (2023-03-02T00:52:41Z)
- Improving Computed Tomography (CT) Reconstruction via 3D Shape Induction [3.1498833540989413]
We propose shape induction, that is, learning the shape of 3D CT from X-ray without CT supervision, as a novel technique to incorporate realistic X-ray distributions during training of a reconstruction model.
Our experiments demonstrate that this process improves both the perceptual quality of generated CT and the accuracy of down-stream classification of pulmonary infectious diseases.
arXiv Detail & Related papers (2022-08-23T13:06:02Z)
- MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray [14.10611608681131]
Excessive ionising radiation can lead to deterministic and harmful effects on the body.
This paper proposes a deep learning model that learns to reconstruct CT projections from a few or even a single X-ray view.
arXiv Detail & Related papers (2022-02-02T13:25:23Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
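The classical baseline such learned beamformers build on is delay-and-sum (DAS): delay each element's received trace by the round-trip time to a focal point, then sum coherently. A minimal numpy sketch of that baseline, with an array geometry and synthetic signal invented purely for illustration:

```python
import numpy as np

# Delay-and-sum (DAS) beamforming for one focal point: compute the two-way
# propagation delay from each array element to the point, sample each
# element's RF trace at that delay, and coherently sum. All geometry and
# signals below are made up for illustration.

def das_beamform_point(rf, element_x, focus, c=1540.0, fs=40e6):
    """Beamform a single focal point from per-element RF traces.

    rf        : (n_elements, n_samples) received echo traces
    element_x : (n_elements,) lateral element positions in metres
    focus     : (x, z) focal point in metres
    c         : speed of sound, m/s
    fs        : sampling frequency, Hz
    """
    fx, fz = focus
    dist = np.sqrt((element_x - fx) ** 2 + fz ** 2)   # element-to-focus distance
    idx = np.round(2.0 * dist / c * fs).astype(int)   # two-way delay in samples
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    # Sample each trace at its own delay, then sum coherently
    return rf[np.arange(rf.shape[0]), idx].sum()

# Synthetic example: a point scatterer at (0, 20 mm) seen by an 8-element
# array; each trace carries a unit echo exactly at its expected delay.
c, fs = 1540.0, 40e6
elems = np.linspace(-3.5e-3, 3.5e-3, 8)
scatterer = (0.0, 20e-3)
rf = np.zeros((8, 4096))
delays = np.round(2 * np.sqrt((elems - scatterer[0]) ** 2 + scatterer[1] ** 2) / c * fs).astype(int)
rf[np.arange(8), delays] = 1.0
print(das_beamform_point(rf, elems, scatterer))  # coherent sum over 8 elements → 8.0
```

Learned beamformers in the survey replace or augment stages of exactly this pipeline (delay estimation, apodization, summation) with trained networks.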
arXiv Detail & Related papers (2021-09-23T15:15:21Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without ground-truth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
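The projection of 3D anatomy onto a 2D plane mentioned in the XraySyn summary can be illustrated, to first order, as a Beer-Lambert line integral of attenuation. A toy parallel-beam sketch (the phantom, attenuation values, and geometry are invented for illustration, not taken from any of the papers above):

```python
import numpy as np

# A radiograph is approximately the exponentiated negative line integral of
# tissue attenuation along each ray; in a parallel-beam toy model the line
# integral is just a sum of the volume along the projection axis.

def project_radiograph(volume, axis=0):
    """Simulate a parallel-beam radiograph via the Beer-Lambert law."""
    line_integral = volume.sum(axis=axis)   # accumulated attenuation per ray
    return np.exp(-line_integral)           # transmitted intensity in [0, 1]

# A 3D "phantom": uniform soft tissue with a denser, bone-like cube inside.
vol = np.full((32, 32, 32), 0.01)
vol[10:20, 10:20, 10:20] = 0.05
img = project_radiograph(vol, axis=0)
# The dense cube casts a darker (more attenuated) shadow in the 2D projection:
print(img.shape, img.min() < img.max())  # (32, 32) True
```

This superimposition of all structures along each ray is precisely the limitation that dissectography, tomosynthesis, and the CT-synthesis papers above try to undo.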
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.