Bone Structures Extraction and Enhancement in Chest Radiographs via CNN Trained on Synthetic Data
- URL: http://arxiv.org/abs/2003.10839v1
- Date: Fri, 20 Mar 2020 20:27:50 GMT
- Title: Bone Structures Extraction and Enhancement in Chest Radiographs via CNN Trained on Synthetic Data
- Authors: Ophir Gozes and Hayit Greenspan
- Abstract summary: We present a deep learning-based image processing technique for extraction of bone structures in chest radiographs using a U-Net FCNN.
The U-Net was trained to accomplish the task in a fully supervised setting.
We show that our enhancement technique is applicable to real x-ray data, and display our results on the NIH Chest X-Ray-14 dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a deep learning-based image processing technique
for extraction of bone structures in chest radiographs using a U-Net FCNN. The
U-Net was trained to accomplish the task in a fully supervised setting. To
create the training image pairs, we employed simulated X-rays, i.e., Digitally
Reconstructed Radiographs (DRRs), derived from 664 CT scans belonging to the
LIDC-IDRI dataset. Using Hounsfield-unit (HU)-based segmentation of bone
structures in the CT domain, a synthetic 2D "Bone x-ray" DRR is produced and
used to train the network. For the reconstruction loss, we utilize two loss
functions: an L1 loss and a perceptual loss. Once the bone structures are
extracted, the original image
can be enhanced by fusing the original input x-ray and the synthesized "Bone
X-ray". We show that our enhancement technique is applicable to real x-ray
data, and display our results on the NIH Chest X-Ray-14 dataset.
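The pipeline described in the abstract (HU-based bone segmentation in the CT domain, DRR projection to 2D, and fusion of the predicted bone image back into the input x-ray) can be sketched roughly as follows. The HU threshold, the parallel-beam projection model, and the additive fusion weight are illustrative assumptions, not values reported by the paper:

```python
import numpy as np

BONE_HU_THRESHOLD = 300.0  # assumed Hounsfield-unit cutoff for bone (illustrative)

def drr_from_ct(ct_attenuation: np.ndarray, axis: int = 1) -> np.ndarray:
    """Simple parallel-beam DRR: integrate attenuation along one axis
    and normalize the projection to [0, 1]."""
    projection = ct_attenuation.sum(axis=axis).astype(np.float64)
    lo, hi = projection.min(), projection.max()
    return (projection - lo) / (hi - lo + 1e-8)

def make_training_pair(ct_hu: np.ndarray):
    """Return (input DRR, target "Bone x-ray" DRR) from a CT volume in HU."""
    # Shift HU so air (-1000 HU) maps to zero attenuation.
    attenuation = np.clip(ct_hu + 1000.0, 0.0, None)
    # HU-based bone segmentation: keep only voxels above the bone threshold.
    bone_only = np.where(ct_hu >= BONE_HU_THRESHOLD, attenuation, 0.0)
    return drr_from_ct(attenuation), drr_from_ct(bone_only)

def enhance(xray: np.ndarray, bone_pred: np.ndarray, alpha: float = 0.5):
    """Fuse the input x-ray with the predicted bone image (additive blend;
    the paper's exact fusion rule may differ)."""
    return np.clip(xray + alpha * bone_pred, 0.0, 1.0)
```

In training, `make_training_pair` would supply (input, target) images for the U-Net, and at test time `enhance` would combine a real x-ray with the network's predicted bone image.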
Related papers
- DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays (2024-07-18)
  We propose DiffuX2CT, which models CT reconstruction from ultra-sparse X-rays as a conditional diffusion process.
  By doing so, DiffuX2CT achieves structure-controllable reconstruction, which enables 3D structural information to be recovered from 2D X-rays.
  As an extra contribution, we collect a real-world lumbar CT dataset, called LumbarV, as a new benchmark to verify the clinical significance and performance of CT reconstruction from X-rays.
- End-to-End Model-based Deep Learning for Dual-Energy Computed Tomography Material Decomposition (2024-06-01)
  We propose a deep learning procedure called End-to-End Material Decomposition (E2E-DEcomp) for quantitative material decomposition.
  We show the effectiveness of the proposed direct E2E-DEcomp method on the AAPM spectral CT dataset.
- Multi-view X-ray Image Synthesis with Multiple Domain Disentanglement from CT Scans (2024-04-18)
  Excessive X-ray doses pose potential risks to human health.
  Data-driven algorithms mapping volume scans to X-ray images are restricted by the scarcity of paired X-ray and volume data.
  We propose CT2X-GAN to synthesize X-ray images in an end-to-end manner using content and style disentanglement from three different image domains.
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers (2024-03-21)
  We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
  The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
  We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
- Radiative Gaussian Splatting for Efficient X-ray Novel View Synthesis (2024-03-07)
  We propose a 3D Gaussian splatting-based framework, namely X-Gaussian, for X-ray novel view visualization.
  Experiments show that our X-Gaussian outperforms state-of-the-art methods by 6.5 dB while requiring less than 15% of the training time and offering over 73x faster inference.
- CT Reconstruction from Few Planar X-rays with Application towards Low-resource Radiotherapy (2023-08-04)
  We propose a method to generate CT volumes from few (5) planar X-ray observations using a prior data distribution.
  To focus the generation task on clinically relevant features, our model can also leverage anatomical guidance during training.
  Our method outperforms recent sparse CT reconstruction baselines in terms of standard pixel- and structure-level metrics.
- Radiomics-Guided Global-Local Transformer for Weakly Supervised Pathology Localization in Chest X-Rays (2022-07-10)
  The Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
  RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
- Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images (2021-12-23)
  The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
  The network includes a self-attention block to model long-range dependencies in the data.
  Our approach is shown to have overall performance comparable to CIRCLE GAN, while outperforming the other two approaches.
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors (2020-12-04)
  A radiograph visualizes the internal anatomy of a patient through the use of X-rays, which project 3D information onto a 2D plane.
  To the best of our knowledge, this is the first work on radiograph view synthesis.
  We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without ground-truth bone labels.
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets (2020-08-04)
  We show that a deep learning model that performs well when tested on the same dataset as its training data starts to perform poorly when tested on a dataset from a different source.
  By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
- End-To-End Convolutional Neural Network for 3D Reconstruction of Knee Bones From Bi-Planar X-Ray Images (2020-04-02)
  We present an end-to-end Convolutional Neural Network (CNN) approach for 3D reconstruction of knee bones directly from two bi-planar X-ray images.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.