CheXphoto: 10,000+ Photos and Transformations of Chest X-rays for
Benchmarking Deep Learning Robustness
- URL: http://arxiv.org/abs/2007.06199v2
- Date: Fri, 11 Dec 2020 10:11:00 GMT
- Title: CheXphoto: 10,000+ Photos and Transformations of Chest X-rays for
Benchmarking Deep Learning Robustness
- Authors: Nick A. Phillips, Pranav Rajpurkar, Mark Sabini, Rayan Krishnan,
Sharon Zhou, Anuj Pareek, Nguyet Minh Phu, Chris Wang, Mudit Jain, Nguyen
Duong Du, Steven QH Truong, Andrew Y. Ng, Matthew P. Lungren
- Score: 6.269757571876924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clinical deployment of deep learning algorithms for chest x-ray
interpretation requires a solution that can integrate into the vast spectrum of
clinical workflows across the world. An appealing approach to scaled deployment
is to leverage the ubiquity of smartphones by capturing photos of x-rays to
share with clinicians using messaging services like WhatsApp. However, the
application of chest x-ray algorithms to photos of chest x-rays requires
reliable classification in the presence of artifacts not typically encountered
in digital x-rays used to train machine learning models. We introduce
CheXphoto, a dataset of smartphone photos and synthetic photographic
transformations of chest x-rays sampled from the CheXpert dataset. To generate
CheXphoto we (1) automatically and manually captured photos of digital x-rays
under different settings, and (2) generated synthetic transformations of
digital x-rays targeted to make them look like photos of digital x-rays and
x-ray films. We release this dataset as a resource for testing and improving
the robustness of deep learning algorithms for automated chest x-ray
interpretation on smartphone photos of chest x-rays.
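The synthetic transformations described above can be illustrated with a minimal sketch. The exact CheXphoto pipeline is not specified here, so the two effects below (a glare highlight and a moiré-style banding pattern, both common artifacts of photographing a screen with a smartphone) and all function names are illustrative assumptions, operating on a digital x-ray stored as a float array in [0, 1]:

```python
import numpy as np

# Illustrative sketch only: CheXphoto's actual transformation pipeline
# is not described in this listing. These are two generic "photo-like"
# artifacts applied to a digital x-ray in [0, 1].
def add_glare(img, center, radius, strength=0.5):
    # Additive Gaussian highlight simulating a reflection on the screen.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    glare = strength * np.exp(-dist2 / (2 * radius ** 2))
    return np.clip(img + glare, 0.0, 1.0)

def add_moire(img, period=8, strength=0.1):
    # Vertical sinusoidal banding simulating screen/camera interference.
    h, w = img.shape
    pattern = strength * np.sin(2 * np.pi * np.arange(w) / period)
    return np.clip(img + pattern[None, :], 0.0, 1.0)

xray = np.full((64, 64), 0.3)  # stand-in for a digital x-ray
photo_like = add_moire(add_glare(xray, center=(32, 32), radius=10), period=6)
```

Transforms of this kind preserve the underlying anatomy while shifting the image statistics toward those of smartphone photos, which is what makes them useful for robustness benchmarking.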
Related papers
- LeDNet: Localization-enabled Deep Neural Network for Multi-Label Radiography Image Classification [0.1227734309612871]
Multi-label radiography image classification has long been a topic of interest in neural network research.
We use chest x-ray images to detect thoracic diseases for this purpose.
We propose LeDNet, a combination of localization and deep learning algorithms, to predict thoracic diseases with higher accuracy.
arXiv Detail & Related papers (2024-07-04T13:46:30Z)
- Multi-view X-ray Image Synthesis with Multiple Domain Disentanglement from CT Scans [10.72672892416061]
Excessive X-ray exposure poses potential risks to human health.
Data-driven algorithms that synthesize X-ray images from volume scans are restricted by the scarcity of paired X-ray and volume data.
We propose CT2X-GAN to synthesize the X-ray images in an end-to-end manner using the content and style disentanglement from three different image domains.
arXiv Detail & Related papers (2024-04-18T04:25:56Z)
- Radiative Gaussian Splatting for Efficient X-ray Novel View Synthesis [88.86777314004044]
We propose a 3D Gaussian splatting-based framework, namely X-Gaussian, for X-ray novel view visualization.
Experiments show that our X-Gaussian outperforms state-of-the-art methods by 6.5 dB while requiring less than 15% of the training time and offering over 73x faster inference.
arXiv Detail & Related papers (2024-03-07T00:12:08Z)
- Multi-Label Chest X-Ray Classification via Deep Learning [0.0]
The goal of this paper is to develop a lightweight solution to detect 14 different chest conditions from an X-ray image.
Along with the image features, we also use non-image features available in the data, such as X-ray view type, age, and gender.
Our aim is to improve upon previous work, expand prediction to 14 diseases and provide insight for future chest radiography research.
arXiv Detail & Related papers (2022-11-27T20:27:55Z)
- Artificial Intelligence for Automatic Detection and Classification Disease on the X-Ray Images [0.0]
This work presents rapid detection of diseases in the lung using the efficient pre-trained RepVGG deep learning algorithm.
We apply artificial intelligence to automatically detect and highlight affected areas of patients' lungs.
arXiv Detail & Related papers (2022-11-14T03:51:12Z)
- Computer Vision on X-ray Data in Industrial Production and Security Applications: A survey [89.45221564651145]
This survey reviews the recent research on using computer vision and machine learning for X-ray analysis in industrial production and security applications.
It covers the applications, techniques, evaluation metrics, datasets, and performance comparison of those techniques on publicly available datasets.
arXiv Detail & Related papers (2022-11-10T13:37:36Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
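The pairing described above, where the radiomic features of an x-ray act as the positive sample for that x-ray's image features, follows the general pattern of cross-modal contrastive learning. As a hedged sketch (not the paper's exact loss; the function name, temperature, and batch setup are illustrative assumptions), an InfoNCE-style objective over a batch can be written as:

```python
import numpy as np

# Toy sketch of a cross-modal InfoNCE-style contrastive loss: the
# radiomic feature at index i is the positive for image feature i,
# and radiomic features of other x-rays in the batch are negatives.
# This is an assumed stand-in, not the paper's exact formulation.
def info_nce(img_feats, rad_feats, temperature=0.1):
    # L2-normalize both modalities so dot products are cosine similarities.
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    rad = rad_feats / np.linalg.norm(rad_feats, axis=1, keepdims=True)
    logits = img @ rad.T / temperature           # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal (same index = same x-ray).
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
loss = info_nce(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
```

When the two modalities' features for the same x-ray align, the diagonal similarities dominate and the loss approaches zero.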
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- CheXphotogenic: Generalization of Deep Learning Models for Chest X-ray Interpretation to Photos of Chest X-rays [4.396061096553544]
We measured the diagnostic performance of 8 different chest x-ray models when applied to photos of chest x-rays.
Several models showed a drop in performance on photos, but even with this drop, some still performed comparably to radiologists.
arXiv Detail & Related papers (2020-11-12T00:16:51Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from the same dataset it was trained on performs poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
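The adversarial strategy above can be summarized with a toy objective. This is a simplified conceptual stand-in (not the paper's exact formulation; the function names and the weighting `lam` are assumptions): the feature extractor minimizes the task loss while maximizing a domain classifier's loss, so the learned features carry little information about the source dataset.

```python
import numpy as np

# Conceptual sketch of an adversarial, source-invariant training
# objective (assumed stand-in, not the paper's exact loss).
def cross_entropy(probs, label):
    # Negative log-likelihood of the true label.
    return -np.log(probs[label] + 1e-12)

def extractor_objective(task_probs, task_label, domain_probs, domain_label, lam=0.5):
    # Subtracting the domain loss implements the gradient-reversal idea:
    # do well on the task while confusing the domain classifier.
    return cross_entropy(task_probs, task_label) - lam * cross_entropy(domain_probs, domain_label)

# Example: confident correct task prediction, maximally confused domain
# classifier (uniform probabilities) -> a small objective value.
obj = extractor_objective(np.array([0.7, 0.3]), 0, np.array([0.5, 0.5]), 1)
```

In practice this trade-off is implemented with a gradient reversal layer or alternating minimax updates rather than a single subtracted loss.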
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, who found acceptable geometry in 95.8% of images and acceptable annotation masks in 96.2%, compared to 27.0% and 34.9%, respectively, for control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.