Deep Learning for Chest X-ray Analysis: A Survey
- URL: http://arxiv.org/abs/2103.08700v1
- Date: Mon, 15 Mar 2021 20:28:16 GMT
- Title: Deep Learning for Chest X-ray Analysis: A Survey
- Authors: Ecem Sogancioglu, Erdi Çallı, Bram van Ginneken, Kicky G. van Leeuwen, Keelin Murphy
- Abstract summary: Recent advances in deep learning have led to promising performance in many medical image analysis tasks.
Chest radiographs are a particularly important modality for which a variety of applications have been researched.
The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications.
- Score: 4.351399670578497
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent advances in deep learning have led to promising performance in many
medical image analysis tasks. As the most commonly performed radiological exam,
chest radiographs are a particularly important modality for which a variety of
applications have been researched. The release of multiple, large, publicly
available chest X-ray datasets in recent years has encouraged research interest
and boosted the number of publications. In this paper, we review all studies
using deep learning on chest radiographs, categorizing works by task:
image-level prediction (classification and regression), segmentation,
localization, image generation and domain adaptation. Commercially available
applications are detailed, and a comprehensive discussion of the current state
of the art and potential future directions is provided.
Related papers
- A novel approach towards the classification of Bone Fracture from Musculoskeletal Radiography images using Attention Based Transfer Learning [0.0]
We deploy an attention-based transfer learning model to detect bone fractures in X-ray scans.
Our model achieves a state-of-the-art accuracy of more than 90% in fracture classification.
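The abstract above does not specify the architecture, so the following is only a minimal sketch of an attention-based transfer-learning classifier of this kind, assuming a PyTorch setup with an ImageNet-pretrained ResNet-50 backbone and a simple attention-pooling head; none of these choices are taken from the paper.
```python
# Illustrative sketch (not the paper's exact model): pretrained backbone + attention pooling.
import torch
import torch.nn as nn
import torchvision.models as models

class AttentionTransferClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # keep spatial feature map
        self.attention = nn.Sequential(                                 # per-location weights
            nn.Conv2d(2048, 1, kernel_size=1),
            nn.Flatten(start_dim=2),
            nn.Softmax(dim=-1),
        )
        self.classifier = nn.Linear(2048, num_classes)

    def forward(self, x):
        f = self.features(x)                 # (B, 2048, H, W)
        w = self.attention(f)                # (B, 1, H*W), sums to 1 over locations
        f = f.flatten(start_dim=2)           # (B, 2048, H*W)
        pooled = (f * w).sum(dim=-1)         # attention-weighted pooling
        return self.classifier(pooled)

model = AttentionTransferClassifier()
logits = model(torch.randn(2, 3, 224, 224))  # (2, 2) fracture / no-fracture logits
```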
arXiv Detail & Related papers (2024-10-18T19:07:24Z)
- Content-Based Image Retrieval for Multi-Class Volumetric Radiology Images: A Benchmark Study [0.6249768559720122]
We benchmark embeddings derived from pre-trained supervised models on medical images against embeddings derived from pre-trained unsupervised models on non-medical images.
For volumetric image retrieval, we adopt a late interaction re-ranking method inspired by text matching.
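As a rough illustration of late-interaction re-ranking in this spirit (a ColBERT-style MaxSim over per-slice embeddings), a hedged sketch follows; the embedding dimensions, normalization, and scoring details are assumptions, not the paper's recipe.
```python
# Hedged sketch: late-interaction (MaxSim) scoring between two volumes represented
# as sets of per-slice embeddings, used to re-rank an initial candidate list.
import torch
import torch.nn.functional as F

def late_interaction_score(query_slices: torch.Tensor, cand_slices: torch.Tensor) -> torch.Tensor:
    """query_slices: (Nq, D), cand_slices: (Nc, D), L2-normalized slice embeddings."""
    sim = query_slices @ cand_slices.T          # (Nq, Nc) cosine similarities
    return sim.max(dim=1).values.sum()          # best match per query slice, summed

# Toy usage: re-rank five candidate volumes for one query volume.
query = F.normalize(torch.randn(40, 256), dim=1)
candidates = [F.normalize(torch.randn(60, 256), dim=1) for _ in range(5)]
ranked = sorted(range(len(candidates)),
                key=lambda i: late_interaction_score(query, candidates[i]).item(),
                reverse=True)
```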
arXiv Detail & Related papers (2024-05-15T13:34:07Z)
- Generation of Radiology Findings in Chest X-Ray by Leveraging Collaborative Knowledge [6.792487817626456]
The cognitive task of interpreting medical images remains the most critical and often time-consuming step in the radiology workflow.
This work focuses on reducing the workload of radiologists, who spend most of their time writing or narrating the Findings section of the report.
Unlike past research, which treats radiology report generation as a single-step image captioning task, we additionally take into account the complexity of interpreting CXR images.
arXiv Detail & Related papers (2023-06-18T00:51:28Z)
- XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models [60.437091462613544]
We introduce XrayGPT, a novel conversational medical vision-language model.
It can analyze and answer open-ended questions about chest radiographs.
We generate 217k interactive and high-quality summaries from free-text radiology reports.
arXiv Detail & Related papers (2023-06-13T17:59:59Z)
- Computer Vision on X-ray Data in Industrial Production and Security Applications: A survey [89.45221564651145]
This survey reviews the recent research on using computer vision and machine learning for X-ray analysis in industrial production and security applications.
It covers the applications, techniques, evaluation metrics, datasets, and performance comparison of those techniques on publicly available datasets.
arXiv Detail & Related papers (2022-11-10T13:37:36Z)
- Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study [75.05049024176584]
We present a benchmark study of the long-tailed learning problem in the specific domain of thorax diseases on chest X-rays.
We focus on learning from naturally distributed chest X-ray data, optimizing classification accuracy over not only the common "head" classes, but also the rare yet critical "tail" classes.
The benchmark consists of two chest X-ray datasets for 19- and 20-way thorax disease classification, containing classes with as many as 53,000 and as few as 7 labeled training images.
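One common baseline for long-tailed setups like this is class-balanced re-weighting by the "effective number" of samples per class; the sketch below is purely illustrative and is not the method prescribed by the benchmark.
```python
# Illustrative class-balanced loss weights for a long-tailed label distribution.
import numpy as np
import torch
import torch.nn.functional as F

def class_balanced_weights(samples_per_class, beta=0.9999):
    eff_num = 1.0 - np.power(beta, samples_per_class)       # effective number per class
    weights = (1.0 - beta) / eff_num
    weights = weights / weights.sum() * len(samples_per_class)
    return torch.tensor(weights, dtype=torch.float32)

# Toy counts spanning head classes (~53,000 images) down to tail classes (7 images).
counts = np.array([53000, 12000, 3000, 800, 150, 40, 7])
w = class_balanced_weights(counts)
loss = F.cross_entropy(torch.randn(8, 7), torch.randint(0, 7, (8,)), weight=w)
```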
arXiv Detail & Related papers (2022-08-29T04:34:15Z)
- Interpretation of Chest x-rays affected by bullets using deep transfer learning [0.8189696720657246]
Deep learning in radiology provides the opportunity to classify, detect and segment different diseases automatically.
In the proposed study, we worked on a non-trivial aspect of medical imaging: we classified and localized X-rays affected by bullets.
This is the first study on the detection and classification of radiographs affected by bullets using deep learning.
arXiv Detail & Related papers (2022-03-25T05:53:45Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
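A minimal, schematic conditional-GAN training step in this spirit is sketched below, with toy networks and an L1 term loosely standing in for identity preservation; it is not the paper's Generative Residual Attention Network.
```python
# Illustrative conditional-GAN step: generate a "diseased" target-domain X-ray from a
# source X-ray plus a condition map, then reuse the output as training augmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())         # toy generator
D = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, stride=2, padding=1))          # toy patch discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(source, target, label_map, lambda_id=10.0):
    """source/target: (B,1,H,W) X-rays in [-1,1]; label_map: (B,1,H,W) disease condition."""
    fake = G(torch.cat([source, label_map], dim=1))

    # Discriminator: real target vs. generated image, both conditioned on the label map.
    d_real = D(torch.cat([target, label_map], dim=1))
    d_fake = D(torch.cat([fake.detach(), label_map], dim=1))
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the source patient (L1).
    d_fake = D(torch.cat([fake, label_map], dim=1))
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) + \
             lambda_id * F.l1_loss(fake, source)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return fake.detach()    # candidate image for augmenting the detector's training set

src = torch.rand(2, 1, 64, 64) * 2 - 1
tgt = torch.rand(2, 1, 64, 64) * 2 - 1
lbl = torch.zeros(2, 1, 64, 64); lbl[:, :, 20:40, 20:40] = 1.0        # toy condition region
augmented = train_step(src, tgt, lbl)
```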
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Applications of Deep Learning in Fundus Images: A Review [27.70388285366776]
The use of fundus images for the early screening of eye diseases is of great clinical importance.
Deep learning is becoming increasingly popular in related applications.
arXiv Detail & Related papers (2021-01-25T02:39:40Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from its own training dataset starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
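One standard way to realize such adversarial training is a gradient-reversal layer feeding a dataset-source classifier; the sketch below assumes that formulation and toy networks, and may differ from the paper's exact setup.
```python
# Hedged sketch: adversarial learning of a source-invariant representation via
# gradient reversal and a domain (dataset-source) classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None          # reversed gradient removes dataset cues

encoder = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())   # toy feature extractor
disease_head = nn.Linear(16, 14)    # e.g. 14 multi-label findings (assumed)
domain_head = nn.Linear(16, 3)      # which source dataset the image came from (assumed)

def losses(x, y_disease, y_domain, lam=1.0):
    feat = encoder(x)
    task_loss = F.binary_cross_entropy_with_logits(disease_head(feat), y_disease)
    dom_loss = F.cross_entropy(domain_head(GradReverse.apply(feat, lam)), y_domain)
    return task_loss + dom_loss

x = torch.randn(4, 1, 128, 128)
y_dis = torch.randint(0, 2, (4, 14)).float()
y_dom = torch.randint(0, 3, (4,))
losses(x, y_dis, y_dom).backward()
```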
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, who judged the geometry acceptable in 95.8% of images and the annotation mask in 96.2%, compared to 27.0% and 34.9% respectively for control images.
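Schematically, the Y-shaped idea is a shared encoder with two output branches, one for geometric orientation and one for the annotation mask; the sketch below uses a VGG11 trunk but is only a stand-in for the modified Y-Net described in the paper.
```python
# Schematic "Y-shaped" multi-task model: shared VGG11 encoder, classification branch
# for geometric orientation and a lightweight decoder branch for the annotation mask.
import torch
import torch.nn as nn
import torchvision.models as models

class YNetSketch(nn.Module):
    def __init__(self, num_orientations=4):
        super().__init__()
        self.encoder = models.vgg11(weights=None).features            # shared trunk
        self.orient_head = nn.Sequential(                             # branch 1: geometry class
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_orientations))
        self.seg_head = nn.Sequential(                                 # branch 2: annotation mask
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1))

    def forward(self, x):
        f = self.encoder(x)                    # (B, 512, H/32, W/32)
        return self.orient_head(f), self.seg_head(f)

model = YNetSketch()
orient_logits, mask_logits = model(torch.randn(1, 3, 256, 256))
# orient_logits: (1, 4), mask_logits: (1, 1, 256, 256)
```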
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.