Comparing Different Deep Learning Architectures for Classification of
Chest Radiographs
- URL: http://arxiv.org/abs/2002.08991v1
- Date: Thu, 20 Feb 2020 19:47:16 GMT
- Title: Comparing Different Deep Learning Architectures for Classification of
Chest Radiographs
- Authors: Keno K. Bressem, Lisa Adams, Christoph Erxleben, Bernd Hamm, Stefan
Niehues, Janis Vahldiek
- Abstract summary: Most models used to classify chest radiographs are derived from deep neural networks trained on large image datasets.
We show that smaller networks have the potential to classify chest radiographs as precisely as deeper neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chest radiographs are among the most frequently acquired images in radiology
and are often the subject of computer vision research. However, most of the
models used to classify chest radiographs are derived from openly available
deep neural networks trained on large image datasets. These datasets routinely
differ from chest radiographs in that they mostly consist of color images and
cover many possible image classes, whereas radiographs are grayscale images
that usually contain fewer image classes. Very deep neural networks, which can
represent more complex relationships between image features, may therefore not
be required for the comparatively simpler task of classifying grayscale chest
radiographs. We compared fifteen different artificial neural network
architectures with respect to training time and performance on the openly
available CheXpert dataset to identify the most suitable models for deep
learning tasks on chest radiographs. We show that smaller networks such as
ResNet-34, AlexNet, or VGG-16 have the potential to classify chest radiographs
as precisely as deeper neural networks such as DenseNet-201 or ResNet-151,
while being less computationally demanding.
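The comparison described in the abstract lends itself to a short sketch. The following is a minimal, hedged example, not the authors' code: it assumes PyTorch/torchvision (version 0.13 or later for the `weights=None` API) and a 14-label CheXpert-style multi-label head, instantiates several of the architectures named above, swaps in a CheXpert-sized classifier, and reports parameter counts and single-image forward-pass time as rough proxies for the training-time/performance trade-off.

```python
import time
import torch
from torchvision import models

NUM_CLASSES = 14  # assumption: one output per CheXpert observation label

def build(name):
    """Instantiate an ImageNet architecture and swap its classifier head."""
    net = models.__dict__[name](weights=None)  # torchvision >= 0.13 API
    if hasattr(net, "fc"):                                   # ResNet family
        net.fc = torch.nn.Linear(net.fc.in_features, NUM_CLASSES)
    elif isinstance(net.classifier, torch.nn.Sequential):    # AlexNet, VGG
        net.classifier[-1] = torch.nn.Linear(net.classifier[-1].in_features, NUM_CLASSES)
    else:                                                     # DenseNet
        net.classifier = torch.nn.Linear(net.classifier.in_features, NUM_CLASSES)
    return net.eval()

# A grayscale radiograph replicated to three channels to fit ImageNet-style stems.
x = torch.rand(1, 1, 224, 224).repeat(1, 3, 1, 1)

# resnet152 is used here as the nearest available torchvision model
# to the "ResNet-151" named in the abstract.
for name in ["alexnet", "resnet34", "vgg16", "densenet201", "resnet152"]:
    net = build(name)
    n_params = sum(p.numel() for p in net.parameters()) / 1e6
    start = time.perf_counter()
    with torch.no_grad():
        net(x)
    print(f"{name:12s} {n_params:7.1f}M params  {time.perf_counter() - start:.3f} s/forward")
```

A parameter count and a single forward pass are only coarse stand-ins for the full training-time and AUC comparison reported in the paper, but they make the size gap between, say, AlexNet and DenseNet-201 concrete.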
Related papers
- LeDNet: Localization-enabled Deep Neural Network for Multi-Label Radiography Image Classification [0.1227734309612871]
Multi-label radiography image classification has long been a topic of interest in neural network research.
We use chest X-ray images to detect thoracic diseases for this purpose.
We propose a combination of localization and deep learning algorithms called LeDNet to predict thoracic diseases with higher accuracy.
arXiv Detail & Related papers (2024-07-04T13:46:30Z)
- On the Feasibility of Deep Learning Classification from Raw Signal Data in Radiology, Ultrasonography and Electrophysiology [0.0]
The paper presents the main current applications of deep learning in radiography, ultrasonography, and electrophysiology.
It discusses whether the proposed neural network training directly on raw signals is feasible.
arXiv Detail & Related papers (2024-02-25T18:07:07Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention, and, in contrast to CNNs, have no built-in prior knowledge of local connectivity.
Our results show that while ViTs and CNNs perform on par, with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
Efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information (see the DWT sketch after this list).
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- Joint Modeling of Chest Radiographs and Radiology Reports for Pulmonary Edema Assessment [39.60171837961607]
We develop a neural network model that is trained on both images and free-text to assess pulmonary edema severity from chest radiographs at inference time.
Our experimental results suggest that the joint image-text representation learning improves the performance of pulmonary edema assessment.
arXiv Detail & Related papers (2020-08-22T17:28:39Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well on test data drawn from its own training dataset begins to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation (see the gradient-reversal sketch after this list).
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
- Evaluation of Contemporary Convolutional Neural Network Architectures for Detecting COVID-19 from Chest Radiographs [0.0]
We train and evaluate three model architectures proposed for chest radiograph analysis under varying conditions.
We find issues that cast doubt on the impressive model performances reported by contemporary studies on this subject.
arXiv Detail & Related papers (2020-06-30T15:22:39Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% and acceptable annotation masks in 96.2% of cases, compared to 27.0% and 34.9%, respectively, for control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
- Separation of target anatomical structure and occlusions in chest radiographs [2.0478628221188497]
We propose a Fully Convolutional Network to suppress, for a specific task, undesired visual structure from radiographs.
The proposed algorithm creates reconstructed radiographs and ground-truth data from high-resolution CT scans.
arXiv Detail & Related papers (2020-02-03T14:01:06Z)
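For the "Preservation of High Frequency Content" entry above, the general idea of a DWT-based encoding can be illustrated with PyWavelets. This is a minimal sketch under my own assumptions (a single-level Haar transform with the four sub-bands stacked as channels), not the cited paper's actual encoding; the array names are placeholders.

```python
import numpy as np
import pywt  # PyWavelets

# Placeholder grayscale radiograph; a real image would be loaded from disk.
radiograph = np.random.rand(2048, 2048).astype(np.float32)

# Single-level 2-D DWT: cA = low-frequency approximation,
# (cH, cV, cD) = horizontal / vertical / diagonal high-frequency detail.
cA, (cH, cV, cD) = pywt.dwt2(radiograph, "haar")

# Stack the sub-bands as channels of a half-resolution input, so fine
# structures survive the downsampling instead of being discarded.
encoded = np.stack([cA, cH, cV, cD], axis=0)
print(encoded.shape)  # (4, 1024, 1024)
```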
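For the "Learning Invariant Feature Representation" entry above, one common way to realize the adversarial strategy it describes is a gradient reversal layer feeding a domain classifier (a DANN-style construction). The sketch below is an illustration under that assumption, not that paper's exact method; `encoder`, `feat_dim`, `n_labels`, and `n_sources` are hypothetical names.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed (and scaled) gradients push the encoder toward features
        # from which the dataset of origin cannot be predicted.
        return -ctx.lambd * grad_output, None

class InvariantClassifier(nn.Module):
    def __init__(self, encoder, feat_dim, n_labels, n_sources, lambd=1.0):
        super().__init__()
        self.encoder = encoder                              # any feature extractor
        self.label_head = nn.Linear(feat_dim, n_labels)     # disease predictions
        self.domain_head = nn.Linear(feat_dim, n_sources)   # which dataset the image came from
        self.lambd = lambd

    def forward(self, x):
        z = self.encoder(x)
        return self.label_head(z), self.domain_head(GradReverse.apply(z, self.lambd))
```

Training would minimize the label loss plus the domain loss; because of the reversal, the encoder is simultaneously pushed to maximize domain confusion, which is the source-invariance the entry refers to.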