SwinCheX: Multi-label classification on chest X-ray images with
transformers
- URL: http://arxiv.org/abs/2206.04246v1
- Date: Thu, 9 Jun 2022 03:17:57 GMT
- Title: SwinCheX: Multi-label classification on chest X-ray images with
transformers
- Authors: Sina Taslimi, Soroush Taslimi, Nima Fathi, Mohammadreza Salehi,
Mohammad Hossein Rohban
- Abstract summary: This paper proposes a multi-label classification deep model based on the Swin Transformer as the backbone to achieve state-of-the-art diagnosis classification.
We evaluate our model on one of the most widely-used and largest x-ray datasets called "Chest X-ray14"
- Score: 4.549831511476249
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given the considerable growth in the availability of chest X-ray
images for diagnosing various diseases, along with the gathering of extensive
datasets, automated diagnosis using deep neural networks has occupied the minds
of experts. Most of the available methods in computer vision use a CNN backbone
to achieve high accuracy on classification problems. Nevertheless, recent
research shows that transformers, established as the de facto method in NLP,
can also outperform many CNN-based models in vision. This paper proposes a
multi-label classification deep model based on the Swin Transformer as the
backbone to achieve state-of-the-art diagnosis classification. It leverages a
Multi-Layer Perceptron (MLP) for the head architecture. We evaluate our model
on one of the most widely used and largest X-ray datasets, "Chest X-ray14,"
which comprises more than 100,000 frontal/back-view images from over 30,000
patients with 14 common chest diseases. Our model has been tested with several
numbers of MLP layers for the head setting, each achieving a competitive AUC
score on all classes. Comprehensive experiments on Chest X-ray14 show that a
3-layer head attains state-of-the-art performance with an average AUC score of
0.810, compared to the former SOTA average AUC of 0.799. We propose an
experimental setup for the fair benchmarking of existing methods, which could
serve as a basis for future studies. Finally, we follow up on our results by
confirming that the proposed method attends to the pathologically relevant
areas of the chest.
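The headline result above is a mean per-class AUC over the 14 disease labels. As a minimal illustration of that metric (not the authors' code), the sketch below computes a macro-averaged ROC AUC from scratch via the rank-sum formulation; the toy labels and scores are hypothetical:

```python
def roc_auc(labels, scores):
    """ROC AUC for one binary label via the rank-sum (Mann-Whitney U) formula."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        # Group tied scores and assign them their average 1-based rank.
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos_ranks = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos_ranks), len(labels) - len(pos_ranks)
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def macro_auc(label_matrix, score_matrix):
    """Mean of the per-class AUCs (14 classes in the Chest X-ray14 setting)."""
    n_classes = len(label_matrix[0])
    aucs = [roc_auc([row[c] for row in label_matrix],
                    [row[c] for row in score_matrix])
            for c in range(n_classes)]
    return sum(aucs) / n_classes

# Toy example: 4 images, 2 labels (the real setting has 14 labels).
y = [[1, 0], [0, 1], [1, 1], [0, 0]]
p = [[0.9, 0.2], [0.3, 0.8], [0.2, 0.7], [0.1, 0.1]]
print(round(macro_auc(y, p), 3))  # → 0.875
```

In practice a library routine such as scikit-learn's `roc_auc_score` with macro averaging would be used; the point here is only that the reported 0.810 is an unweighted mean over the 14 per-class AUCs.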
Related papers
- Pneumonia Detection on chest X-ray images Using Ensemble of Deep
Convolutional Neural Networks [7.232767871756102]
This paper presents a computer-aided classification of pneumonia, termed Ensemble Learning (EL), to simplify the diagnosis process on chest X-ray images.
Our proposal is based on pre-trained Convolutional Neural Network (CNN) models, which have recently been employed to enhance the performance of many medical tasks instead of training CNN models from scratch.
The proposed EL approach outperforms other existing state-of-the-art methods, obtaining an accuracy of 93.91% and an F1-score of 93.88% in the testing phase.
arXiv Detail & Related papers (2023-12-13T08:28:21Z)
- Auto-outlier Fusion Technique for Chest X-ray classification with Multi-head Attention Mechanism [4.416665886445889]
A chest X-ray is one of the most widely available radiological examinations for diagnosing and detecting various lung illnesses.
The National Institutes of Health (NIH) provides extensive databases, ChestX-ray8 and ChestX-ray14, to help establish a deep learning community for analysing and predicting lung diseases.
arXiv Detail & Related papers (2022-11-15T09:35:49Z)
- Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study [75.05049024176584]
We present a benchmark study of the long-tailed learning problem in the specific domain of thorax diseases on chest X-rays.
We focus on learning from naturally distributed chest X-ray data, optimizing classification accuracy over not only the common "head" classes, but also the rare yet critical "tail" classes.
The benchmark consists of two chest X-ray datasets for 19- and 20-way thorax disease classification, containing classes with as many as 53,000 and as few as 7 labeled training images.
arXiv Detail & Related papers (2022-08-29T04:34:15Z)
- COVID-19 Severity Classification on Chest X-ray Images [0.0]
In this work, we classify COVID-19 images based on the severity of the infection.
The ResNet-50 model produced remarkable classification results in terms of accuracy (95%), recall (0.94), F1-score (0.92), and precision (0.91).
arXiv Detail & Related papers (2022-05-25T12:01:03Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- A Deep Learning Technique using a Sequence of Follow Up X-Rays for Disease classification [3.3345134768053635]
The ability to predict lung- and heart-related diseases using deep learning techniques is of central interest to many researchers.
We present the hypothesis that including each patient's three most recent chest X-ray images as follow-up history would improve disease classification.
arXiv Detail & Related papers (2022-03-28T19:58:47Z)
- Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing lung region in CXR images and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fractures with the largest and richest dataset collected to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images [10.01138352319106]
Five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble are used in this paper to classify COVID-19, pneumonia, and healthy subjects using chest X-ray images.
The mean Micro-F1 score of the models for COVID-19 classifications ranges from 0.66 to 0.875, and is 0.89 for the Ensemble of the network models.
arXiv Detail & Related papers (2020-06-03T22:55:53Z)