LeDNet: Localization-enabled Deep Neural Network for Multi-Label Radiography Image Classification
- URL: http://arxiv.org/abs/2407.03931v1
- Date: Thu, 4 Jul 2024 13:46:30 GMT
- Title: LeDNet: Localization-enabled Deep Neural Network for Multi-Label Radiography Image Classification
- Authors: Lalit Pant, Shubham Arora
- Abstract summary: Multi-label radiography image classification has long been a topic of interest in neural network research.
We use chest X-ray images to detect thoracic diseases for this purpose.
We propose LeDNet, a combination of localization and deep learning algorithms, to predict thoracic diseases with higher accuracy.
- Score: 0.1227734309612871
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-label radiography image classification has long been a topic of interest in neural network research. In this paper, we classify such images using convolutional neural networks with novel localization techniques, using chest X-ray images to detect thoracic diseases. Accurate diagnosis requires training the network on good-quality images, but many chest X-ray images contain irrelevant external objects: distractions created by faulty scans, electronic devices scanned next to the lung region, scans inadvertently capturing bodily air, etc. To address this, we propose LeDNet, a combination of localization and deep learning algorithms, to predict thoracic diseases with higher accuracy. We identify and extract lung-region masks from chest X-ray images through localization, and superimpose these masks on the original X-ray images to create mask overlay images. DenseNet-121 classification models are then used to extract features from both the entire chest X-ray images and the localized mask overlay images, and these features are used to predict the disease classification. Our experiments compare the classification results obtained with original CheXpert images against those obtained with mask overlay images, demonstrated through accuracy and loss curve analyses.
Related papers
- Cross-modulated Few-shot Image Generation for Colorectal Tissue
Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z) - Artificial Intelligence for Automatic Detection and Classification
Disease on the X-Ray Images [0.0]
This work presents rapid detection of diseases in the lung using the efficient Deep learning pre-trained RepVGG algorithm.
We apply Artificial Intelligence technology for automatic detection and highlighting of affected areas of the lungs.
arXiv Detail & Related papers (2022-11-14T03:51:12Z) - Preservation of High Frequency Content for Deep Learning-Based Medical
Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - Interpretation of Chest x-rays affected by bullets using deep transfer
learning [0.8189696720657246]
Deep learning in radiology provides the opportunity to classify, detect and segment different diseases automatically.
In the proposed study, we worked on a non-trivial aspect of medical imaging where we classified and localized the X-Rays affected by bullets.
This is the first study on the detection and classification of radiographs affected by bullets using deep learning.
arXiv Detail & Related papers (2022-03-25T05:53:45Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using a conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - GREN: Graph-Regularized Embedding Network for Weakly-Supervised Disease
Localization in X-ray images [35.18562405272593]
Cross-region and cross-image relationships, as contextual and compensating information, are vital for obtaining more consistent and integral regions.
We propose the Graph Regularized Embedding Network (GREN), which leverages the intra-image and inter-image information to locate diseases on chest X-ray images.
By means of this, our approach achieves the state-of-the-art result on NIH chest X-ray dataset for weakly-supervised disease localization.
arXiv Detail & Related papers (2021-07-14T01:27:07Z) - Cross-Modal Contrastive Learning for Abnormality Classification and
Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z) - Learning Invariant Feature Representation to Improve Generalization
across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from its training dataset starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z) - Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of
Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% of cases and acceptable annotation masks in 96.2%, compared to 27.0% and 34.9% respectively for control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z) - Comparing Different Deep Learning Architectures for Classification of
Chest Radiographs [0.0]
Most models to classify chest radiographs are derived from deep neural networks, trained on large image datasets.
We show that smaller networks have the potential to classify chest radiographs as precisely as deeper neural networks.
arXiv Detail & Related papers (2020-02-20T19:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.