Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of
Geometry and Segmentation of Annotations
- URL: http://arxiv.org/abs/2005.03824v1
- Date: Fri, 8 May 2020 02:16:17 GMT
- Title: Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of
Geometry and Segmentation of Annotations
- Authors: John McManigle, Raquel Bartz, Lawrence Carin
- Abstract summary: This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% and annotation mask in 96.2%, compared to 27.0% and 34.9% respectively in control images.
- Score: 70.0118756144807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last decade, convolutional neural networks (CNNs) have emerged as
the leading algorithms in image classification and segmentation. Recent
publication of large medical imaging databases has accelerated their use in
the biomedical arena. While training data for photograph classification
benefits from aggressive geometric augmentation, medical diagnosis --
especially in chest radiographs -- depends more strongly on feature location.
Diagnosis classification results may be artificially enhanced by reliance on
radiographic annotations. This work introduces a general pre-processing step
for chest x-ray input into machine learning algorithms. A modified Y-Net
architecture based on the VGG11 encoder is used to simultaneously learn
geometric orientation (similarity transform parameters) of the chest and
segmentation of radiographic annotations. Chest x-rays were obtained from
published databases. The algorithm was trained with 1000 manually labeled
images with augmentation. Results were evaluated by expert clinicians, with
acceptable geometry in 95.8% and annotation mask in 96.2% (n=500), compared to
27.0% and 34.9% respectively in control images (n=241). We hypothesize that
this pre-processing step will improve robustness in future diagnostic
algorithms.
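The dual-output design described above maps naturally onto a shared encoder feeding two heads. Below is a minimal PyTorch sketch of such a Y-shaped network, assuming a grayscale 512x512 input, four similarity-transform parameters (scale, rotation, and two translations), and a deliberately simplified mask decoder; it illustrates the concept and is not the authors' implementation.

import torch
import torch.nn as nn
from torchvision.models import vgg11

class YNetSketch(nn.Module):
    # Hypothetical sketch: a VGG11 convolutional encoder shared by a geometry
    # regression head and a small annotation-mask decoder.
    def __init__(self, n_transform_params: int = 4):
        super().__init__()
        backbone = vgg11(weights=None).features
        # Adapt the first convolution to single-channel (grayscale) radiographs.
        backbone[0] = nn.Conv2d(1, 64, kernel_size=3, padding=1)
        self.encoder = backbone
        # Head 1: regress similarity-transform parameters (the count of four is an assumption).
        self.geometry_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(512, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, n_transform_params),
        )
        # Head 2: an intentionally small decoder producing a 1-channel mask of
        # radiographic annotations at the input resolution.
        self.mask_head = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 1, kernel_size=1),
        )

    def forward(self, x):
        feats = self.encoder(x)              # shared VGG11 features
        params = self.geometry_head(feats)   # similarity-transform parameters
        mask = self.mask_head(feats)         # annotation-mask logits
        return params, mask

# One forward pass yields both outputs for a 512x512 grayscale radiograph.
model = YNetSketch()
params, mask = model(torch.randn(1, 1, 512, 512))

Training such a network would combine a regression loss on the transform parameters with a segmentation loss on the mask; the loss functions, weighting, and decoder design are not specified by the abstract and are left as assumptions here.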
Related papers
- Modifying the U-Net's Encoder-Decoder Architecture for Segmentation of Tumors in Breast Ultrasound Images [0.0]
We propose a Neural Network (NN) based on U-Net and an encoder-decoder architecture.
Our network (CResU-Net) obtained 76.88%, 71.5%, 90.3%, and 97.4% in terms of Dice similarity coefficients (DSC), Intersection over Union (IoU), Area under curve (AUC), and global accuracy (ACC), respectively, on BUSI dataset.
arXiv Detail & Related papers (2024-09-01T07:47:48Z)
- Medical Image Analysis for Detection, Treatment and Planning of Disease using Artificial Intelligence Approaches [1.6505331001136514]
A framework for the segmentation of X-ray images using artificial intelligence techniques has been discussed.
The proposed approach performs better on all of the standard evaluation metrics with a batch size of 16 and 50 epochs.
The validation accuracy, precision, and recall of the SegNet and Residual U-Net models are 0.9815, 0.9699, and 0.9574, and 0.9901, 0.9864, and 0.9750, respectively.
arXiv Detail & Related papers (2024-05-18T13:43:43Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Classification of COVID-19 in Chest X-ray Images Using Fusion of Deep Features and LightGBM [0.0]
We propose a new technique that is faster and more accurate than the other methods reported in the literature.
The proposed method combines DenseNet169 and MobileNet deep neural networks to extract features from the patients' X-ray images (a minimal sketch of this fusion pipeline appears after this list).
The method achieved accuracies of 98.54% and 91.11% in the two-class (COVID-19, Healthy) and multi-class (COVID-19, Healthy, Pneumonia) classification problems, respectively.
arXiv Detail & Related papers (2022-06-09T14:56:24Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
Efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Development of an algorithm for medical image segmentation of bone tissue in interaction with metallic implants [58.720142291102135]
This study develops an algorithm for calculating bone growth in contact with metallic implants.
Bone and implant tissue were manually segmented in the training data set.
In terms of network accuracy, the model reached around 98%.
arXiv Detail & Related papers (2022-04-22T08:17:20Z)
- DenseNet approach to segmentation and classification of dermatoscopic skin lesions images [0.0]
This paper proposes an improved method for segmentation and classification for skin lesions using two architectures.
The combination of U-Net and DenseNet121 provides acceptable results in dermatoscopic image analysis.
Cancerous and non-cancerous samples were detected by the DenseNet121 network with 79.49% and 93.11% accuracy, respectively.
arXiv Detail & Related papers (2021-10-09T19:12:23Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, at the scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
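Returning to the feature-fusion and LightGBM entry above (see the forward reference there): a minimal, hypothetical sketch of that kind of pipeline follows, with fixed CNN backbones as feature extractors, concatenated pooled features, and a LightGBM classifier. The backbones mirror the entry (DenseNet169, with MobileNetV2 standing in for the unspecified MobileNet variant), but the pooling, weight handling, and hyperparameters are assumptions rather than the paper's settings.

import numpy as np
import torch
import lightgbm as lgb
from torchvision.models import densenet169, mobilenet_v2

# Backbones used only as fixed feature extractors; weights=None keeps the sketch
# self-contained (ImageNet-pretrained weights would normally be loaded instead).
densenet = densenet169(weights=None).features.eval()
mobilenet = mobilenet_v2(weights=None).features.eval()

def extract_features(batch: torch.Tensor) -> np.ndarray:
    # Global-average-pool the final feature maps of both CNNs and concatenate.
    with torch.no_grad():
        f1 = densenet(batch).mean(dim=(2, 3))   # (N, 1664)
        f2 = mobilenet(batch).mean(dim=(2, 3))  # (N, 1280)
    return torch.cat([f1, f2], dim=1).numpy()    # (N, 2944)

# Placeholder tensors standing in for preprocessed chest x-rays and labels.
images = torch.randn(32, 3, 224, 224)
labels = np.random.randint(0, 2, size=32)

clf = lgb.LGBMClassifier(n_estimators=200)
clf.fit(extract_features(images), labels)
print(clf.predict(extract_features(images[:4])))

In practice the two backbones would be pretrained and frozen, and the extracted features cached once before fitting the gradient-boosted classifier.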
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.