Treatment classification of posterior capsular opacification (PCO) using
automated ground truths
- URL: http://arxiv.org/abs/2211.06114v1
- Date: Fri, 11 Nov 2022 10:36:42 GMT
- Title: Treatment classification of posterior capsular opacification (PCO) using
automated ground truths
- Authors: Raisha Shrestha, Waree Kongprawechnon, Teesid Leelasawassuk, Nattapon
Wongcumchang, Oliver Findl, Nino Hirnschall
- Abstract summary: We propose a deep learning (DL)-based method to first segment PCO images and then classify the images into \textit{treatment required} and \textit{not yet required} cases.
To train the model, we prepare a training image set with ground truths (GT) obtained from two strategies: (i) manual and (ii) automated.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Determination of treatment need for posterior capsular opacification
(PCO) -- one of the most common complications of cataract surgery -- is a
difficult process due to its local unavailability and the fact that treatment
is provided only after PCO occurs in the central visual axis. In this paper we
propose a
deep learning (DL)-based method to first segment PCO images then classify the
images into \textit{treatment required} and \textit{not yet required} cases in
order to reduce frequent hospital visits. To train the model, we prepare a
training image set with ground truths (GT) obtained from two strategies: (i)
manual and (ii) automated. So, we have two models: (i) Model 1 (trained with
image set containing manual GT) (ii) Model 2 (trained with image set containing
automated GT). Both models when evaluated on validation image set gave Dice
coefficient value greater than 0.8 and intersection-over-union (IoU) score
greater than 0.67 in our experiments. Comparison between gold standard GT and
segmented results from our models gave a Dice coefficient value greater than
0.7 and IoU score greater than 0.6 for both models, showing that automated
ground truths can also yield an efficient model. Comparison between our
classification results and the clinical classification shows a 0.98 F2-score
for the outputs of both models.
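The evaluation metrics reported above can be sketched as follows; this is a minimal illustration of the standard definitions (not the authors' code), where Dice and IoU operate on binary masks and the F2-score weights recall four times more heavily than precision. Note that the reported thresholds are self-consistent: a Dice of 0.8 implies an IoU of 0.8 / (2 - 0.8) ≈ 0.67.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def iou_score(pred, gt):
    """IoU (Jaccard) = |A intersect B| / |A union B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def f2_score(tp, fp, fn):
    """F-beta with beta = 2: recall counts four times as much as precision."""
    beta2 = 4.0  # beta squared
    return (1 + beta2) * tp / ((1 + beta2) * tp + beta2 * fn + fp)
```

For example, comparing the masks `[[1,1],[0,0]]` and `[[1,0],[0,0]]` gives Dice = 2/3 and IoU = 1/2, consistent with the identity IoU = Dice / (2 - Dice).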
Related papers
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p < 0.001; and 0.762 versus 0.542, p < 0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - PD-L1 Classification of Weakly-Labeled Whole Slide Images of Breast Cancer [0.0]
This study aims to develop and compare models able to classify PD-L1 positivity of breast cancer samples based on WSI analysis.
The task consists of two phases: identifying regions of interest (ROI) and classifying tumors as PD-L1 positive or negative.
arXiv Detail & Related papers (2024-04-15T23:06:58Z) - Performance of GAN-based augmentation for deep learning COVID-19 image
classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z) - An Ensemble Method to Automatically Grade Diabetic Retinopathy with
Optical Coherence Tomography Angiography Images [4.640835690336653]
We propose an ensemble method to automatically grade Diabetic retinopathy (DR) images available from Diabetic Retinopathy Analysis Challenge (DRAC) 2022.
First, we adopt state-of-the-art classification networks and train them to grade UW-OCTA images with different splits of the available dataset.
Ultimately, we obtain 25 models, of which, the top 16 models are selected and ensembled to generate the final predictions.
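The select-and-ensemble step described above can be sketched as averaging the per-class probabilities of the top-scoring models; `ensemble_grade` and its parameters are illustrative names, not the authors' implementation, and the selection criterion (a validation score per model) is assumed.

```python
import numpy as np

def ensemble_grade(model_probs, top_k=16, model_scores=None):
    """Average the softmax outputs of the top-k models, then take argmax.

    model_probs:  list of (n_samples, n_classes) probability arrays, one per model.
    model_scores: optional validation score per model, used to pick the top-k;
                  if omitted, the first top_k models are used.
    """
    if model_scores is not None:
        order = np.argsort(model_scores)[::-1][:top_k]  # best models first
        model_probs = [model_probs[i] for i in order]
    else:
        model_probs = model_probs[:top_k]
    avg = np.mean(model_probs, axis=0)  # soft-voting over selected models
    return np.argmax(avg, axis=1)       # final grade per sample
```

Soft voting like this tends to smooth out individual-model errors, which is why selecting 16 of 25 models (rather than all of them) can help when the weakest models would drag the average down.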
arXiv Detail & Related papers (2022-12-12T22:06:47Z) - Early Diagnosis of Retinal Blood Vessel Damage via Deep Learning-Powered
Collective Intelligence Models [0.3670422696827525]
The power of swarm algorithms is used to search for various combinations of convolutional, pooling, and normalization layers to provide the best model for the task.
The best TDCN model achieves an accuracy of 90.3%, AUC ROC of 0.956, and a Cohen score of 0.967.
arXiv Detail & Related papers (2022-10-17T21:38:38Z) - COVID-19 Severity Classification on Chest X-ray Images [0.0]
In this work, we classify COVID-19 images based on the severity of the infection.
The ResNet-50 model produced remarkable classification results: accuracy of 95%, recall of 0.94, F1-score of 0.92, and precision of 0.91.
arXiv Detail & Related papers (2022-05-25T12:01:03Z) - Application of Transfer Learning and Ensemble Learning in Image-level
Classification for Breast Histopathology [9.037868656840736]
In Computer-Aided Diagnosis (CAD), traditional classification models mostly use a single network to extract features.
This paper proposes a deep ensemble model based on image-level labels for the binary classification of benign and malignant lesions.
Result: In the ensemble network model with accuracy as the weight, the image-level binary classification achieves an accuracy of 98.90%.
arXiv Detail & Related papers (2022-04-18T13:31:53Z) - Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing lung region in CXR images and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
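The cascaded design above can be sketched as a simple two-stage function; `segmenter` and `classifier` are placeholders standing in for the XLSor-style lung localizer and the MoCo-pretrained classifier, whose actual interfaces are not specified in the summary.

```python
import numpy as np

def cascaded_predict(image, segmenter, classifier):
    """Hypothetical two-stage cascade: localize the lung region first,
    then classify only the masked image."""
    mask = segmenter(image)    # binary lung mask, same shape as the image
    masked = image * mask      # zero out everything outside the lung region
    return classifier(masked)  # the classifier sees lung pixels only
```

Restricting the classifier's input to the segmented region is the point of the cascade: it removes background structures that could otherwise act as confounding signals.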
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset of its kind.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.