Using Multiple Dermoscopic Photographs of One Lesion Improves Melanoma
Classification via Deep Learning: A Prognostic Diagnostic Accuracy Study
- URL: http://arxiv.org/abs/2306.02800v1
- Date: Mon, 5 Jun 2023 11:55:57 GMT
- Title: Using Multiple Dermoscopic Photographs of One Lesion Improves Melanoma
Classification via Deep Learning: A Prognostic Diagnostic Accuracy Study
- Authors: Achim Hekler, Roman C. Maron, Sarah Haggenmüller, Max Schmitt,
Christoph Wies, Jochen S. Utikal, Friedegund Meier, Sarah Hobelsberger, Frank
F. Gellrich, Mildred Sergon, Axel Hauschild, Lars E. French, Lucie
Heinzerling, Justin G. Schlager, Kamran Ghoreschi, Max Schlaak, Franz J.
Hilke, Gabriela Poch, Sören Korsing, Carola Berking, Markus V. Heppt,
Michael Erdmann, Sebastian Haferkamp, Konstantin Drexler, Dirk Schadendorf,
Wiebke Sondermann, Matthias Goebeler, Bastian Schilling, Jakob N. Kather, Eva
Krieghoff-Henning, Titus J. Brinker
- Abstract summary: This study evaluated the impact of multiple real-world dermoscopic views of a single lesion of interest on a CNN-based melanoma classifier.
Using multiple real-world images is an inexpensive method to positively impact the performance of a CNN-based melanoma classifier.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: Convolutional neural network (CNN)-based melanoma classifiers
face several challenges that limit their usefulness in clinical practice.
Objective: To investigate the impact of multiple real-world dermoscopic views
of a single lesion of interest on a CNN-based melanoma classifier.
Methods: This study evaluated 656 suspected melanoma lesions. Classifier
performance was measured using area under the receiver operating characteristic
curve (AUROC), expected calibration error (ECE) and maximum confidence change
(MCC) for (I) a single-view scenario, (II) a multiview scenario using multiple
artificially modified images per lesion and (III) a multiview scenario with
multiple real-world images per lesion.
Results: The multiview approach with real-world images significantly
increased the AUROC from 0.905 (95% CI, 0.879-0.929) in the single-view
approach to 0.930 (95% CI, 0.909-0.951). ECE and MCC also improved
significantly from 0.131 (95% CI, 0.105-0.159) to 0.072 (95% CI, 0.052-0.093)
and from 0.149 (95% CI, 0.125-0.171) to 0.115 (95% CI, 0.099-0.131),
respectively. Comparing multiview real-world to artificially modified images
showed comparable diagnostic accuracy and uncertainty estimation, but
significantly worse robustness for the latter.
Conclusion: Using multiple real-world images is an inexpensive method to
positively impact the performance of a CNN-based melanoma classifier.
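The abstract does not specify how the per-view CNN outputs are fused into a single lesion-level score, nor the binning used for the expected calibration error. The sketch below, using hypothetical variable names and toy numbers, assumes simple probability averaging across views and a 10-bin equal-width ECE, purely to illustrate the single-view versus multiview comparison reported above.

```python
# Illustrative sketch only: the paper does not state its fusion rule or ECE
# binning, so this assumes mean-probability fusion across views and a 10-bin,
# equal-width ECE. All variable names and numbers below are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

def fuse_views(view_probs):
    """Average the per-view melanoma probabilities of each lesion."""
    return np.array([np.mean(p) for p in view_probs])

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Equal-width-bin ECE: |observed melanoma rate - mean confidence| weighted by bin size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (y_prob > lo) & (y_prob <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(y_true[in_bin].mean() - y_prob[in_bin].mean())
    return ece

# Toy example: three lesions, each photographed from several views.
view_probs = [np.array([0.45, 0.88, 0.91]),   # melanoma; first view is ambiguous
              np.array([0.12, 0.30]),         # benign
              np.array([0.55, 0.41, 0.47])]   # benign
labels = np.array([1, 0, 0])

single_view = np.array([p[0] for p in view_probs])  # scenario (I): first photo only
multi_view = fuse_views(view_probs)                  # scenario (III): all photos fused

for name, scores in [("single-view", single_view), ("multiview", multi_view)]:
    print(f"{name}: AUROC={roc_auc_score(labels, scores):.3f}, "
          f"ECE={expected_calibration_error(labels, scores):.3f}")
```

In the toy numbers above, fusing views lifts the melanoma lesion above both benign ones, which is the kind of ranking improvement the reported AUROC gain reflects; the actual study evaluates 656 lesions rather than three.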
Related papers
- Classification of Endoscopy and Video Capsule Images using CNN-Transformer Model [1.0994755279455526]
This study proposes a hybrid model that combines the advantages of Transformers and Convolutional Neural Networks (CNNs) to enhance classification performance.
For the GastroVision dataset, our proposed model demonstrates excellent performance with Precision, Recall, F1 score, Accuracy, and Matthews Correlation Coefficient (MCC) of 0.8320, 0.8386, 0.8324, 0.8386, and 0.8191, respectively.
arXiv Detail & Related papers (2024-08-20T11:05:32Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Clinical Melanoma Diagnosis with Artificial Intelligence: Insights from a Prospective Multicenter Study [1.2397589403129072]
AI has proven to be helpful for enhancing melanoma detection.
Existing studies are limited by small sample sizes, overly homogeneous datasets, or the exclusion of rare melanoma subtypes.
We assessed 'All Data are Ext' (ADAE), an established open-source algorithm for detecting melanomas, by comparing its diagnostic accuracy to that of dermatologists.
arXiv Detail & Related papers (2024-01-25T14:03:54Z)
- Vision Transformer for Efficient Chest X-ray and Gastrointestinal Image Classification [2.3293678240472517]
This study uses different CNNs and transformer-based methods with a wide range of data augmentation techniques.
We evaluated their performance on three medical image datasets from different modalities.
arXiv Detail & Related papers (2023-04-23T04:07:03Z)
- Improving Automated Hemorrhage Detection in Sparse-view Computed Tomography via Deep Convolutional Neural Network based Artifact Reduction [3.9874211732430447]
We trained a U-Net for artefact reduction on simulated sparse-view cranial CT scans from 3000 patients.
We also trained a convolutional neural network on fully sampled CT data from 17,545 patients for automated hemorrhage detection.
The U-Net-processed images were superior to unprocessed and TV-processed images with respect to image quality and automated hemorrhage diagnosis.
arXiv Detail & Related papers (2023-03-16T14:21:45Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Corneal endothelium assessment in specular microscopy images with Fuchs' dystrophy via deep regression of signed distance maps [48.498376125522114]
This paper proposes a UNet-based segmentation approach that requires minimal post-processing.
It achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs' dystrophy.
arXiv Detail & Related papers (2022-10-13T15:34:20Z)
- Visualizing CoAtNet Predictions for Aiding Melanoma Detection [0.0]
This paper proposes a multi-class classification task using the CoAtNet architecture.
It achieves an overall precision of 0.901, recall of 0.895, and average precision (AP) of 0.923, indicating high performance compared to other state-of-the-art networks.
arXiv Detail & Related papers (2022-05-21T06:41:52Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained for sub-fracture classification on the largest and richest dataset of its kind to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
The method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates the likelihood of malignancy of each lesion, and through aggregation also generates an image-level likelihood of malignancy (a minimal sketch of this two-stage aggregation follows this list).
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
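The aggregation step mentioned in the "Malignancy Prediction and Lesion Identification from Clinical Dermatological Images" entry above is not spelled out in the summary. The sketch below shows two plausible pooling rules (max and noisy-OR) over per-lesion malignancy scores, with stand-in detector and classifier callables; it is an assumption for illustration, not the authors' actual method.

```python
# Minimal sketch of a detect-then-score-then-aggregate pipeline, assuming max or
# noisy-OR pooling; the cited paper's actual aggregation rule is not given in the
# summary above. The detector and per-lesion classifier are stand-in callables.
from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (x, y, width, height) of a detected lesion

def image_malignancy(image: np.ndarray,
                     detect_lesions: Callable[[np.ndarray], List[Box]],
                     score_lesion: Callable[[np.ndarray, Box], float],
                     pooling: str = "max") -> Tuple[List[float], float]:
    """Stage 1: find every lesion; stage 2: score each one; then aggregate."""
    boxes = detect_lesions(image)                       # all lesions, any sub-type
    scores = [score_lesion(image, box) for box in boxes]
    if not scores:
        return [], 0.0
    if pooling == "max":                                # image as risky as its most
        image_score = max(scores)                       # suspicious lesion
    else:                                               # noisy-OR pooling
        image_score = 1.0 - float(np.prod([1.0 - s for s in scores]))
    return scores, image_score

# Hypothetical usage with dummy stand-ins:
dummy_detect = lambda img: [(10, 10, 32, 32), (60, 40, 24, 24)]
dummy_score = lambda img, box: 0.7 if box[2] >= 30 else 0.2
print(image_malignancy(np.zeros((128, 128, 3)), dummy_detect, dummy_score))
```

Max pooling treats the image as exactly as suspicious as its most suspicious lesion, while noisy-OR pooling lets several moderately suspicious lesions jointly raise the image-level score.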
This list is automatically generated from the titles and abstracts of the papers on this site.