On the Robustness of Pretraining and Self-Supervision for a Deep
Learning-based Analysis of Diabetic Retinopathy
- URL: http://arxiv.org/abs/2106.13497v1
- Date: Fri, 25 Jun 2021 08:32:45 GMT
- Title: On the Robustness of Pretraining and Self-Supervision for a Deep
Learning-based Analysis of Diabetic Retinopathy
- Authors: Vignesh Srinivasan, Nils Strodthoff, Jackie Ma, Alexander Binder,
Klaus-Robert Müller, Wojciech Samek
- Abstract summary: We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized from ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
- Score: 70.71457102672545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is an increasing number of medical use-cases where classification
algorithms based on deep neural networks reach performance levels that are
competitive with human medical experts. To alleviate the challenges of small
dataset sizes, these systems often rely on pretraining. In this work, we aim to
assess the broader implications of these approaches. For diabetic retinopathy
grading as exemplary use case, we compare the impact of different training
procedures including recently established self-supervised pretraining methods
based on contrastive learning. To this end, we investigate different aspects
such as quantitative performance, statistics of the learned feature
representations, interpretability and robustness to image distortions. Our
results indicate that models initialized from ImageNet pretraining report a
significant increase in performance, generalization and robustness to image
distortions. In particular, self-supervised models show further benefits over
supervised models. Self-supervised models initialized from ImageNet pretraining
not only achieve higher performance, they also reduce overfitting to large
lesions and better account for minute lesions indicative of disease
progression. Understanding the effects of
pretraining in a broader sense that goes beyond simple performance comparisons
is of crucial importance for the broader medical imaging community beyond the
use-case considered in this work.
Related papers
- Less is More: Selective Reduction of CT Data for Self-Supervised Pre-Training of Deep Learning Models with Contrastive Learning Improves Downstream Classification Performance [7.945551345449388]
Current findings indicate a strong potential for contrastive pre-training on medical images.
We hypothesize that the similarity of medical images hinders the success of contrastive learning in the medical imaging domain.
We investigate different strategies based on deep embedding, information theory, and hashing in order to identify and reduce redundancy in medical pre-training datasets.
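The redundancy-reduction idea can be illustrated with a minimal embedding-similarity filter. The greedy cosine-threshold rule below is a hypothetical stand-in for the deep-embedding, information-theoretic, and hashing strategies the paper investigates:

```python
import numpy as np

def reduce_redundancy(emb: np.ndarray, thresh: float = 0.95) -> list[int]:
    """Greedily drop items whose embedding is a near-duplicate of a kept one.

    emb: (N, d) array of image embeddings; returns indices of kept items.
    """
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize
    kept: list[int] = []
    for i, e in enumerate(emb):
        # Keep item i only if it is below the similarity threshold
        # against every embedding already kept.
        if all(float(e @ emb[j]) < thresh for j in kept):
            kept.append(i)
    return kept

emb = np.array([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]])
kept = reduce_redundancy(emb)  # near-duplicate row 1 is dropped
```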
arXiv Detail & Related papers (2024-10-18T15:08:05Z)
- Boosting Few-Shot Learning with Disentangled Self-Supervised Learning and Meta-Learning for Medical Image Classification [8.975676404678374]
We present a strategy for improving the performance and generalization capabilities of models trained in low-data regimes.
The proposed method starts with a pre-training phase, where features learned in a self-supervised learning setting are disentangled to improve the robustness of the representations for downstream tasks.
We then introduce a meta-fine-tuning step, leveraging related classes between meta-training and meta-testing phases but at varying levels of granularity.
arXiv Detail & Related papers (2024-03-26T09:36:20Z)
- A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images [71.26717896083433]
Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning.
This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging.
arXiv Detail & Related papers (2023-09-05T19:45:09Z)
- Realistic Data Enrichment for Robust Image Segmentation in Histopathology [2.248423960136122]
We propose a new approach, based on diffusion models, which can enrich an imbalanced dataset with plausible examples from underrepresented groups.
Our method can expand limited clinical datasets, making them suitable for training machine learning pipelines.
arXiv Detail & Related papers (2023-04-19T09:52:50Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Improved skin lesion recognition by a Self-Supervised Curricular Deep Learning approach [0.0]
State-of-the-art deep learning approaches for skin lesion recognition often require pretraining on larger and more varied datasets.
ImageNet is often used as the pretraining dataset, but its transferring potential is hindered by the domain gap between the source dataset and the target dermatoscopic scenario.
In this work, we introduce a novel pretraining approach that sequentially trains a series of Self-Supervised Learning pretext tasks.
arXiv Detail & Related papers (2021-12-22T17:45:47Z)
- Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
arXiv Detail & Related papers (2021-12-14T21:16:52Z)
- Self-Supervised Learning from Unlabeled Fundus Photographs Improves Segmentation of the Retina [4.815051667870375]
Fundus photography is the primary method for retinal imaging and essential for diabetic retinopathy prevention.
Current segmentation methods are not robust to the diversity of imaging conditions and pathologies typical of real-world clinical applications.
We utilize contrastive self-supervised learning to exploit the large variety of unlabeled fundus images in the publicly available EyePACS dataset.
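Contrastive self-supervised pretraining of the kind applied to the unlabeled EyePACS images typically optimizes an NT-Xent (SimCLR-style) objective over two augmented views of each image. This is a minimal sketch of that objective, not necessarily the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """NT-Xent loss for a batch of paired embeddings from two augmented views."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, d), unit vectors
    sim = z @ z.t() / tau                          # temperature-scaled cosine sims
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # no self-pairs
    # The positive for sample i is its other augmented view at index (i + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(4, 16), torch.randn(4, 16))
```

Pulling the two views of the same fundus image together while pushing apart all other images is what lets the encoder learn useful retinal features without segmentation labels.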
arXiv Detail & Related papers (2021-08-05T18:02:56Z)
- Evaluation of Complexity Measures for Deep Learning Generalization in Medical Image Analysis [77.34726150561087]
PAC-Bayes flatness-based and path norm-based measures produce the most consistent explanation for the combination of models and data.
We also investigate the use of multi-task classification and segmentation approach for breast images.
arXiv Detail & Related papers (2021-03-04T20:58:22Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture that combines object segmentation and convolutional neural networks (CNNs).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
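The mask-as-extra-channel step described in that abstract amounts to a simple tensor concatenation; the shapes and the binary mask below are illustrative assumptions:

```python
import torch

# Hypothetical batch: fundus images (B, 3, H, W) and a predicted
# demarcation-line mask (B, 1, H, W) from the segmentation model.
image = torch.rand(4, 3, 256, 256)
mask = (torch.rand(4, 1, 256, 256) > 0.5).float()

# Append the mask as an additional "color" channel -> (B, 4, H, W).
x = torch.cat([image, mask], dim=1)
# The downstream CNN must then accept 4 input channels,
# e.g. torch.nn.Conv2d(4, 64, kernel_size=3) as its first layer.
```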
This list is automatically generated from the titles and abstracts of the papers in this site.