An Evaluation of Self-Supervised Pre-Training for Skin-Lesion Analysis
- URL: http://arxiv.org/abs/2106.09229v1
- Date: Thu, 17 Jun 2021 03:47:36 GMT
- Title: An Evaluation of Self-Supervised Pre-Training for Skin-Lesion Analysis
- Authors: Levy Chaves, Alceu Bissoto, Eduardo Valle and Sandra Avila
- Abstract summary: Self-supervised pre-training appears as an advantageous alternative to supervised pre-training for transfer learning.
By synthesizing annotations for pretext tasks, self-supervision allows models to be pre-trained on large amounts of pseudo-labels before fine-tuning them on the target task.
- Score: 14.466964262040136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised pre-training appears as an advantageous alternative to
supervised pre-training for transfer learning. By synthesizing annotations for
pretext tasks, self-supervision allows models to be pre-trained on large amounts of
pseudo-labels before fine-tuning them on the target task. In this work, we
assess self-supervision for the diagnosis of skin lesions, comparing three
self-supervised pipelines to a challenging supervised baseline, on five test
datasets comprising in- and out-of-distribution samples. Our results show that
self-supervision is competitive both in improving accuracies and in reducing
the variability of outcomes. Self-supervision proves particularly useful for
low training data scenarios ($<1\,500$ and $<150$ samples), where its ability
to stabilize the outcomes is essential to provide sound results.
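To make the pretext-then-fine-tune recipe concrete, here is a minimal, hypothetical sketch: pseudo-labels are synthesized for a rotation-prediction pretext task, a backbone is pre-trained on them, and the resulting weights are fine-tuned on the target diagnosis task. Rotation prediction, the ResNet-18 backbone, and the random placeholder tensors are illustrative assumptions, not the paper's three evaluated pipelines.

```python
# Minimal, hypothetical sketch of the pretext-then-fine-tune recipe described above.
# Assumptions (not from the paper): PyTorch/torchvision, a ResNet-18 backbone,
# rotation prediction as the pretext task, and random tensors as placeholder data.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def rotate_batch(images):
    """Synthesize pseudo-labels by rotating each image 0/90/180/270 degrees."""
    pseudo_labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([
        torch.rot90(img, k=int(k), dims=(1, 2))
        for img, k in zip(images, pseudo_labels)
    ])
    return rotated, pseudo_labels


# --- Self-supervised pre-training on the pretext task (no manual annotations) ---
backbone = resnet18(weights=None)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Linear(feat_dim, 4)                 # 4 rotation classes
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for _ in range(2):                                   # a couple of toy steps
    images = torch.randn(8, 3, 224, 224)             # placeholder unlabeled images
    rotated, pseudo_labels = rotate_batch(images)
    loss = criterion(backbone(rotated), pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# --- Fine-tuning on the target task (e.g., binary skin-lesion diagnosis) ---
backbone.fc = nn.Linear(feat_dim, 2)                 # swap pretext head for task head
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

for _ in range(2):
    images = torch.randn(8, 3, 224, 224)             # placeholder labeled lesion images
    labels = torch.randint(0, 2, (8,))
    loss = criterion(backbone(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A real pipeline would replace the rotation objective with whichever self-supervised method is being evaluated and fine-tune on actual dermoscopy datasets rather than placeholder tensors.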
Related papers
- Self-Supervised Pretraining Improves Performance and Inference Efficiency in Multiple Lung Ultrasound Interpretation Tasks [65.23740556896654]
We investigated whether self-supervised pretraining could produce a neural network feature extractor applicable to multiple classification tasks in lung ultrasound analysis.
When fine-tuning on three lung ultrasound tasks, pretrained models improved the average across-task area under the receiver operating characteristic curve (AUC) by 0.032 and 0.061 on local and external test sets, respectively.
arXiv Detail & Related papers (2023-09-05T21:36:42Z)
- A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images [71.26717896083433]
Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning.
This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging.
arXiv Detail & Related papers (2023-09-05T19:45:09Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores (a minimal sketch follows this entry).
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
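As a concrete illustration of the entry above, the hypothetical sketch below uses a masked-feature regression problem as the self-supervised pretext task and feeds its absolute error, alongside the original features, into a difficulty model that normalizes split-conformal nonconformity scores. scikit-learn, the toy data, and all names are illustrative assumptions, not the paper's implementation.

```python
# Minimal, hypothetical sketch: self-supervised error as an extra feature for
# nonconformity estimation in split conformal prediction. scikit-learn, the toy
# data, and the masked-feature pretext task are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3 + 0.3 * np.abs(X[:, 2]), size=600)
X_train, y_train = X[:300], y[:300]
X_cal, y_cal = X[300:500], y[300:500]
X_test, y_test = X[500:], y[500:]

# Main predictive model.
model = GradientBoostingRegressor().fit(X_train, y_train)

# Auxiliary self-supervised model: predict a "masked" input column from the rest;
# its absolute error serves as a per-sample difficulty signal.
aux = GradientBoostingRegressor().fit(np.delete(X_train, 0, axis=1), X_train[:, 0])


def ssl_error(features):
    return np.abs(aux.predict(np.delete(features, 0, axis=1)) - features[:, 0])


# Difficulty model: original features + self-supervised error -> expected residual size.
residuals_train = np.abs(model.predict(X_train) - y_train)
difficulty = GradientBoostingRegressor().fit(
    np.column_stack([X_train, ssl_error(X_train)]), residuals_train)


def sigma(features):
    return np.maximum(
        difficulty.predict(np.column_stack([features, ssl_error(features)])), 1e-3)


# Split-conformal calibration with normalized nonconformity scores |y - y_hat| / sigma(x).
# (The finite-sample quantile correction is omitted for brevity.)
scores = np.abs(model.predict(X_cal) - y_cal) / sigma(X_cal)
q = np.quantile(scores, 0.9)
lower = model.predict(X_test) - q * sigma(X_test)
upper = model.predict(X_test) + q * sigma(X_test)
print(f"empirical coverage: {np.mean((y_test >= lower) & (y_test <= upper)):.2f}")
```

Larger self-supervised errors inflate sigma(x), so the intervals widen on inputs the auxiliary task finds hard, which is the intuition behind using the pretext error as an extra nonconformity feature.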
- On Transfer of Adversarial Robustness from Pretraining to Downstream Tasks [1.8900691517352295]
We show that the robustness of a linear predictor on downstream tasks can be constrained by the robustness of its underlying representation.
Our results offer an initial step towards characterizing the requirements of the representation function for reliable post-adaptation performance.
arXiv Detail & Related papers (2022-08-07T23:00:40Z)
- Improving In-Context Few-Shot Learning via Self-Supervised Training [48.801037246764935]
We propose to use self-supervision in an intermediate training stage between pretraining and downstream few-shot usage.
We find that the intermediate self-supervision stage produces models that outperform strong baselines.
arXiv Detail & Related papers (2022-05-03T18:01:07Z)
- Better Self-training for Image Classification through Self-supervision [3.492636597449942]
Self-supervision is learning without manual supervision by solving an automatically-generated pretext task.
This paper investigates three ways of incorporating self-supervision into self-training to improve accuracy in image classification.
arXiv Detail & Related papers (2021-09-02T08:24:41Z)
- Improve Unsupervised Pretraining for Few-label Transfer [80.58625921631506]
We find that the conclusion that unsupervised pretraining transfers as well as supervised pretraining may not hold when the target dataset has very few labeled samples for finetuning.
We propose a new progressive few-label transfer algorithm for real applications.
arXiv Detail & Related papers (2021-07-26T17:59:56Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)