Evaluating the Robustness of Self-Supervised Learning in Medical Imaging
- URL: http://arxiv.org/abs/2105.06986v1
- Date: Fri, 14 May 2021 17:49:52 GMT
- Title: Evaluating the Robustness of Self-Supervised Learning in Medical Imaging
- Authors: Fernando Navarro, Christopher Watanabe, Suprosanna Shit, Anjany
Sekuboyina, Jan C. Peeken, Stephanie E. Combs and Bjoern H. Menze
- Abstract summary: Self-supervision has proven to be an effective learning strategy when training target tasks on small annotated datasets.
We show that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging.
- Score: 57.20012795524752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervision has proven to be an effective learning strategy
when training target tasks on small annotated datasets. While current research
focuses on creating novel pretext tasks to learn meaningful and reusable
representations for the target task, these efforts obtain marginal performance
gains compared to fully-supervised learning. Meanwhile, little attention has
been given to study the robustness of networks trained in a self-supervised
manner. In this work, we demonstrate that networks trained via self-supervised
learning have superior robustness and generalizability compared to
fully-supervised learning in the context of medical imaging. Our experiments on
pneumonia detection in X-rays and multi-organ segmentation in CT yield
consistent results exposing the hidden benefits of self-supervision for
learning robust feature representations.
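The pretext tasks discussed in the abstract can be illustrated with a minimal sketch. Below is a toy, dependency-free example of the common "rotation prediction" pretext task (an assumed illustration, not necessarily the task used in this paper): each image is rotated by 0/90/180/270 degrees and the network must predict which rotation was applied, so the supervisory labels come for free without annotations.

```python
# Sketch of the rotation-prediction pretext task: generate the four rotated
# views of an image together with their pseudo-labels (0..3). No annotated
# labels are needed; the rotation index itself is the training target.

def rotate90(img):
    """Rotate a 2D grid (list of lists) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def make_rotation_views(img):
    """Return the four rotated views and their pseudo-labels 0..3."""
    views, labels = [], []
    current = img
    for k in range(4):
        views.append(current)
        labels.append(k)
        current = rotate90(current)
    return views, labels

img = [[1, 2],
       [3, 4]]
views, labels = make_rotation_views(img)
# views[0] is the original image; labels enumerate the applied rotations
```

In a real pipeline these (view, label) pairs would pretrain an encoder whose features are then fine-tuned on the small annotated target task.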
Related papers
- A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images [71.26717896083433]
Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning.
This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging.
arXiv Detail & Related papers (2023-09-05T19:45:09Z)
- Semi-Supervised Relational Contrastive Learning [8.5285439285139]
We present a novel semi-supervised learning model that leverages self-supervised contrastive loss and consistency.
We validate on the ISIC 2018 Challenge skin lesion classification benchmark and demonstrate the effectiveness of our method with varying amounts of labeled data.
arXiv Detail & Related papers (2023-04-11T08:14:30Z)
- Functional Knowledge Transfer with Self-supervised Representation Learning [11.566644244783305]
This work investigates the largely unexplored use of self-supervised representation learning for functional knowledge transfer.
Functional knowledge transfer is achieved by jointly optimizing a self-supervised pseudo task and a supervised learning task.
arXiv Detail & Related papers (2023-03-12T21:14:59Z)
- Composite Learning for Robust and Effective Dense Predictions [81.2055761433725]
Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task.
We find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need for labeling auxiliary tasks.
arXiv Detail & Related papers (2022-10-13T17:59:16Z)
- Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
arXiv Detail & Related papers (2021-12-14T21:16:52Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
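The contrastive objective underlying methods like this one can be sketched in a few lines. Below is a toy, dependency-free InfoNCE-style loss (an illustrative assumption, not Co$^2$L's actual objective): two views of the same sample should score higher against each other than against other samples in the batch.

```python
import math

# Minimal InfoNCE-style contrastive loss on toy embeddings.
# The loss is small when the anchor is much closer (in cosine
# similarity) to its positive view than to the negatives.

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log( exp(sim(a,p)/t) / sum of exp over positive and negatives )."""
    logits = [cos_sim(anchor, positive) / temperature]
    logits += [cos_sim(anchor, n) / temperature for n in negatives]
    denom = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[0]) / denom)

anchor    = [1.0, 0.0]
positive  = [0.9, 0.1]               # augmented view of the same sample
negatives = [[0.0, 1.0], [-1.0, 0.2]]
loss = info_nce(anchor, positive, negatives)
# loss shrinks as the positive pulls closer and the negatives push away
```

Continual-learning variants such as the one summarized above add mechanisms (e.g. rehearsal buffers) so representations learned this way stay transferable across tasks.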
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Self-supervised driven consistency training for annotation efficient histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with task-specific unlabeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
- Improving colonoscopy lesion classification using semi-supervised deep learning [2.568264809297699]
Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data.
We demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images.
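The jigsaw pretext task mentioned above can be sketched simply. Below is a toy, dependency-free illustration (an assumption about the general technique, not this paper's exact setup): an image is cut into tiles, the tiles are shuffled with one of a fixed set of permutations, and the permutation index serves as a free pseudo-label for the network to predict.

```python
import itertools

# Sketch of a jigsaw pretext task: cut an image into tiles, shuffle them
# with a known permutation, and use the permutation index as the label.
# Real setups typically use ~9 tiles and a subset of maximally distant
# permutations; 4 tiles keep this example small.

def split_tiles(img, tile):
    """Cut a square 2D grid into tile x tile sub-grids, row-major order."""
    n = len(img)
    tiles = []
    for r in range(0, n, tile):
        for c in range(0, n, tile):
            tiles.append([row[c:c + tile] for row in img[r:r + tile]])
    return tiles

# A small fixed permutation set over 4 tiles.
PERMS = list(itertools.permutations(range(4)))[:8]

def make_jigsaw_example(img, perm_idx, tile=2):
    tiles = split_tiles(img, tile)
    shuffled = [tiles[i] for i in PERMS[perm_idx]]
    return shuffled, perm_idx   # (shuffled tiles, pseudo-label)

img = [[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]]
shuffled, label = make_jigsaw_example(img, perm_idx=3)
```

Solving this puzzle forces the network to learn spatial structure, which is then reused by the supervised classifier.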
arXiv Detail & Related papers (2020-09-07T15:25:35Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.