Dive into Self-Supervised Learning for Medical Image Analysis: Data,
Models and Tasks
- URL: http://arxiv.org/abs/2209.12157v2
- Date: Mon, 17 Apr 2023 03:16:02 GMT
- Title: Dive into Self-Supervised Learning for Medical Image Analysis: Data,
Models and Tasks
- Authors: Chuyan Zhang and Yun Gu
- Abstract summary: Self-supervised learning has achieved remarkable performance in various medical imaging tasks by dint of priors from massive unlabelled data.
We focus on exploiting the capacity of SSL in terms of four realistic and significant issues.
We provide a large-scale, in-depth and fine-grained study through extensive experiments on predictive, contrastive, generative and multi-SSL algorithms.
- Score: 8.720079280914169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL) has achieved remarkable performance in various
medical imaging tasks by dint of priors from massive unlabelled data. However,
regarding a specific downstream task, there is still a lack of an instruction
book on how to select suitable pretext tasks and implementation details
throughout the standard "pretrain-then-finetune" workflow. In this work, we
focus on exploiting the capacity of SSL in terms of four realistic and
significant issues: (1) the impact of SSL on imbalanced datasets, (2) the
network architecture, (3) the applicability of upstream tasks to downstream
tasks and (4) the stacking effect of SSL and common policies for deep learning.
We provide a large-scale, in-depth and fine-grained study through extensive
experiments on predictive, contrastive, generative and multi-SSL algorithms.
Based on the results, we have uncovered several insights. Positively, SSL
advances class-imbalanced learning mainly by boosting the performance of the
rare class, which is of interest to clinical diagnosis. Unfortunately, SSL
offers marginal or even negative returns in some cases, including severely
imbalanced and relatively balanced data regimes, as well as combinations with
common training policies. Our intriguing findings provide practical guidelines
for the usage of SSL in the medical context and highlight the need for
developing universal pretext tasks to accommodate diverse application
scenarios.
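
To make the "pretrain-then-finetune" workflow discussed above concrete, the following is a minimal, hypothetical sketch rather than the authors' implementation: a small PyTorch encoder is pretrained on unlabelled images with a rotation-prediction pretext task (one example of the predictive SSL family mentioned in the abstract) and is then fine-tuned on a small labelled downstream set. The tiny CNN, the random tensors standing in for medical images, and all hyperparameters are placeholder assumptions.

# Illustrative sketch only; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(feat_dim=128):
    # Tiny CNN backbone; a real study would use e.g. a ResNet or a ViT.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, feat_dim),
    )

# Stage 1: self-supervised pretraining on unlabelled data (rotation prediction).
encoder = make_encoder()
pretext_head = nn.Linear(128, 4)     # classify the applied rotation: 0/90/180/270 degrees
opt = torch.optim.Adam(list(encoder.parameters()) + list(pretext_head.parameters()), lr=1e-3)

for step in range(10):               # stand-in for a full pretraining schedule
    x = torch.randn(32, 1, 64, 64)   # unlabelled "images"
    k = torch.randint(0, 4, (32,))   # pseudo-label: rotation index per image
    x_rot = torch.stack([torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(x, k)])
    loss = F.cross_entropy(pretext_head(encoder(x_rot)), k)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning on the (small, labelled) downstream dataset.
classifier = nn.Linear(128, 2)       # e.g. a rare class vs. a common class
ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)

for step in range(10):
    x, y = torch.randn(16, 1, 64, 64), torch.randint(0, 2, (16,))
    loss = F.cross_entropy(classifier(encoder(x)), y)
    ft_opt.zero_grad(); loss.backward(); ft_opt.step()

The four issues studied in the paper (class imbalance, network architecture, upstream-to-downstream applicability, and interaction with common training policies) all correspond to choices made inside these two stages.
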
Related papers
- A Survey of the Self Supervised Learning Mechanisms for Vision Transformers [5.152455218955949]
The application of self-supervised learning (SSL) in vision tasks has gained significant attention.
We develop a comprehensive taxonomy that systematically classifies SSL techniques.
We discuss the motivations behind SSL, review popular pre-training tasks, and highlight the challenges and advancements in this field.
arXiv Detail & Related papers (2024-08-30T07:38:28Z)
- Self-supervised visual learning in the low-data regime: a comparative evaluation [40.27083924454058]
Self-Supervised Learning (SSL) is a robust training methodology for contemporary Deep Neural Networks (DNNs).
This work introduces a taxonomy of modern visual SSL methods, accompanied by detailed explanations and insights regarding the main categories of approaches.
For domain-specific downstream tasks, in-domain low-data SSL pretraining outperforms the common approach of large-scale pretraining.
arXiv Detail & Related papers (2024-04-26T07:23:14Z)
- Self-Supervision for Tackling Unsupervised Anomaly Detection: Pitfalls and Opportunities [50.231837687221685]
Self-supervised learning (SSL) has transformed machine learning and its many real-world applications.
Unsupervised anomaly detection (AD) has also capitalized on SSL by self-generating pseudo-anomalies.
arXiv Detail & Related papers (2023-08-28T07:55:01Z)
- A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends [82.64268080902742]
Self-supervised learning (SSL) aims to learn discriminative features from unlabeled data without relying on human-annotated labels.
SSL has garnered significant attention recently, leading to the development of numerous related algorithms.
This paper presents a review of diverse SSL methods, encompassing algorithmic aspects, application domains, three key trends, and open research questions.
arXiv Detail & Related papers (2023-01-13T14:41:05Z)
- Understanding and Improving the Role of Projection Head in Self-Supervised Learning [77.59320917894043]
Self-supervised learning (SSL) aims to produce useful feature representations without access to human-labeled data annotations.
Current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective.
This raises a fundamental question: Why is a learnable projection head required if we are to discard it after training?
arXiv Detail & Related papers (2022-12-22T05:42:54Z)
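
As a concrete illustration of the setup summarised in the entry above, here is a minimal, hypothetical sketch (not the cited paper's implementation) of a backbone with a parametrized projection head optimized with an InfoNCE-style (NT-Xent) objective, after which the head is discarded and only the backbone features are kept. The tiny backbone, the noise-based stand-in "augmentations", and all shapes and hyperparameters are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Backbone producing the representation kept for downstream tasks, plus a
# projection head that exists only to optimize the contrastive objective.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
projection_head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))

def info_nce(z1, z2, temperature=0.5):
    # NT-Xent: each embedding's positive is the other view of the same image.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # (2n, d)
    sim = (z @ z.t()) / temperature                                # cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

opt = torch.optim.Adam(
    list(backbone.parameters()) + list(projection_head.parameters()), lr=1e-3)

for step in range(10):                                             # stand-in for a full schedule
    x = torch.randn(32, 3, 32, 32)                                 # unlabelled batch
    v1 = x + 0.1 * torch.randn_like(x)                             # placeholder "augmentations"
    v2 = x + 0.1 * torch.randn_like(x)
    loss = info_nce(projection_head(backbone(v1)), projection_head(backbone(v2)))
    opt.zero_grad(); loss.backward(); opt.step()

# After pretraining the projection head is thrown away: only the backbone's
# 256-d features are reused for the downstream task.
features = backbone(torch.randn(8, 3, 32, 32))                     # (8, 256)
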
- Benchmarking Self-Supervised Learning on Diverse Pathology Datasets [10.868779327544688]
Self-supervised learning has been shown to be an effective method for utilizing unlabeled data.
We execute the largest-scale study of SSL pre-training on pathology image data.
For the first time, we apply SSL to the challenging task of nuclei instance segmentation.
arXiv Detail & Related papers (2022-12-09T06:38:34Z)
- Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of Semi-Supervised Learning and Active Learning [60.26659373318915]
Active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means to alleviate the data-hungry problem.
We propose an innovative inconsistency-based virtual adversarial algorithm to further investigate SSL-AL's potential superiority.
Two real-world case studies visualize the practical industrial value of applying and deploying the proposed data sampling algorithm.
arXiv Detail & Related papers (2022-06-07T13:28:43Z)
- DATA: Domain-Aware and Task-Aware Pre-training [94.62676913928831]
We present DATA, a simple yet effective neural architecture search (NAS) approach specialized for self-supervised learning (SSL).
Our method achieves promising results across a wide range of computation costs on downstream tasks, including image classification, object detection and semantic segmentation.
arXiv Detail & Related papers (2022-03-17T02:38:49Z)
- Self-supervised Learning is More Robust to Dataset Imbalance [65.84339596595383]
We investigate self-supervised learning under dataset imbalance.
Off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations.
We devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets.
arXiv Detail & Related papers (2021-10-11T06:29:56Z)
- Improving Self-supervised Learning with Hardness-aware Dynamic Curriculum Learning: An Application to Digital Pathology [2.2742357407157847]
Self-supervised learning (SSL) has recently shown tremendous potential to learn generic visual representations useful for many image analysis tasks.
Existing SSL methods fail to generalize to downstream tasks when the number of labeled training instances is small or when the domain shift between the pretraining and downstream domains is significant.
This paper attempts to improve self-supervised pretrained representations through the lens of curriculum learning.
arXiv Detail & Related papers (2021-08-16T15:44:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.