A Knowledge-based Learning Framework for Self-supervised Pre-training
Towards Enhanced Recognition of Medical Images
- URL: http://arxiv.org/abs/2211.14715v1
- Date: Sun, 27 Nov 2022 03:58:58 GMT
- Title: A Knowledge-based Learning Framework for Self-supervised Pre-training
Towards Enhanced Recognition of Medical Images
- Authors: Wei Chen, Chen Li, Dan Chen, Xin Luo
- Abstract summary: This study proposes a knowledge-based learning framework towards enhanced recognition of medical images.
It works in three phases by synergizing contrastive learning and generative learning models.
The proposed framework statistically excels on self-supervised benchmarks, achieving improvements of 2.08, 1.23, 1.12, 0.76, and 1.38 percentage points over SimCLR in AUC/Dice.
- Score: 14.304996977665212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised pre-training has become the primary choice for
establishing reliable models for automated recognition of massive medical
images, which are routinely annotation-free, lack semantics, and carry no
guarantee of quality. This paradigm is still in its infancy and is limited by
two closely related open issues: 1) how to learn robust representations in an
unsupervised manner from unlabelled medical images with low sample diversity,
and 2) how to obtain the most significant representations demanded by
high-quality segmentation. Aiming at these issues, this study proposes a knowledge-based
learning framework towards enhanced recognition of medical images, which works
in three phases by synergizing contrastive learning and generative learning
models: 1) Sample Space Diversification: Reconstructive proxy tasks have been
enabled to embed a priori knowledge with context highlighted to diversify the
expanded sample space; 2) Enhanced Representation Learning: Informative
noise-contrastive estimation loss regularizes the encoder to enhance
representation learning of annotation-free images; 3) Correlated Optimization:
Optimization operations in pre-training the encoder and the decoder have been
correlated via image restoration from proxy tasks, targeting the need for
semantic segmentation. Extensive experiments have been performed on various
public medical image datasets (e.g., CheXpert and DRIVE) against the
state-of-the-art counterparts (e.g., SimCLR and MoCo), and results demonstrate
that: The proposed framework statistically excels on self-supervised
benchmarks, achieving improvements of 2.08, 1.23, 1.12, 0.76, and 1.38
percentage points over SimCLR in AUC/Dice. The proposed framework achieves
label-efficient semi-supervised learning, e.g., reducing the annotation cost by
up to 99% in pathological classification.
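The informative noise-contrastive estimation loss described in phase 2 follows the general InfoNCE form: an anchor embedding is pulled toward its positive view and pushed away from negatives via a softmax cross-entropy over similarities. The sketch below is a generic InfoNCE formulation, not the paper's exact loss; the function name and the temperature value are illustrative assumptions.

```python
import math

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE sketch: cross-entropy over cosine similarities,
    with the positive view as the target class (index 0)."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    # Temperature-scaled similarity logits: positive first, then negatives.
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]

    # Numerically stable log-sum-exp; loss = -log softmax(positive).
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_denom - logits[0]
```

A well-aligned positive yields a near-zero loss, while a positive that is less similar to the anchor than the negatives yields a large loss, which is what regularizes the encoder toward discriminative representations.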
Related papers
- Cross-model Mutual Learning for Exemplar-based Medical Image Segmentation [25.874281336821685]
We introduce a novel Cross-model Mutual learning framework for Exemplar-based Medical image (CMEMS)
arXiv Detail & Related papers (2024-04-18T00:18:07Z)
- AlignZeg: Mitigating Objective Misalignment for Zero-shot Semantic Segmentation [123.88875931128342]
A serious issue that harms the performance of zero-shot visual recognition is named objective misalignment.
We propose a novel architecture named AlignZeg, which embodies a comprehensive improvement of the segmentation pipeline.
Experiments demonstrate that AlignZeg markedly enhances zero-shot semantic segmentation.
arXiv Detail & Related papers (2024-04-08T16:51:33Z)
- Augmentation is AUtO-Net: Augmentation-Driven Contrastive Multiview Learning for Medical Image Segmentation [3.1002416427168304]
This thesis focuses on retinal blood vessel segmentation tasks.
It provides an extensive literature review of deep learning-based medical image segmentation approaches.
It proposes a novel efficient, simple multiview learning framework.
arXiv Detail & Related papers (2023-11-02T06:31:08Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach involves identifying super-pixels with Felzenszwalb's algorithm and performing local contrastive learning using a novel contrastive sampling loss.
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- Learning Discriminative Representation via Metric Learning for Imbalanced Medical Image Classification [52.94051907952536]
We propose embedding metric learning into the first stage of the two-stage framework specially to help the feature extractor learn to extract more discriminative feature representations.
Experiments mainly on three medical image datasets show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
arXiv Detail & Related papers (2022-07-14T14:57:01Z)
- Exploring Feature Representation Learning for Semi-supervised Medical Image Segmentation [30.608293915653558]
We present a two-stage framework for semi-supervised medical image segmentation.
The key insight is to explore feature representation learning with labeled and unlabeled (i.e., pseudo-labeled) images.
A stage-adaptive contrastive learning method is proposed, containing a boundary-aware contrastive loss.
We present an aleatoric uncertainty-aware method, namely AUA, to generate higher-quality pseudo labels.
arXiv Detail & Related papers (2021-11-22T05:06:12Z)
- Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T03:27:58Z)
- Cascaded Robust Learning at Imperfect Labels for Chest X-ray Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, each of which can effectively learn useful information from its peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.