Semi-Supervised Relational Contrastive Learning
- URL: http://arxiv.org/abs/2304.05047v2
- Date: Tue, 13 Jun 2023 06:34:37 GMT
- Title: Semi-Supervised Relational Contrastive Learning
- Authors: Attiano Purpura-Pontoniere, Demetri Terzopoulos, Adam Wang, Abdullah-Al-Zubaer Imran
- Abstract summary: We present a novel semi-supervised learning model that leverages self-supervised contrastive loss and sample relation consistency.
We validate against the ISIC 2018 Challenge benchmark skin lesion classification dataset and demonstrate the effectiveness of our method on varying amounts of labeled data.
- Score: 8.5285439285139
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disease diagnosis from medical images via supervised learning is usually
dependent on tedious, error-prone, and costly image labeling by medical
experts. Alternatively, semi-supervised learning and self-supervised learning
offer effectiveness through the acquisition of valuable insights from readily
available unlabeled images. We present Semi-Supervised Relational Contrastive
Learning (SRCL), a novel semi-supervised learning model that leverages
self-supervised contrastive loss and sample relation consistency for the more
meaningful and effective exploitation of unlabeled data. Our experimentation
with the SRCL model explores both pre-train/fine-tune and joint learning of the
pretext (contrastive learning) and downstream (diagnostic classification)
tasks. We validate against the ISIC 2018 Challenge benchmark skin lesion
classification dataset and demonstrate the effectiveness of our semi-supervised
method on varying amounts of labeled data.
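The abstract does not give implementation details; purely as an illustration of the joint-learning setting it describes (a supervised diagnostic-classification loss on labeled images combined with a self-supervised contrastive loss on unlabeled images), the following minimal PyTorch-style sketch may help. All names (nt_xent_loss, encoder, proj_head, classifier, lam) and the SimCLR-style NT-Xent formulation are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss over two augmented views (hypothetical sketch)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2N, d) stacked projections
    sim = z @ z.t() / temperature                       # pairwise cosine-similarity logits
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))               # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                # positive pair = the other view

def joint_step(encoder, proj_head, classifier, labeled, unlabeled, lam=0.5):
    """One joint-training step: supervised cross-entropy on a labeled batch plus
    a contrastive term on two augmentations of an unlabeled batch (assumed setup)."""
    x_l, y_l = labeled                                   # labeled images and diagnoses
    u1, u2 = unlabeled                                   # two augmented views of unlabeled images
    sup_loss = F.cross_entropy(classifier(encoder(x_l)), y_l)
    con_loss = nt_xent_loss(proj_head(encoder(u1)), proj_head(encoder(u2)))
    return sup_loss + lam * con_loss
```

In the pre-train/fine-tune variant the abstract also mentions, the contrastive term would presumably be minimized on its own first, after which the classifier (and optionally the encoder) is trained with the supervised term.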
Related papers
- Integration of Self-Supervised BYOL in Semi-Supervised Medical Image Recognition [10.317372960942972]
We propose an innovative approach by integrating self-supervised learning into semi-supervised models to enhance medical image recognition.
Our approach optimally leverages unlabeled data, outperforming existing methods in terms of accuracy for medical image recognition.
arXiv Detail & Related papers (2024-04-16T09:12:16Z) - MLIP: Enhancing Medical Visual Representation with Divergence Encoder
and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z) - Contrastive learning for unsupervised medical image clustering and
reconstruction [0.23624125155742057]
We propose an unsupervised autoencoder framework which is augmented with a contrastive loss to encourage high separability in the latent space.
Our method achieves similar performance to the supervised architecture, indicating that separation in the latent space reproduces expert medical observer-assigned labels.
arXiv Detail & Related papers (2022-09-24T13:17:02Z) - Consistency-Based Semi-supervised Evidential Active Learning for
Diagnostic Radiograph Classification [2.3545156585418328]
We introduce a novel Consistency-based Semi-supervised Evidential Active Learning framework (CSEAL).
We leverage predictive uncertainty based on theories of evidence and subjective logic to develop an end-to-end integrated approach.
Our approach can substantially improve accuracy on rarer abnormalities with fewer labelled samples.
arXiv Detail & Related papers (2022-09-05T09:28:31Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Incorporating Semi-Supervised and Positive-Unlabeled Learning for
Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it appealing to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we propose incorporating semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z) - Lesion-based Contrastive Learning for Diabetic Retinopathy Grading from
Fundus Images [2.498907460918493]
We propose a self-supervised framework, namely lesion-based contrastive learning, for automated diabetic retinopathy (DR) grading.
Our proposed framework performs strongly on DR grading under both linear evaluation and transfer-capacity evaluation.
arXiv Detail & Related papers (2021-07-17T16:30:30Z) - Evaluating the Robustness of Self-Supervised Learning in Medical Imaging [57.20012795524752]
Self-supervision has been shown to be an effective learning strategy when training target tasks on small annotated datasets.
We show that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging.
arXiv Detail & Related papers (2021-05-14T17:49:52Z) - Dual-Consistency Semi-Supervised Learning with Uncertainty
Quantification for COVID-19 Lesion Segmentation from CT Images [49.1861463923357]
We propose an uncertainty-guided dual-consistency learning network (UDC-Net) for semi-supervised COVID-19 lesion segmentation from CT images.
Our proposed UDC-Net improves the fully supervised method by 6.3% in Dice and outperforms other competitive semi-supervised approaches by significant margins.
arXiv Detail & Related papers (2021-04-07T16:23:35Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging prediction consistency for a given input under perturbations (a minimal sketch of this consistency idea appears after this list).
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z) - Synergic Adversarial Label Learning for Grading Retinal Diseases via
Knowledge Distillation and Multi-task Learning [29.46896757506273]
Images annotated by well-qualified doctors are very expensive, and only a limited amount of data is available for various retinal diseases.
Some studies show that AMD and DR share common features such as hemorrhagic points and exudation, but most classification algorithms train these disease models independently.
We propose a method called synergic adversarial label learning (SALL), which leverages relevant retinal disease labels in both semantic and feature space as additional signals and trains the model in a collaborative manner.
arXiv Detail & Related papers (2020-03-24T01:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.