Learning Self-Supervised Representations for Label Efficient
Cross-Domain Knowledge Transfer on Diabetic Retinopathy Fundus Images
- URL: http://arxiv.org/abs/2304.11168v1
- Date: Thu, 20 Apr 2023 12:46:34 GMT
- Title: Learning Self-Supervised Representations for Label Efficient
Cross-Domain Knowledge Transfer on Diabetic Retinopathy Fundus Images
- Authors: Ekta Gupta, Varun Gupta, Muskaan Chopra, Prakash Chandra Chhipa and
Marcus Liwicki
- Abstract summary: This work presents a novel self-supervised representation learning-based approach for classifying diabetic retinopathy (DR) images in cross-domain settings.
The proposed method achieves state-of-the-art results on binary and multi-class classification of DR images, even in cross-domain settings.
- Score: 2.796274924103132
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents a novel label-efficient self-supervised
representation learning-based approach for classifying diabetic retinopathy
(DR) images in cross-domain settings. Most existing DR image classification
methods are based on supervised learning, which requires large amounts of
time-consuming and expensive expert-annotated medical data for training. The
proposed approach uses prior learning from a source DR image dataset to classify
images drawn from the target datasets. The image representations learned from
the unlabeled source domain dataset through contrastive learning are used to
classify DR images from the target domain dataset. Moreover, the proposed
approach requires a few labeled images to perform successfully on DR image
classification tasks in cross-domain settings. The proposed work experiments
with four publicly available datasets: EyePACS, APTOS 2019, MESSIDOR-I, and
Fundus Images for self-supervised representation learning-based DR image
classification in cross-domain settings. The proposed method achieves
state-of-the-art results on binary and multi-class classification of DR
images, even in cross-domain settings, and outperforms the existing DR image
classification methods proposed in the literature. The
proposed method is also validated qualitatively using class activation maps,
revealing that the method can learn explainable image representations. The
source code and trained models are published on GitHub.
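The core pre-training step the abstract describes, learning image representations from unlabeled source-domain data via contrastive learning, typically relies on an NT-Xent-style objective that pulls two augmented views of the same image together and pushes other images apart. The sketch below is illustrative only (a generic SimCLR-style loss in NumPy, not the authors' published code); the function name, batch layout, and temperature value are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss,
    the contrastive objective commonly used for self-supervised
    pre-training. z1, z2: (N, D) embeddings of two augmented views
    of the same N images; row i of z1 and row i of z2 are positives."""
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # pairwise cosine sims
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    # positive partner of index i is i+n (and of i+n is i)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), targets] - logsumexp)
    return loss.mean()
```

Embeddings of matching views yield a low loss, while unrelated embeddings yield a high one; minimizing this loss over unlabeled fundus images produces the representations that are then fine-tuned with only a few target-domain labels.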
Related papers
- CSP: Self-Supervised Contrastive Spatial Pre-Training for
Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts the model performance with 10-34% relative improvement with various labeled training data sampling ratios.
arXiv Detail & Related papers (2023-05-01T23:11:18Z) - Domain Generalization for Mammographic Image Analysis with Contrastive
Learning [62.25104935889111]
The training of an efficacious deep learning model requires large data with diverse styles and qualities.
A novel contrastive learning method is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z) - Vision-Language Modelling For Radiological Imaging and Reports In The
Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z) - Few Shot Medical Image Segmentation with Cross Attention Transformer [30.54965157877615]
We propose a novel framework for few-shot medical image segmentation, termed CAT-Net.
Our proposed network mines the correlations between the support image and query image, limiting them to focus only on useful foreground information.
We validated the proposed method on three public datasets: Abd-CT, Abd-MRI, and Card-MRI.
arXiv Detail & Related papers (2023-03-24T09:10:14Z) - SSiT: Saliency-guided Self-supervised Image Transformer for Diabetic
Retinopathy Grading [2.0790896742002274]
Saliency-guided Self-Supervised image Transformer (SSiT) is proposed for Diabetic Retinopathy grading from fundus images.
Saliency maps are introduced into SSL to guide self-supervised pre-training with domain-specific prior knowledge.
arXiv Detail & Related papers (2022-10-20T02:35:26Z) - Unsupervised Domain Adaptation Using Feature Disentanglement And GCNs
For Medical Image Classification [5.6512908295414]
We propose an unsupervised domain adaptation approach that uses graph neural networks together with disentangled semantic and domain-invariant structural features.
We test the proposed method for classification on two challenging medical image datasets with distribution shifts.
Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
arXiv Detail & Related papers (2022-06-27T09:02:16Z) - Self-Supervised Generative Style Transfer for One-Shot Medical Image
Segmentation [10.634870214944055]
In medical image segmentation, supervised deep networks' success comes at the cost of requiring abundant labeled data.
We propose a novel volumetric self-supervised learning for data augmentation capable of synthesizing volumetric image-segmentation pairs.
Our work's central tenet benefits from a combined view of one-shot generative learning and the proposed self-supervised training strategy.
arXiv Detail & Related papers (2021-10-05T15:28:42Z) - Colorectal Polyp Classification from White-light Colonoscopy Images via
Domain Alignment [57.419727894848485]
A computer-aided diagnosis system is required to assist accurate diagnosis from colonoscopy images.
Most previous studies attempt to develop models for polyp differentiation using Narrow-Band Imaging (NBI) or other enhanced images.
We propose a novel framework based on a teacher-student architecture for accurate colorectal polyp classification.
arXiv Detail & Related papers (2021-08-05T09:31:46Z) - Self-Supervised Domain Adaptation for Diabetic Retinopathy Grading using
Vessel Image Reconstruction [61.58601145792065]
We learn invariant target-domain features by defining a novel self-supervised task based on retinal vessel image reconstructions.
Our approach outperforms existing domain adaptation strategies.
arXiv Detail & Related papers (2021-07-20T09:44:07Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Unlabeled Data Guided Semi-supervised Histopathology Image Segmentation [34.45302976822067]
Semi-supervised learning (SSL) based on generative methods has been proven to be effective in utilizing diverse image characteristics.
We propose a new data guided generative method for histopathology image segmentation by leveraging the unlabeled data distributions.
Our method is evaluated on glands and nuclei datasets.
arXiv Detail & Related papers (2020-12-17T02:54:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.