Self-Adaptive Transfer Learning for Multicenter Glaucoma Classification
in Fundus Retina Images
- URL: http://arxiv.org/abs/2105.03068v1
- Date: Fri, 7 May 2021 05:20:37 GMT
- Title: Self-Adaptive Transfer Learning for Multicenter Glaucoma Classification
in Fundus Retina Images
- Authors: Yiming Bao, Jun Wang, Tong Li, Linyan Wang, Jianwei Xu, Juan Ye and
Dahong Qian
- Abstract summary: We propose a self-adaptive transfer learning (SATL) strategy to fill the domain gap between multicenter datasets.
Specifically, the encoder of a DL model that is pre-trained on the source domain is used to initialize the encoder of a reconstruction model.
Results demonstrate that the proposed SATL strategy is effective in the domain adaptation task between a private and two public glaucoma diagnosis datasets.
- Score: 9.826586293806837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The early diagnosis and screening of glaucoma are important for patients to
receive treatment in time and maintain eyesight. Nowadays, deep learning (DL)
based models have been successfully used for computer-aided diagnosis (CAD) of
glaucoma from retina fundus images. However, a DL model pre-trained on a
dataset from one hospital center may perform poorly on a dataset from another,
new hospital center, which limits its use in real-world settings. In this
paper, we propose a self-adaptive transfer learning (SATL) strategy to fill
the domain gap between multicenter datasets. Specifically, the encoder of a DL
model that is pre-trained on the source domain is used to initialize the
encoder of a reconstruction model. The reconstruction model is then trained
using only unlabeled image data from the target domain, so that its encoder
adapts itself to extract high-level features that are useful both for encoding
target-domain images and for glaucoma classification. Experimental results
demonstrate that the proposed SATL strategy is effective in the domain
adaptation task between a private and two public glaucoma diagnosis datasets,
i.e., pri-RFG, REFUGE, and LAG. Moreover, the proposed strategy is completely
independent of the source domain data, which suits real-world deployment and
complies with privacy-protection policies.
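The SATL idea described in the abstract can be sketched with a deliberately minimal linear autoencoder. This is a toy stand-in for the paper's actual CNN encoder/decoder: the array sizes, learning rate, and training loop are illustrative assumptions, and the "pre-trained" source encoder is simulated with random weights rather than real classifier weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: flattened "images" of dimension d, latent dimension k.
d, k, n = 64, 8, 32
target_images = rng.random((n, d))         # unlabeled target-domain data

# Encoder weights, imagined here as copied from a source-domain classifier;
# in SATL this initialization carries over the pre-trained knowledge.
W_enc = rng.standard_normal((d, k)) * 0.1
W_dec = rng.standard_normal((k, d)) * 0.1  # freshly initialized decoder

def recon_loss(X):
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

loss_before = recon_loss(target_images)
lr = 0.05
for _ in range(200):
    # Manual gradients of the mean-squared reconstruction loss.
    Z = target_images @ W_enc
    err = (Z @ W_dec - target_images) * (2.0 / target_images.size)
    grad_dec = Z.T @ err
    grad_enc = target_images.T @ (err @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc  # the encoder self-adapts to the target domain

loss_after = recon_loss(target_images)
print(loss_after < loss_before)
```

Because only the reconstruction loss on target-domain images drives the updates, no source-domain data or target labels are touched, which is the property the abstract highlights.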
Related papers
- Advancing UWF-SLO Vessel Segmentation with Source-Free Active Domain Adaptation and a Novel Multi-Center Dataset [11.494899967255142]
Accurate vessel segmentation in UWF-SLO images is crucial for diagnosing retinal diseases.
Manually labeling high-resolution UWF-SLO images is an extremely challenging, time-consuming, and expensive task.
This study introduces a pioneering framework that leverages a patch-based active domain adaptation approach.
arXiv Detail & Related papers (2024-06-19T15:49:06Z)
- Learning to Adapt Foundation Model DINOv2 for Capsule Endoscopy Diagnosis [36.403320243871526]
We introduce a simplified approach called Adapt foundation models with a low-rank adaptation (LoRA) technique for easier customization.
Unlike traditional fine-tuning methods, our strategy includes LoRA layers designed to absorb specific surgical domain knowledge.
Our solution demonstrates that foundation models can be adeptly adapted for capsule endoscopy diagnosis.
arXiv Detail & Related papers (2024-06-15T05:21:33Z)
- Source-Free Domain Adaptation of Weakly-Supervised Object Localization Models for Histology [8.984366988153116]
Deep weakly supervised object localization (WSOL) models can be trained to classify histology images according to cancer grade.
A WSOL model initially trained on some labeled source image data can be adapted using unlabeled target data.
In this paper, we focus on source-free (unsupervised) domain adaptation (SFDA), a challenging problem where a pre-trained source model is adapted to a new target domain.
arXiv Detail & Related papers (2024-04-29T21:25:59Z)
- Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning [57.43322536718131]
We present a two-stage source-free domain adaptation (SFDA) framework for medical image segmentation.
In the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes.
Then, we introduce bi-directional transport to align the target features with the class prototypes by minimizing the expected transport cost.
arXiv Detail & Related papers (2023-07-19T06:07:12Z)
- Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology [12.828728138651266]
Development of computational pathology models is essential for reducing manual tissue typing from whole slide images.
We propose a practical setting that addresses these challenges jointly: source-free open-set domain adaptation.
Our methodology focuses on adapting a pre-trained source model to an unlabeled target dataset.
arXiv Detail & Related papers (2023-07-10T14:36:51Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic Image Classification [61.656149405657246]
Domain adaptation is effective in image classification tasks where obtaining sufficient label data is challenging.
We propose a novel method, named SELDA, for stacking ensemble learning via extending three domain adaptation methods.
The experimental results using Age-Related Eye Disease Study (AREDS) benchmark ophthalmic dataset demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-09-27T14:19:00Z)
- Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation [0.16490701092527607]
We propose an alternative deep learning training strategy based on self-supervised pretraining on unlabeled MRI scans.
Our pretraining approach first randomly applies different distortions to random areas of unlabeled images and then predicts the type of distortion and the loss of information.
The effectiveness of the proposed method for segmentation tasks in different pre-training and fine-tuning scenarios is evaluated.
arXiv Detail & Related papers (2022-07-17T13:28:52Z)
- Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for face recognition task in which the source and target domain do not share any classes.
Our method effectively learns discriminative target features by aligning the feature domains globally while distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z)
- Self-Supervised Domain Adaptation for Diabetic Retinopathy Grading using Vessel Image Reconstruction [61.58601145792065]
We learn invariant target-domain features by defining a novel self-supervised task based on retinal vessel image reconstructions.
Our approach outperforms existing domain adaptation strategies.
arXiv Detail & Related papers (2021-07-20T09:44:07Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build the domain irrelevant latent space image representation and demonstrate this method to outperform existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
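Several of the related papers above adapt a frozen pre-trained model rather than fine-tuning it end to end. As one concrete illustration, the low-rank adaptation (LoRA) technique mentioned in the DINOv2 entry can be sketched as follows; this is a minimal NumPy version with toy, illustrative dimensions, not that paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen weight of one layer in a pre-trained model (toy dimensions).
d_in, d_out, r = 16, 16, 4
W = rng.standard_normal((d_in, d_out))

# LoRA learns a low-rank update B @ A while W stays frozen; only the
# r * (d_in + d_out) adapter parameters are trained on the new domain.
B = rng.standard_normal((d_in, r)) * 0.01
A = np.zeros((r, d_out))  # zero init: the update starts as a no-op

def forward(x):
    return x @ W + x @ B @ A  # frozen path + trainable low-rank path

x = rng.standard_normal((2, d_in))
# Before any training (A == 0) the adapted layer matches the frozen one.
print(bool(np.allclose(forward(x), x @ W)))
```

With small rank r, the adapter holds far fewer parameters than W itself, which is what makes this style of customization cheap enough for the domain-specific fine-tuning these entries describe.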
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.