Consistent Posterior Distributions under Vessel-Mixing: A Regularization
for Cross-Domain Retinal Artery/Vein Classification
- URL: http://arxiv.org/abs/2103.09097v1
- Date: Tue, 16 Mar 2021 14:18:35 GMT
- Title: Consistent Posterior Distributions under Vessel-Mixing: A Regularization
for Cross-Domain Retinal Artery/Vein Classification
- Authors: Chenxin Li, Yunlong Zhang, Zhehan Liang, Wenao Ma, Yue Huang, Xinghao
Ding
- Abstract summary: We propose a vessel-mixing based consistency regularization framework, for cross-domain learning in retinal A/V classification.
Our method achieves state-of-the-art cross-domain performance, which is also close to the upper bound obtained by fully supervised learning on the target domain.
- Score: 30.30848090813239
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Retinal artery/vein (A/V) classification is a critical technique for
diagnosing diabetes and cardiovascular diseases. Although deep learning based
methods achieve impressive results in A/V classification, their performance
usually degrades severely when directly applied to another database, due
to the domain shift, e.g., caused by the variations in imaging protocols. In
this paper, we propose a novel vessel-mixing based consistency regularization
framework for cross-domain learning in retinal A/V classification. Specifically,
to alleviate the severe bias toward the source domain, and based on the label
smoothness prior, the model is regularized to give consistent predictions for
unlabeled target-domain inputs under perturbation. This consistency
regularization implicitly sets the model and the perturbation against each
other: the model is pushed to be robust enough to cope with the perturbation.
Thus, we design a more challenging perturbation, tailored to the retinal A/V
scenario, to further strengthen the model's robustness, called the
vessel-mixing perturbation. Specifically, it disturbs the fundus images,
especially the vessel structures, by mixing two images regionally.
We conduct extensive experiments on cross-domain A/V classification using four
public datasets, which are collected by diverse institutions and imaging
devices. The results demonstrate that our method achieves state-of-the-art
cross-domain performance, which is also close to the upper bound obtained by
fully supervised learning on the target domain.
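As a rough illustration of the idea in the abstract, the sketch below shows one plausible way to implement a vessel-mixing perturbation (a coarse regional mix of two target-domain fundus images) together with a consistency term that pushes the posterior of the mixed image toward the correspondingly mixed posteriors of the two originals. The grid size, the MSE consistency loss, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def vessel_mixing(x_a, x_b, grid=4, mix_prob=0.5):
    """Regionally mix two batches of fundus images (illustrative assumption).

    The image plane is split into a coarse grid; each cell is drawn from
    either x_a or x_b at random, which breaks the continuity of vessel
    structures across the mixed image.
    """
    _, _, h, w = x_a.shape
    cells = (torch.rand(grid, grid, device=x_a.device) < mix_prob).float()
    # Upsample the coarse cell mask to full image resolution (hard 0/1 mask).
    mask = F.interpolate(cells[None, None], size=(h, w), mode="nearest")
    return mask * x_a + (1.0 - mask) * x_b, mask

def vessel_mixing_consistency(model, x_a, x_b):
    """Consistency term on unlabeled target-domain images (sketch).

    The posteriors of the two clean images are mixed with the same regional
    mask and compared against the posterior predicted for the mixed input.
    """
    with torch.no_grad():
        p_a = torch.softmax(model(x_a), dim=1)  # (B, classes, H, W)
        p_b = torch.softmax(model(x_b), dim=1)
    x_mix, mask = vessel_mixing(x_a, x_b)
    p_mix = torch.softmax(model(x_mix), dim=1)
    target = mask * p_a + (1.0 - mask) * p_b
    return F.mse_loss(p_mix, target)

# Hypothetical usage alongside a supervised loss on labeled source data:
# total_loss = supervised_loss + lam * vessel_mixing_consistency(model, t1, t2)
```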
Related papers
- Generalizing to Unseen Domains in Diabetic Retinopathy with Disentangled Representations [32.7667209371645]
Existing models experience notable performance degradation on unseen domains due to domain shifts.
We propose a novel framework where representations of paired data from different domains are decoupled into semantic features and domain noise.
The resulting augmented representation comprises original retinal semantics and domain noise from other domains, aiming to generate enhanced representations aligned with real-world clinical needs.
arXiv Detail & Related papers (2024-06-10T15:43:56Z) - Enhancing AI Diagnostics: Autonomous Lesion Masking via Semi-Supervised Deep Learning [1.4053129774629076]
This study presents an unsupervised domain adaptation method aimed at autonomously generating image masks outlining regions of interest (ROIs) for differentiating breast lesions in breast ultrasound (US) imaging.
Our semi-supervised learning approach utilizes a primitive model trained on a small public breast US dataset with true annotations.
This model is then iteratively refined for the domain adaptation task, generating pseudo-masks for our private, unannotated breast US dataset.
arXiv Detail & Related papers (2024-04-18T18:25:00Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z) - Unsupervised Domain Adaptation for Low-dose CT Reconstruction via Bayesian Uncertainty Alignment [32.632944734192435]
Low-dose computed tomography (LDCT) image reconstruction techniques can reduce patient radiation exposure while maintaining acceptable imaging quality.
Deep learning is widely used for this problem, but performance on testing data often degrades in clinical scenarios.
Unsupervised domain adaptation (UDA) of LDCT reconstruction has been proposed to solve this problem through distribution alignment.
arXiv Detail & Related papers (2023-02-26T07:10:09Z) - Improving Mitosis Detection Via UNet-based Adversarial Domain
Homogenizer [1.7298084639157258]
This paper proposes a domain homogenizer for mitosis detection that attempts to alleviate domain differences in histology images via adversarial reconstruction of input images.
We demonstrate our domain homogenizer's effectiveness by observing the reduction in domain differences between the preprocessed images.
Using this homogenizer, along with a subsequent retina-net object detector, we were able to outperform the baselines of the 2021 MIDOG challenge in terms of average precision of the detected mitotic figures.
arXiv Detail & Related papers (2022-09-15T11:15:57Z) - Unsupervised Domain Adaptation Using Feature Disentanglement And GCNs
For Medical Image Classification [5.6512908295414]
We propose an unsupervised domain adaptation approach that uses graph neural networks and disentangled semantic and domain-invariant structural features.
We test the proposed method for classification on two challenging medical image datasets with distribution shifts.
Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
arXiv Detail & Related papers (2022-06-27T09:02:16Z) - Cross-Site Severity Assessment of COVID-19 from CT Images via Domain
Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images greatly helps estimate intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z) - Self-Supervised Domain Adaptation for Diabetic Retinopathy Grading using
Vessel Image Reconstruction [61.58601145792065]
We learn invariant target-domain features by defining a novel self-supervised task based on retinal vessel image reconstructions.
We show that our approach outperforms existing domain adaptation strategies.
arXiv Detail & Related papers (2021-07-20T09:44:07Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin-preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.