Domain Adaptive Diabetic Retinopathy Grading with Model Absence and Flowing Data
- URL: http://arxiv.org/abs/2412.01203v1
- Date: Mon, 02 Dec 2024 07:14:25 GMT
- Title: Domain Adaptive Diabetic Retinopathy Grading with Model Absence and Flowing Data
- Authors: Wenxin Su, Song Tang, Xiaofeng Liu, Xiaojing Yi, Mao Ye, Chunxiao Zu, Jiahao Li, Xiatian Zhu
- Abstract summary: Domain shift poses a significant challenge in clinical applications, e.g., Diabetic Retinopathy grading.
We propose a novel approach, Generative Unadversarial ExampleS (GUES), which enables adaptation from a data-centric perspective.
- Score: 45.75724873443564
- License:
- Abstract: Domain shift (the difference between source and target domains) poses a significant challenge in clinical applications, e.g., Diabetic Retinopathy (DR) grading. Although conventional transfer methods account for certain clinical requirements, such as source-data privacy, they remain predominantly model-centered and often struggle to prevent model-targeted attacks. In this paper, we address a challenging Online Model-aGnostic Domain Adaptation (OMG-DA) setting, driven by the demands of clinical environments. This setting is characterized by the absence of the model and the flow of target data. To tackle the new challenge, we propose a novel approach, Generative Unadversarial ExampleS (GUES), which enables adaptation from a data-centric perspective. Specifically, we first theoretically reformulate conventional perturbation optimization in a generative way: learning a perturbation-generation function with a latent input variable. For model instantiation, we leverage a Variational AutoEncoder to express this function: the encoder, with the reparameterization trick, predicts the latent input, whilst the decoder is responsible for the generation. Furthermore, the saliency map is selected as the pseudo-perturbation label, because it not only captures potential lesions but also theoretically provides an upper bound on the function input, enabling identification of the latent variable. Extensive comparative experiments on DR benchmarks with both frozen pre-trained models and trainable models demonstrate the superiority of GUES, showing robustness even with small batch sizes.
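To make the pipeline described in the abstract more concrete, below is a minimal PyTorch sketch of a perturbation-generating VAE in the spirit of GUES: the encoder predicts the latent input via the reparameterization trick, the decoder generates a bounded unadversarial perturbation, and a saliency map serves as the pseudo-perturbation label. The class name, network sizes, L1 reconstruction term, and KL weight are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a VAE-style perturbation generator (GUES-like).
# All names, architectures, losses, and hyperparameters are illustrative
# assumptions rather than the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationVAE(nn.Module):
    """Encoder predicts a latent input z; decoder generates a per-image perturbation."""
    def __init__(self, in_ch=3, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64, latent_dim)
        self.fc_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, in_ch, 3, padding=1), nn.Tanh(),  # bounded perturbation
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        delta = self.decoder(z)
        # Resize the generated perturbation to the input resolution
        delta = F.interpolate(delta, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return delta, mu, logvar

def gues_like_loss(delta, saliency, mu, logvar, beta=1e-3):
    """Regress the perturbation toward the saliency-map pseudo-label plus a KL term.

    `saliency` is a precomputed saliency-map pseudo-label with the same shape as
    `delta`; how it is obtained is left abstract here. L1 and the beta weight are
    hedged choices for illustration.
    """
    recon = F.l1_loss(delta, saliency)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

In use, the adapted input x + delta would be passed to whatever (possibly frozen) downstream DR grader is available; the generator itself never needs access to that model's parameters, which is the point of the model-agnostic setting.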
Related papers
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [88.65168366064061]
We introduce Discrete Denoising Posterior Prediction (DDPP), a novel framework that casts the task of steering pre-trained MDMs as a problem of probabilistic inference.
Our framework leads to a family of three novel objectives that are all simulation-free, and thus scalable.
We substantiate our designs via wet-lab validation, where we observe transient expression of reward-optimized protein sequences.
arXiv Detail & Related papers (2024-10-10T17:18:30Z)
- Reshaping the Online Data Buffering and Organizing Mechanism for Continual Test-Time Adaptation [49.53202761595912]
Continual Test-Time Adaptation involves adapting a pre-trained source model to continually changing unsupervised target domains.
We analyze the challenges of this task: online environment, unsupervised nature, and the risks of error accumulation and catastrophic forgetting.
We propose an uncertainty-aware buffering approach to identify and aggregate significant samples with high certainty from the unsupervised, single-pass data stream.
arXiv Detail & Related papers (2024-07-12T15:48:40Z)
- Adaptive Variance Thresholding: A Novel Approach to Improve Existing Deep Transfer Vision Models and Advance Automatic Knee-Joint Osteoarthritis Classification [0.11249583407496219]
Knee-Joint Osteoarthritis (KOA) is a prevalent cause of global disability and inherently complex to diagnose.
One promising classification avenue involves applying deep learning methods.
This study proposes a novel paradigm for improving post-training specialized classifiers.
arXiv Detail & Related papers (2023-11-10T00:17:07Z)
- Domain-invariant Clinical Representation Learning by Bridging Data Distribution Shift across EMR Datasets [28.59271580918754]
An effective prognostic model could assist physicians in making accurate diagnoses and designing personalized treatment plans.
However, limited data collection, insufficient clinical experience, and privacy and ethical concerns restrict data availability.
We present a domain-invariant representation learning method that constructs a transition model between source and target datasets.
arXiv Detail & Related papers (2023-10-11T18:32:21Z)
- Automatic diagnosis of knee osteoarthritis severity using Swin transformer [55.01037422579516]
Knee osteoarthritis (KOA) is a widespread condition that can cause chronic pain and stiffness in the knee joint.
We propose an automated approach that employs the Swin Transformer to predict the severity of KOA.
arXiv Detail & Related papers (2023-07-10T09:49:30Z)
- Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation [62.458968086881555]
Continuous Video Domain Adaptation (CVDA) is a scenario where a source model is required to adapt to a series of individually available changing target domains.
We propose a Confidence-Attentive network with geneRalization enhanced self-knowledge disTillation (CART) to address the challenge in CVDA.
arXiv Detail & Related papers (2023-03-18T16:40:10Z)
- Disentangling Generative Factors of Physical Fields Using Variational Autoencoders [0.0]
This work explores the use of variational autoencoders (VAEs) for non-linear dimension reduction.
A disentangled decomposition is interpretable and can be transferred to a variety of tasks including generative modeling.
arXiv Detail & Related papers (2021-09-15T16:02:43Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method introduces the variance error into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
- Teacher-Student Domain Adaptation for Biosensor Models [0.0]
We present an approach to domain adaptation, addressing the case where data from the source domain is abundant, labelled data from the target domain is limited or non-existent, and a small amount of paired source-target data is available.
The method is designed for developing deep learning models that detect the presence of medical conditions based on data from consumer-grade portable biosensors.
arXiv Detail & Related papers (2020-03-17T19:09:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.