F^2TTA: Free-Form Test-Time Adaptation on Cross-Domain Medical Image Classification via Image-Level Disentangled Prompt Tuning
- URL: http://arxiv.org/abs/2507.02437v1
- Date: Thu, 03 Jul 2025 08:50:56 GMT
- Title: F^2TTA: Free-Form Test-Time Adaptation on Cross-Domain Medical Image Classification via Image-Level Disentangled Prompt Tuning
- Authors: Wei Li, Jingyang Zhang, Lihao Liu, Guoan Wang, Junjun He, Yang Chen, Lixu Gu
- Abstract summary: Test-Time Adaptation (TTA) has emerged as a promising solution for adapting a source model to unseen medical sites using unlabeled test data. This paper investigates a practical Free-Form Test-Time Adaptation (F$^{2}$TTA) task, where a source model is adapted to such free-form domain fragments.
- Score: 18.58261691911925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Test-Time Adaptation (TTA) has emerged as a promising solution for adapting a source model to unseen medical sites using unlabeled test data, due to the high cost of data annotation. Existing TTA methods consider scenarios where data from one or multiple domains arrives in complete domain units. However, in clinical practice, data usually arrives in domain fragments of arbitrary lengths and in random arrival orders, due to resource constraints and patient variability. This paper investigates a practical Free-Form Test-Time Adaptation (F$^{2}$TTA) task, where a source model is adapted to such free-form domain fragments, with shifts occurring between fragments unpredictably. In this setting, these shifts could distort the adaptation process. To address this problem, we propose a novel Image-level Disentangled Prompt Tuning (I-DiPT) framework. I-DiPT employs an image-invariant prompt to explore domain-invariant representations for mitigating the unpredictable shifts, and an image-specific prompt to adapt the source model to each test image from the incoming fragments. The prompts may suffer from insufficient knowledge representation since only one image is available for training. To overcome this limitation, we first introduce Uncertainty-oriented Masking (UoM), which encourages the prompts to extract sufficient information from the incoming image via masked consistency learning driven by the uncertainty of the source model representations. Then, we further propose a Parallel Graph Distillation (PGD) method that reuses knowledge from historical image-specific and image-invariant prompts through parallel graph networks. Experiments on breast cancer and glaucoma classification demonstrate the superiority of our method over existing TTA approaches in F$^{2}$TTA. Code is available at https://github.com/mar-cry/F2TTA.
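The sketch below illustrates the disentangled-prompt idea at a high level. It assumes a ViT-style backbone that accepts extra prompt tokens; the module and function names are illustrative, entropy minimization stands in for the paper's UoM and PGD objectives, and the authors' actual implementation lives in the linked repository.
```python
import torch
import torch.nn as nn

class DisentangledPrompts(nn.Module):
    def __init__(self, embed_dim: int, n_tokens: int = 4):
        super().__init__()
        # Image-invariant prompt: shared across all test images/fragments.
        self.invariant = nn.Parameter(torch.zeros(n_tokens, embed_dim))
        # Image-specific prompt: re-initialized for each incoming test image.
        self.specific = nn.Parameter(torch.zeros(n_tokens, embed_dim))

    def forward(self, patch_tokens):
        # patch_tokens: (B, N, D) token embeddings from the patch embedding
        b = patch_tokens.size(0)
        prompts = torch.cat([self.invariant, self.specific], dim=0)
        return torch.cat([prompts.unsqueeze(0).expand(b, -1, -1), patch_tokens], dim=1)

def adapt_on_image(backbone, prompts, image, lr=1e-3):
    # Entropy minimization is a stand-in for the paper's masked-consistency loss.
    opt = torch.optim.SGD(prompts.parameters(), lr=lr)
    logits = backbone(image, prompts)      # backbone consumes the prompt tokens
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
    opt.zero_grad()
    entropy.backward()
    opt.step()
    return backbone(image, prompts).argmax(dim=-1)
```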
Related papers
- AutoMiSeg: Automatic Medical Image Segmentation via Test-Time Adaptation of Foundation Models [7.382887784956608]
This paper introduces a zero-shot and automatic segmentation pipeline that combines vision-language and segmentation foundation models.
By proper decomposition and test-time adaptation, our fully automatic pipeline performs competitively with weakly-prompted interactive foundation models.
arXiv Detail & Related papers (2025-05-23T14:07:21Z) - Origin Identification for Text-Guided Image-to-Image Diffusion Models [39.234894330025114]
We propose origin IDentification for text-guided Image-to-image Diffusion models (ID$^{2}$).
A straightforward solution to ID$^{2}$ involves training a specialized deep embedding model to extract and compare features from both query and reference images.
To address the challenges of the proposed ID$^{2}$ task, we contribute the first dataset and a theoretically guaranteed method.
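A minimal sketch of the straightforward embedding baseline mentioned above (not the paper's theoretically guaranteed method); the encoder is assumed to be any pretrained image feature extractor.
```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def identify_origin(encoder, query_img, reference_imgs):
    # encoder: any image -> feature model (e.g., a CLIP image encoder)
    q = F.normalize(encoder(query_img.unsqueeze(0)), dim=-1)   # (1, d)
    refs = F.normalize(encoder(reference_imgs), dim=-1)        # (N, d)
    sims = refs @ q.squeeze(0)                                 # cosine similarities, (N,)
    return sims.argmax().item(), sims                          # best-matching reference
```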
arXiv Detail & Related papers (2025-01-04T20:34:53Z) - Diffusion-Enhanced Test-time Adaptation with Text and Image Augmentation [67.37146712877794]
IT3A is a novel test-time adaptation method that utilizes a pre-trained generative model for multi-modal augmentation of each test sample from unknown new domains.
By combining augmented data from pre-trained vision and language models, we enhance the ability of the model to adapt to unknown new test data.
In a zero-shot setting, IT3A outperforms state-of-the-art test-time prompt tuning methods with a 5.50% increase in accuracy.
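A rough sketch of the augmentation-averaging idea, with the pre-trained generative model hidden behind a hypothetical `augment` callable; following common test-time augmentation practice, only the most confident views are kept (a simplification, not IT3A's exact procedure).
```python
import torch

@torch.no_grad()
def predict_with_augmentations(model, image, augment, n_views=8, keep=0.5):
    # `augment` is a hypothetical callable wrapping the generative augmenter.
    views = torch.stack([image] + [augment(image) for _ in range(n_views)])
    probs = model(views).softmax(dim=-1)                       # (V+1, C)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)   # (V+1,)
    k = max(1, int(keep * len(probs)))
    idx = entropy.topk(k, largest=False).indices               # most confident views
    return probs[idx].mean(dim=0)                              # integrated prediction
```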
arXiv Detail & Related papers (2024-12-12T20:01:24Z) - PASS:Test-Time Prompting to Adapt Styles and Semantic Shapes in Medical Image Segmentation [25.419843931497965]
Test-time adaptation (TTA) has emerged as a promising paradigm to handle the domain shifts at test time for medical images.
We propose PASS (Prompting to Adapt Styles and Semantic shapes), which jointly learns two types of prompts.
We demonstrate the superior performance of PASS over state-of-the-art methods on multiple medical image segmentation datasets.
arXiv Detail & Related papers (2024-10-02T14:11:26Z) - Medical Image Segmentation with InTEnt: Integrated Entropy Weighting for Single Image Test-Time Adaptation [6.964589353845092]
Test-time adaptation (TTA) refers to adapting a trained model to a new domain during testing.
Here, we propose to adapt a medical image segmentation model with only a single unlabeled test image.
Our method, validated on 24 source/target domain splits across 3 medical image datasets, surpasses the leading method by 2.9% Dice coefficient on average.
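A condensed sketch of the entropy-weighted integration idea, assuming a hypothetical `predict_with_bn_mix(image, alpha)` helper that runs the model with batch-norm statistics interpolated between the source running statistics and those of the test image.
```python
import torch

@torch.no_grad()
def integrated_prediction(predict_with_bn_mix, image,
                          alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    # predict_with_bn_mix(image, alpha): model output under BN statistics
    # (1 - alpha) * source_stats + alpha * test_image_stats  (hypothetical helper)
    probs, weights = [], []
    for a in alphas:
        p = predict_with_bn_mix(image, a).softmax(dim=-1)
        h = -(p * p.clamp_min(1e-8).log()).sum()        # prediction entropy
        probs.append(p)
        weights.append(torch.exp(-h))                   # low entropy -> high weight
    w = torch.stack(weights)
    w = w / w.sum()
    return sum(wi * pi for wi, pi in zip(w, probs))     # entropy-weighted average
```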
arXiv Detail & Related papers (2024-02-14T22:26:07Z) - Each Test Image Deserves A Specific Prompt: Continual Test-Time Adaptation for 2D Medical Image Segmentation [14.71883381837561]
Cross-domain distribution shift is a significant obstacle to deploying the pre-trained semantic segmentation model in real-world applications.
Test-time adaptation has proven its effectiveness in tackling the cross-domain distribution shift during inference.
We propose the Visual Prompt-based Test-Time Adaptation (VPTTA) method to train a specific prompt for each test image to align the statistics in the batch normalization layers.
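A simplified sketch of the core VPTTA training signal: a learnable per-image prompt added to the input is optimized so that the batch-norm input statistics match the source running statistics. The paper's low-frequency prompt parameterization, warm-up, and memory bank are omitted, and the model is assumed to be in eval mode.
```python
import torch
import torch.nn as nn

def bn_alignment_loss(model, image, prompt):
    # Accumulates, for every BatchNorm2d layer, the distance between the
    # statistics of its input on (image + prompt) and its source running stats.
    losses = []
    def hook(module, inputs, output):
        x = inputs[0]
        mu = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        losses.append((mu - module.running_mean).abs().mean()
                      + (var - module.running_var).abs().mean())
    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
    model(image + prompt)          # prompt is a learnable tensor shaped like the image
    for h in handles:
        h.remove()
    return torch.stack(losses).sum()

# One adaptation step for the current test image:
# prompt = torch.zeros_like(image, requires_grad=True)
# opt = torch.optim.Adam([prompt], lr=1e-2)
# bn_alignment_loss(model, image, prompt).backward(); opt.step()
```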
arXiv Detail & Related papers (2023-11-30T09:03:47Z) - Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuned model.
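A bare-bones sketch of the masking idea: the paper selects patches via attention and refills them, but the essence is that masked variants of a training image keep its label and are mixed into fine-tuning. The helpers below are illustrative, not the paper's code.
```python
import torch

def mask_patches(images, patch=16, drop_ratio=0.5):
    # Zero out a random subset of non-overlapping patches per image.
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=images.device) > drop_ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask

def finetune_step(model, loss_fn, opt, images, labels):
    # Masked variants act as counterfactual samples sharing the original labels.
    logits = model(torch.cat([images, mask_patches(images)]))
    loss = loss_fn(logits, torch.cat([labels, labels]))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```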
arXiv Detail & Related papers (2023-03-06T11:51:28Z) - On-the-Fly Test-time Adaptation for Medical Image Segmentation [63.476899335138164]
Adapting the source model to the target data distribution at test time is an efficient solution to the data-shift problem.
We propose a new framework called Adaptive UNet where each convolutional block is equipped with an adaptive batch normalization layer.
During test-time, the model takes in just the new test image and generates a domain code to adapt the features of the source model according to the test data.
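A minimal sketch of an adaptive batch-normalization block, assuming a separate small network produces a per-image domain code; names and dimensions are illustrative.
```python
import torch
import torch.nn as nn

class AdaptiveBatchNorm2d(nn.Module):
    def __init__(self, channels: int, code_dim: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=False)
        self.to_gamma = nn.Linear(code_dim, channels)
        self.to_beta = nn.Linear(code_dim, channels)

    def forward(self, x: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # code: (B, code_dim), produced by a small domain-code generator
        gamma = self.to_gamma(code).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = self.to_beta(code).unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.bn(x) + beta   # code-modulated normalization
```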
arXiv Detail & Related papers (2022-03-10T18:51:29Z) - SITA: Single Image Test-time Adaptation [48.789568233682296]
In Test-time Adaptation (TTA), given a model trained on some source data, the goal is to adapt it to make better predictions for test instances from a different distribution.
We consider TTA in a more pragmatic setting which we refer to as SITA (Single Image Test-time Adaptation).
Here, when making each prediction, the model has access only to the given single test instance, rather than a batch of instances.
We propose a novel approach AugBN for the SITA setting that requires only forward propagation.
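An illustrative sketch of the AugBN idea under simplifying assumptions: batch statistics for the single test instance are approximated from a batch of label-preserving augmented views using forward passes only. AugBN additionally mixes these with the stored source statistics, which is omitted here.
```python
import torch

@torch.no_grad()
def predict_single_instance(model, image, augment, n_views=15):
    # Build a surrogate batch from label-preserving augmentations of the image.
    views = torch.stack([image] + [augment(image) for _ in range(n_views)])
    was_training = model.training
    model.train()              # BatchNorm layers now use the surrogate batch stats
    logits = model(views)
    model.train(was_training)
    return logits[0].argmax(dim=-1)   # prediction for the unaugmented view
```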
arXiv Detail & Related papers (2021-12-04T15:01:35Z) - A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
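A small sketch of the prototypical alignment computation: class prototypes are averaged from the features of the few labeled target samples (landmarks), and features are scored against them by temperature-scaled cosine similarity. It assumes every class has at least one landmark; names are illustrative.
```python
import torch
import torch.nn.functional as F

def class_prototypes(feats: torch.Tensor, labels: torch.Tensor, n_classes: int):
    # feats: (N, d) landmark features, labels: (N,) their class indices
    protos = torch.stack([feats[labels == c].mean(dim=0) for c in range(n_classes)])
    return F.normalize(protos, dim=-1)                        # (C, d)

def prototype_logits(feats: torch.Tensor, protos: torch.Tensor, tau: float = 0.1):
    # Temperature-scaled cosine similarity to each class prototype.
    return F.normalize(feats, dim=-1) @ protos.t() / tau      # (N, C)
```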
arXiv Detail & Related papers (2021-04-19T08:46:08Z)