COCA: Classifier-Oriented Calibration via Textual Prototype for
Source-Free Universal Domain Adaptation
- URL: http://arxiv.org/abs/2308.10450v2
- Date: Mon, 11 Mar 2024 10:45:29 GMT
- Authors: Xinghong Liu, Yi Zhou, Tao Zhou, Chun-Mei Feng, Ling Shao
- Abstract summary: Universal domain adaptation (UniDA) aims to address domain and category shifts across data sources.
SF-UniDA methods eliminate the need for direct access to source samples when performing adaptation to the target domain.
Existing SF-UniDA methods still require an extensive quantity of labeled source samples to train a source model.
We present a novel plug-and-play classifier-oriented calibration (COCA) method to tackle this issue.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Universal domain adaptation (UniDA) aims to address domain and category
shifts across data sources. Recently, due to more stringent data restrictions,
researchers have introduced source-free UniDA (SF-UniDA). SF-UniDA methods
eliminate the need for direct access to source samples when performing
adaptation to the target domain. However, existing SF-UniDA methods still
require an extensive quantity of labeled source samples to train a source
model, resulting in significant labeling costs. To tackle this issue, we
present a novel plug-and-play classifier-oriented calibration (COCA) method.
COCA, which exploits textual prototypes, is designed for the source models
based on few-shot learning with vision-language models (VLMs). It endows the
VLM-powered few-shot learners, which are built for closed-set classification,
with the unknown-aware ability to distinguish common and unknown classes in the
SF-UniDA scenario. Crucially, COCA is a new paradigm to tackle SF-UniDA
challenges based on VLMs, which focuses on classifier instead of image encoder
optimization. Experiments show that COCA outperforms state-of-the-art UniDA and
SF-UniDA models.
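As a rough illustration of the unknown-aware classification idea described in the abstract, the sketch below scores image features against L2-normalized textual prototypes (e.g. per-class text embeddings from a VLM) and rejects low-similarity samples as unknown. The cosine-plus-threshold rule, the `tau` value, and all names here are illustrative assumptions, not COCA's actual calibration procedure.

```python
import numpy as np

def build_text_prototypes(text_embeddings):
    """L2-normalize per-class text embeddings to serve as classifier prototypes."""
    return text_embeddings / np.linalg.norm(text_embeddings, axis=1, keepdims=True)

def classify_with_unknown(image_features, prototypes, tau=0.8):
    """Cosine-similarity classification with an unknown-rejection threshold.

    Returns class indices; -1 marks samples rejected as 'unknown' because
    their best prototype similarity falls below tau (an assumed heuristic).
    """
    feats = image_features / np.linalg.norm(image_features, axis=1, keepdims=True)
    sims = feats @ prototypes.T                # (N, C) cosine similarities
    preds = sims.argmax(axis=1)
    preds[sims.max(axis=1) < tau] = -1         # reject low-confidence samples
    return preds

# Toy demo with synthetic stand-ins for text prototypes and image features.
rng = np.random.default_rng(0)
protos = build_text_prototypes(rng.normal(size=(3, 64)))   # 3 known classes
known = protos[0] + 0.02 * rng.normal(size=(4, 64))        # features near class 0
unknown = rng.normal(size=(4, 64))                          # far from all classes
preds = classify_with_unknown(np.vstack([known, unknown]), protos)
print(preds)
```

The point of the sketch is that only the classifier head (prototypes plus a rejection rule) is involved; the image encoder is untouched, mirroring the paper's classifier-oriented focus.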
Related papers
- Recall and Refine: A Simple but Effective Source-free Open-set Domain Adaptation Framework
Open-set Domain Adaptation (OSDA) aims to adapt a model from a labeled source domain to an unlabeled target domain.
We propose Recall and Refine (RRDA), a novel SF-OSDA framework designed to address limitations by explicitly learning features for target-private unknown classes.
arXiv Detail & Related papers (2024-11-19T15:18:50Z)
- LEAD: Learning Decomposition for Source-free Universal Domain Adaptation
We propose a new idea of LEArning Decomposition, which decouples features into source-known and -unknown components to identify target-private data.
In the OPDA scenario on VisDA dataset, LEAD outperforms GLC by 3.5% overall H-score and reduces 75% time to derive pseudo-labeling decision boundaries.
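The decomposition idea in LEAD's summary can be sketched geometrically: assuming a set of source class prototypes is available, project each target feature onto their span (a source-"known" component) and treat the orthogonal residual as the source-"unknown" component, whose magnitude flags likely target-private samples. This is a minimal hypothetical sketch under those assumptions, not the paper's actual algorithm.

```python
import numpy as np

def decompose(features, prototypes):
    """Split each feature into its projection onto the span of the source
    prototypes (source-'known' part) and the orthogonal residual ('unknown' part)."""
    q, _ = np.linalg.qr(prototypes.T)          # orthonormal basis of the prototype subspace
    known = (features @ q) @ q.T               # projection onto span(prototypes)
    unknown = features - known                 # orthogonal residual
    return known, unknown

rng = np.random.default_rng(1)
protos = rng.normal(size=(3, 16))              # 3 source-class prototypes in R^16
in_span = rng.normal(size=(5, 3)) @ protos     # features explained by source classes
off_span = rng.normal(size=(5, 16))            # generic features with a residual part
_, res_in = decompose(in_span, protos)
_, res_off = decompose(off_span, protos)
scores_in = np.linalg.norm(res_in, axis=1)     # near 0: fits the source subspace
scores_off = np.linalg.norm(res_off, axis=1)   # large: candidate target-private data
```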
arXiv Detail & Related papers (2024-03-06T03:08:20Z)
- Unknown Sample Discovery for Source Free Open Set Domain Adaptation
Open Set Domain Adaptation (OSDA) aims to adapt a model trained on a source domain to a target domain that undergoes distribution shift.
We introduce Unknown Sample Discovery (USD) as an SF-OSDA method that utilizes a temporally ensembled teacher model to conduct known-unknown target sample separation.
arXiv Detail & Related papers (2023-12-05T20:07:51Z)
- Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning
We present a two-stage source-free domain adaptation (SFDA) framework for medical image segmentation.
In the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes.
Then, we introduce the bi-directional transport to align the target features with class prototypes by minimizing the expected transport cost.
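A hedged sketch of the general pattern this entry describes: the rows of a frozen source classifier's weight matrix serve as class prototypes, a softmax assignment stands in for the transport plan, and one minus cosine similarity is the unit cost. The temperature, cost, and plan here are illustrative assumptions; the paper's bi-directional transport formulation is more involved.

```python
import numpy as np

def expected_transport_cost(features, classifier_weights, temperature=0.1):
    """Expected cost of softly transporting target features to class prototypes
    taken from the (frozen) source classifier's weight rows."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = classifier_weights / np.linalg.norm(classifier_weights, axis=1, keepdims=True)
    sims = f @ p.T                                   # (N, C) cosine similarities
    cost = 1.0 - sims                                # transport cost per pair
    plan = np.exp(sims / temperature)
    plan /= plan.sum(axis=1, keepdims=True)          # soft assignment per sample
    return float((plan * cost).sum(axis=1).mean())   # objective to minimize

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 32))                         # classifier weights = prototypes
aligned = W[rng.integers(0, 4, size=64)] + 0.05 * rng.normal(size=(64, 32))
random_feats = rng.normal(size=(64, 32))
print(expected_transport_cost(aligned, W), expected_transport_cost(random_feats, W))
```

Features clustered around the prototypes yield a much lower expected cost than random features, which is why minimizing this quantity pulls target features toward the source class anchors.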
arXiv Detail & Related papers (2023-07-19T06:07:12Z)
- Spatio-Temporal Pixel-Level Contrastive Learning-based Source-Free Domain Adaptation for Video Semantic Segmentation
The Source-Free Domain Adaptation (SFDA) setup aims to adapt a source-trained model to the target domain without accessing source data.
A novel method that takes full advantage of spatio-temporal correlations is proposed to tackle the absence of source data.
Experiments show that PixelL achieves state-of-the-art performance on benchmarks compared to current UDA and SFDA approaches.
arXiv Detail & Related papers (2023-03-25T05:06:23Z)
- Upcycling Models under Domain and Category Shift
We introduce an innovative global and local clustering learning technique (GLC).
We design a novel, adaptive one-vs-all global clustering algorithm to distinguish between different target classes.
Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark.
arXiv Detail & Related papers (2023-03-13T13:44:04Z)
- Source-Free Domain Adaptation via Distribution Estimation
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- UMAD: Universal Model Adaptation under Domain and Category Shift
The Universal Model ADaptation (UMAD) framework handles both open-set and open-partial-set UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
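One simple way a known-unknown separation score like those mentioned for USD and UMAD could look: measure agreement between a student model's prediction and that of a (e.g. temporally ensembled) teacher, and flag low-agreement samples as likely unknown. The inner-product agreement measure and all values below are assumptions for illustration, not the scores defined in either paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_score(student_logits, teacher_logits):
    """Agreement between student and teacher predicted distributions:
    confident, consistent samples score near 1; low scores flag
    likely unknown-class samples."""
    ps = softmax(student_logits)
    pt = softmax(teacher_logits)
    return (ps * pt).sum(axis=1)

rng = np.random.default_rng(3)
# Known samples: both models confidently agree on class 0.
base = np.tile([6.0, 0.0, 0.0], (5, 1))
s_known = consistency_score(base + 0.1 * rng.normal(size=(5, 3)),
                            base + 0.1 * rng.normal(size=(5, 3)))
# Unknown samples: flat, noisy logits on which the two models disagree.
s_unknown = consistency_score(0.5 * rng.normal(size=(5, 3)),
                              0.5 * rng.normal(size=(5, 3)))
```

Thresholding such a score splits the target set into known and unknown partitions, which is the separation step both summaries describe.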
arXiv Detail & Related papers (2021-12-16T01:22:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.