Concurrent Subsidiary Supervision for Unsupervised Source-Free Domain
Adaptation
- URL: http://arxiv.org/abs/2207.13247v1
- Date: Wed, 27 Jul 2022 02:25:09 GMT
- Title: Concurrent Subsidiary Supervision for Unsupervised Source-Free Domain
Adaptation
- Authors: Jogendra Nath Kundu, Suvaansh Bhambri, Akshay Kulkarni, Hiran Sarkar,
Varun Jampani, R. Venkatesh Babu
- Abstract summary: We develop a novel process of sticker intervention and cast sticker classification as a supervised subsidiary DA problem concurrent to the goal task unsupervised DA.
Our approach not only improves goal task adaptation performance, but also facilitates privacy-oriented source-free DA.
- Score: 58.431124236254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prime challenge in unsupervised domain adaptation (DA) is to mitigate the
domain shift between the source and target domains. Prior DA works show that
pretext tasks could be used to mitigate this domain shift by learning domain
invariant representations. However, in practice, we find that most existing
pretext tasks are ineffective against other established techniques. Thus, we
theoretically analyze how and when a subsidiary pretext task could be leveraged
to assist the goal task of a given DA problem and develop objective subsidiary
task suitability criteria. Based on these criteria, we devise a novel process of
sticker intervention and cast sticker classification as a supervised subsidiary
DA problem concurrent to the goal task unsupervised DA. Our approach not only
improves goal task adaptation performance, but also facilitates
privacy-oriented source-free DA, i.e., without concurrent source-target access.
Experiments on the standard Office-31, Office-Home, DomainNet, and VisDA
benchmarks demonstrate our superiority for both single-source and multi-source
source-free DA. Our approach also complements existing non-source-free works,
achieving leading performance.
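To make the idea of concurrent subsidiary supervision concrete, below is a minimal PyTorch-style sketch of how a sticker intervention and a subsidiary sticker classifier could sit alongside the goal task. It is not the authors' implementation: the sticker set, the fixed paste location, the shared-backbone layout, and the loss weight lam are illustrative assumptions, and the step shown uses source and target jointly for brevity.

```python
# A minimal sketch (not the authors' code) of sticker intervention with a
# concurrent subsidiary classifier. The sticker set, fixed paste location,
# shared backbone, and loss weight are illustrative assumptions. For brevity
# this step uses source and target jointly; a source-free setting would run
# the same subsidiary supervision in separate source and target phases.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_STICKER_CLASSES = 4  # assumed number of distinct sticker types


def apply_sticker(images, stickers):
    """Paste a randomly chosen sticker onto each image and return the
    modified images together with the self-generated sticker labels."""
    b, _, h, w = images.shape
    labels = torch.randint(NUM_STICKER_CLASSES, (b,), device=images.device)
    ph, pw = stickers.shape[-2:]                 # sticker patch size
    top, left = (h - ph) // 2, (w - pw) // 2     # fixed paste location (assumption)
    out = images.clone()
    out[:, :, top:top + ph, left:left + pw] = stickers[labels]
    return out, labels


class ConcurrentModel(nn.Module):
    """Shared backbone feeding a goal-task head and a subsidiary sticker head."""

    def __init__(self, backbone, feat_dim, num_goal_classes):
        super().__init__()
        self.backbone = backbone                 # assumed to output (B, feat_dim)
        self.goal_head = nn.Linear(feat_dim, num_goal_classes)
        self.sticker_head = nn.Linear(feat_dim, NUM_STICKER_CLASSES)

    def forward(self, x):
        feats = self.backbone(x)
        return self.goal_head(feats), self.sticker_head(feats)


def training_step(model, src_x, src_y, tgt_x, stickers, lam=0.3):
    """Goal-task loss is supervised on source only; the sticker loss is
    supervised on both domains because its labels come from the intervention."""
    src_xs, src_sl = apply_sticker(src_x, stickers)
    tgt_xs, tgt_sl = apply_sticker(tgt_x, stickers)

    goal_logits, src_stk_logits = model(src_xs)
    _, tgt_stk_logits = model(tgt_xs)

    goal_loss = F.cross_entropy(goal_logits, src_y)
    sticker_loss = (F.cross_entropy(src_stk_logits, src_sl)
                    + F.cross_entropy(tgt_stk_logits, tgt_sl))
    return goal_loss + lam * sticker_loss
```

Because the sticker labels are produced by the intervention itself, the subsidiary task remains fully supervised on both domains, which is what allows it to run concurrently with the unsupervised goal task and to be reused when source and target data are never available together.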
Related papers
- Prompt-based Distribution Alignment for Unsupervised Domain Adaptation [42.77798810726824]
We experimentally demonstrate that unsupervised-trained vision-language models (VLMs) can significantly reduce the distribution discrepancy between the source and target domains.
A major challenge for directly deploying such models on downstream UDA tasks is prompt engineering.
We propose a Prompt-based Distribution Alignment (PDA) method to incorporate the domain knowledge into prompt learning.
arXiv Detail & Related papers (2023-12-15T06:15:04Z)
- Continual Source-Free Unsupervised Domain Adaptation [37.060694803551534]
Existing Source-free Unsupervised Domain Adaptation approaches exhibit catastrophic forgetting.
We propose a Continual SUDA (C-SUDA) framework to cope with the challenge of SUDA in a continual learning setting.
arXiv Detail & Related papers (2023-04-14T20:11:05Z)
- Balancing Discriminability and Transferability for Source-Free Domain Adaptation [55.143687986324935]
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations.
The requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting.
We derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off.
arXiv Detail & Related papers (2022-06-16T09:06:22Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance in the amount of annotated data between the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Generalize then Adapt: Source-Free Domain Adaptive Semantic Segmentation [78.38321096371106]
Prior arts assume concurrent access to both labeled source and unlabeled target, making them unsuitable for scenarios demanding source-free adaptation.
In this work, we enable source-free DA by partitioning the task into two: a) source-only domain generalization and b) source-free target adaptation.
We introduce a novel conditional prior-enforcing auto-encoder that discourages spatial irregularities, thereby enhancing the pseudo-label quality.
arXiv Detail & Related papers (2021-08-25T14:18:59Z)
- Adapting Off-the-Shelf Source Segmenter for Target Medical Image Segmentation [12.703234995718372]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to an unlabeled and unseen target domain.
Access to the source domain data at the adaptation stage is often limited, due to data storage or privacy issues.
We propose to adapt an "off-the-shelf" segmentation model pre-trained in the source domain to the target domain.
arXiv Detail & Related papers (2021-06-23T16:16:55Z)
- Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z)
- Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z)