ConDA: Continual Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2103.11056v1
- Date: Fri, 19 Mar 2021 23:20:41 GMT
- Title: ConDA: Continual Unsupervised Domain Adaptation
- Authors: Abu Md Niamul Taufique, Chowdhury Sadman Jahan, Andreas Savakis
- Abstract summary: Domain Adaptation (DA) techniques are important for overcoming the domain shift between the source domain used for training and the target domain where testing takes place.
Current DA methods assume that the entire target domain is available during adaptation, which may not hold in practice.
This paper considers a more realistic scenario, where target data become available in smaller batches and adaptation on the entire target domain is not feasible.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain Adaptation (DA) techniques are important for overcoming the domain
shift between the source domain used for training and the target domain where
testing takes place. However, current DA methods assume that the entire target
domain is available during adaptation, which may not hold in practice. This
paper considers a more realistic scenario, where target data become available
in smaller batches and adaptation on the entire target domain is not feasible.
In our work, we introduce a new, data-constrained DA paradigm where unlabeled
target samples are received in batches and adaptation is performed continually.
We propose a novel source-free method for continual unsupervised domain
adaptation that utilizes a buffer for selective replay of previously seen
samples. In our continual DA framework, we selectively mix samples from
incoming batches with data stored in a buffer using buffer management
strategies and use the combination to incrementally update our model. We
evaluate the classification performance of our continual DA approach against
state-of-the-art DA methods that adapt on the entire target domain. Our results on
three popular DA datasets demonstrate that our method outperforms many existing
state-of-the-art DA methods with access to the entire target domain during
adaptation.
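The buffer-and-replay loop described in the abstract can be illustrated compactly. Below is a minimal sketch, assuming a reservoir-sampling buffer policy and an entropy-minimization/diversity objective as stand-ins for the paper's actual buffer-management strategies and adaptation loss; the model, dimensions, and data are toy placeholders, not the authors' implementation.

```python
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


class ReplayBuffer:
    """Fixed-capacity buffer of previously seen target samples.

    Reservoir sampling is used here as one simple buffer-management policy;
    the paper studies dedicated strategies, so treat this as a placeholder.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []
        self.num_seen = 0

    def add(self, batch):
        for x in batch:                       # x: one unlabeled target sample
            self.num_seen += 1
            if len(self.samples) < self.capacity:
                self.samples.append(x.detach())
            else:
                j = random.randrange(self.num_seen)
                if j < self.capacity:
                    self.samples[j] = x.detach()

    def sample(self, n):
        n = min(n, len(self.samples))
        return torch.stack(random.sample(self.samples, n)) if n > 0 else None


def adapt_on_batch(model, optimizer, incoming, buffer, replay_size=32):
    """One continual-adaptation step: mix the incoming target batch with
    replayed samples and update the source-pretrained model source-free."""
    replay = buffer.sample(replay_size)
    batch = incoming if replay is None else torch.cat([incoming, replay], dim=0)

    probs = F.softmax(model(batch), dim=1)

    # Entropy minimization plus a diversity term, a common source-free
    # objective used here as a stand-in for the paper's adaptation loss.
    ent = -(probs * torch.log(probs + 1e-6)).sum(dim=1).mean()
    mean_p = probs.mean(dim=0)
    div = (mean_p * torch.log(mean_p + 1e-6)).sum()
    loss = ent + div

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    buffer.add(incoming)                      # store (part of) the new batch
    return loss.item()


if __name__ == "__main__":
    # Toy stand-ins: a small classifier and random "target" feature batches.
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    buffer = ReplayBuffer(capacity=256)

    for step in range(5):                     # target data arrive batch by batch
        incoming = torch.randn(32, 64)
        loss = adapt_on_batch(model, optimizer, incoming, buffer)
        print(f"batch {step}: adaptation loss = {loss:.4f}")
```

The key design point is that each update only ever touches the current incoming batch plus a bounded buffer, so memory stays constant no matter how much target data has streamed past, and no source data is needed during adaptation.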
Related papers
- Stratified Domain Adaptation: A Progressive Self-Training Approach for Scene Text Recognition [1.2878987353423252]
Unsupervised domain adaptation (UDA) has become increasingly prevalent in scene text recognition (STR).
We introduce the Stratified Domain Adaptation (StrDA) approach, which examines the gradual escalation of the domain gap for the learning process.
We propose a novel method for employing domain discriminators to estimate the out-of-distribution and domain discriminative levels of data samples.
arXiv Detail & Related papers (2024-10-13T16:40:48Z)
- SIDE: Self-supervised Intermediate Domain Exploration for Source-free Domain Adaptation [36.470026809824674]
Domain adaptation aims to alleviate the domain shift when transferring the knowledge learned from the source domain to the target domain.
Due to privacy issues, source-free domain adaptation (SFDA) has recently been in high demand, yet it remains challenging.
This paper proposes self-supervised intermediate domain exploration (SIDE) that effectively bridges the domain gap with an intermediate domain.
arXiv Detail & Related papers (2023-10-13T07:50:37Z)
- Transcending Domains through Text-to-Image Diffusion: A Source-Free Approach to Domain Adaptation [6.649910168731417]
Domain Adaptation (DA) is a method for enhancing a model's performance on a target domain with inadequate annotated data.
We propose a novel framework for SFDA that generates source data using a text-to-image diffusion model trained on the target domain samples.
arXiv Detail & Related papers (2023-10-02T23:38:17Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify unknown target classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process (a generic sketch of such a loss appears after this list).
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Self-Domain Adaptation for Face Anti-Spoofing [31.441928816043536]
We propose a self-domain adaptation framework to leverage the unlabeled test domain data at inference.
A meta-learning based adaptor learning algorithm is proposed that uses data from multiple source domains at the training step.
arXiv Detail & Related papers (2021-02-24T08:46:39Z)
- Casting a BAIT for Offline and Online Source-free Domain Adaptation [51.161476418834766]
We address the source-free domain adaptation (SFDA) problem, where only the source model is available during adaptation to the target domain.
Inspired by diverse classifier based domain adaptation methods, in this paper we introduce a second classifier.
When adapting to the target domain, the additional classifier from the source is expected to find misclassified features.
Our method surpasses other SFDA methods by a large margin under the online source-free domain adaptation setting.
arXiv Detail & Related papers (2020-10-23T14:18:42Z)
- Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z)
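One mechanism mentioned in the list above, the multi-sample contrastive loss that ILA-DA uses to align similar and dissimilar samples across domains, can be sketched generically. The snippet below is a hedged, supervised-contrastive-style loss assuming cosine-similarity features and a precomputed positive mask; it illustrates the general idea, not the ILA-DA formulation itself.

```python
import torch
import torch.nn.functional as F


def multi_sample_contrastive_loss(features, pos_mask, temperature=0.1):
    """Supervised-contrastive-style loss over a mixed source/target batch.

    features: (N, D) embeddings of source and target samples.
    pos_mask: (N, N) bool, True where sample j is "similar" to sample i
              (e.g., shares a class or pseudo-label across domains).
    """
    z = F.normalize(features, dim=1)                     # cosine-similarity space
    logits = z @ z.t() / temperature                     # pairwise similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)

    # Denominator sums over all pairs except self-pairs.
    exp_logits = torch.exp(logits) * (~eye).float()
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-12)

    pos = pos_mask & ~eye                                # positives, excluding self
    n_pos = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos.float()).sum(dim=1) / n_pos  # average over positives
    return loss.mean()
```

In a DA setting, the positive mask would typically link source and target samples that share a (pseudo-)label, so minimizing the loss pulls cross-domain features of the same class together while pushing dissimilar samples apart.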
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.