Single-Shot Domain Adaptation via Target-Aware Generative Augmentation
- URL: http://arxiv.org/abs/2210.16692v1
- Date: Sat, 29 Oct 2022 20:53:57 GMT
- Title: Single-Shot Domain Adaptation via Target-Aware Generative Augmentation
- Authors: Rakshith Subramanyam, Kowshik Thopalli, Spring Berman, Pavan Turaga,
Jayaraman J. Thiagarajan
- Abstract summary: We argue that augmentations utilized by existing methods are insufficient to handle large distribution shifts.
We propose SiSTA (Single-Shot Target Augmentations), which first fine-tunes a generative model from the source domain using a single-shot target.
We find that SiSTA produces improvements as high as 20% over existing baselines under challenging shifts in face attribute detection.
- Score: 21.17396588958938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of adapting models from a source domain using data from any
target domain of interest has gained prominence, thanks to the brittle
generalization in deep neural networks. While several test-time adaptation
techniques have emerged, they typically rely on synthetic data augmentations in
cases of limited target data availability. In this paper, we consider the
challenging setting of single-shot adaptation and explore the design of
augmentation strategies. We argue that augmentations utilized by existing
methods are insufficient to handle large distribution shifts, and hence propose
a new approach SiSTA (Single-Shot Target Augmentations), which first fine-tunes
a generative model from the source domain using a single-shot target, and then
employs novel sampling strategies for curating synthetic target data. Using
experiments with a state-of-the-art domain adaptation method, we find that
SiSTA produces improvements as high as 20% over existing baselines under
challenging shifts in face attribute detection, and that it performs
competitively to oracle models obtained by training on a larger target dataset.
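The abstract describes a two-step recipe: fine-tune a source-domain generative model on a single target example, then sample from the adapted generator to curate synthetic target data. The sketch below illustrates that shape of pipeline with a toy linear "generator" optimized by gradient descent; the generator, loss, and sampling scheme are illustrative placeholders, not the paper's StyleGAN-based method.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM_Z, DIM_X = 4, 8
W_source = rng.normal(size=(DIM_X, DIM_Z))   # stand-in for a pretrained source generator
target_shot = rng.normal(size=DIM_X)         # the single available target example

def generate(W, z):
    return W @ z

# Step 1: single-shot fine-tuning -- nudge the generator so that some latent
# reconstructs the one target sample (gradient descent on squared error,
# jointly over the weights W and the latent z).
W, z = W_source.copy(), rng.normal(size=DIM_Z)
for _ in range(500):
    err = generate(W, z) - target_shot
    W -= 0.01 * np.outer(err, z)
    z -= 0.01 * W.T @ err

# Step 2: sample latents in a neighborhood of z to curate a synthetic
# target set from the adapted generator.
synthetic = np.stack([generate(W, z + 0.1 * rng.normal(size=DIM_Z))
                      for _ in range(32)])

recon_err = float(np.linalg.norm(generate(W, z) - target_shot))
print(synthetic.shape, round(recon_err, 3))
```

The synthetic set would then feed a downstream source-free adaptation method, as the abstract's experiments do.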
Related papers
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present learn from the learnt (LFTL), a novel paradigm for SFADA to leverage the learnt knowledge from the source pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z)
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose emphStyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- Mitigate Domain Shift by Primary-Auxiliary Objectives Association for Generalizing Person ReID [39.98444065846305]
ReID models struggle in learning domain-invariant representation solely through training on an instance classification objective.
We introduce a method that guides model learning of the primary ReID instance classification objective by a concurrent auxiliary learning objective on weakly labeled pedestrian saliency detection.
Our model can be extended with the recent test-time diagram to form the PAOA+, which performs on-the-fly optimization against the auxiliary objective.
arXiv Detail & Related papers (2023-10-24T15:15:57Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
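The IDM abstract mentions an uncertainty-based selection criterion for identifying the most informative samples. A generic illustration of that idea, not the paper's exact procedure, is to score candidates by the entropy of the model's predictive distribution and keep the top-k most uncertain; the data and class count below are toy placeholders.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax.
    shifted = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

def predictive_entropy(logits):
    # Entropy of the per-sample predictive distribution; higher = more uncertain.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(1)
logits = rng.normal(size=(100, 19))   # e.g. 19 semantic classes, 100 candidates
scores = predictive_entropy(logits)

k = 10
selected = np.argsort(scores)[-k:]    # indices of the k most uncertain samples
print(selected.shape, float(scores[selected].min()))
```

Selecting high-entropy samples concentrates the one-shot adaptation budget where the model is least confident, which is the intuition the abstract points to.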
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Enhancing Visual Perception in Novel Environments via Incremental Data Augmentation Based on Style Transfer [2.516855334706386]
"unknown unknowns" challenge autonomous agent deployment in real-world scenarios.
Our approach enhances visual perception by leveraging a Variational Prototyping Encoder (VPE) to adeptly identify and handle novel inputs.
Our findings suggest the potential benefits of incorporating generative models for domain-specific augmentation strategies.
arXiv Detail & Related papers (2023-09-16T03:06:31Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
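A consistency-regularization objective of the kind this abstract describes typically penalizes disagreement between the model's predictions on two perturbed views of the same target sample. The KL-based loss below is a common generic formulation, not necessarily the paper's exact one; the logits are synthetic placeholders.

```python
import numpy as np

def softmax(logits):
    shifted = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    # Mean KL(p_a || p_b) over a batch of paired predictions.
    p, q = softmax(logits_a), softmax(logits_b)
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1).mean())

rng = np.random.default_rng(2)
logits = rng.normal(size=(16, 10))
noisy = logits + 0.05 * rng.normal(size=logits.shape)       # weakly perturbed view

small = consistency_loss(logits, noisy)                     # similar views -> small loss
large = consistency_loss(logits, rng.normal(size=(16, 10))) # unrelated views -> larger
print(small < large)
```

Minimizing such a loss during adaptation encourages predictions that are stable under perturbation, which is the generalization property the abstract targets.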
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Target-Aware Generative Augmentations for Single-Shot Adaptation [21.840653627684855]
We propose a new approach to adapting models from a source domain to a target domain.
SiSTA fine-tunes a generative model from the source domain using a single-shot target, and then employs novel sampling strategies for curating synthetic target data.
We find that SiSTA produces significantly improved generalization over existing baselines in face detection and multi-class object recognition.
arXiv Detail & Related papers (2023-05-22T17:46:26Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles in the feature-level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Unsupervised and self-adaptative techniques for cross-domain person re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it adapts feature learning from a model trained on a source domain to a target domain without identity-label annotation.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
arXiv Detail & Related papers (2021-03-21T23:58:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.