Target-driven One-Shot Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2305.04628v2
- Date: Mon, 17 Jul 2023 10:35:00 GMT
- Title: Target-driven One-Shot Unsupervised Domain Adaptation
- Authors: Julio Ivan Davila Carrazco, Suvarna Kishorkumar Kadam, Pietro Morerio,
Alessio Del Bue, Vittorio Murino
- Abstract summary: One-Shot Unsupervised Domain Adaptation (OSUDA) aims to adapt to a target domain with only a single unlabeled target sample.
Unlike existing approaches that rely on large labeled source and unlabeled target data, our Target-driven One-Shot UDA approach employs a learnable augmentation strategy guided by the target sample's style.
Our method outperforms or performs comparably to existing OS-UDA methods on the Digits and DomainNet benchmarks.
- Score: 42.230519460503494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce a novel framework for the challenging problem of
One-Shot Unsupervised Domain Adaptation (OSUDA), which aims to adapt to a
target domain with only a single unlabeled target sample. Unlike existing
approaches that rely on large labeled source and unlabeled target data, our
Target-driven One-Shot UDA (TOS-UDA) approach employs a learnable augmentation
strategy guided by the target sample's style to align the source distribution
with the target distribution. Our method consists of three modules: an
augmentation module, a style alignment module, and a classifier. Unlike
existing methods, our augmentation module allows for strong transformations of
the source samples, and the style of the single target sample available is
exploited to guide the augmentation by ensuring perceptual similarity.
Furthermore, our approach integrates augmentation with style alignment,
eliminating the need for separate pre-training on additional datasets. Our
method outperforms or performs comparably to existing OS-UDA methods on the
Digits and DomainNet benchmarks.
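As a rough illustration of the three-module design described above, here is a minimal PyTorch-style sketch. All names (augmenter, encoder, classifier) are hypothetical, and a Gram-matrix distance over encoder features stands in for the perceptual style similarity; the paper's exact architectures and losses are not reproduced here.

```python
# Minimal sketch of a TOS-UDA-style training step (hypothetical names;
# not the authors' exact losses or architectures).
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    # Channel-wise Gram matrix of a (B, C, H, W) feature map: a common
    # stand-in for "style" statistics in perceptual losses.
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def train_step(augmenter, encoder, classifier, opt,
               src_images, src_labels, target_image, lam=1.0):
    """One update: augment source toward the target style, then classify.
    target_image is the single unlabeled target sample, shape (1, C, H, W)."""
    aug = augmenter(src_images)                      # learnable augmentation
    src_feats = encoder(aug)
    with torch.no_grad():
        tgt_feats = encoder(target_image)            # fixed style reference
    # Style alignment: perceptual similarity between the augmented source
    # batch and the one target sample (Gram distance is one possible choice).
    g_src = gram_matrix(src_feats)
    g_tgt = gram_matrix(tgt_feats).expand_as(g_src)
    style_loss = F.mse_loss(g_src, g_tgt)
    # Task loss: augmented source samples keep their source labels.
    logits = classifier(src_feats.mean(dim=(2, 3)))  # global average pooling
    cls_loss = F.cross_entropy(logits, src_labels)
    loss = cls_loss + lam * style_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```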
Related papers
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- Semi-supervised Domain Adaptation via Prototype-based Multi-level Learning [4.232614032390374]
In semi-supervised domain adaptation (SSDA), a few labeled target samples of each class help the model to transfer knowledge representation from the fully labeled source domain to the target domain.
We propose a Prototype-based Multi-level Learning (ProML) framework to better tap the potential of labeled target samples.
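As a generic illustration of the prototype idea (not ProML's exact multi-level scheme), class prototypes can be taken as mean embeddings of the few labeled target samples and used to pseudo-label unlabeled target data:

```python
# Illustrative sketch: class prototypes from labeled target samples and
# nearest-prototype pseudo-labeling. Not ProML's exact formulation.
import torch
import torch.nn.functional as F

def class_prototypes(feats, labels, num_classes):
    # feats: (N, D) embeddings of the labeled target samples; labels: (N,).
    # Assumes at least one labeled sample per class, as in the SSDA setting.
    protos = torch.stack([feats[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return F.normalize(protos, dim=1)

def pseudo_label(unlabeled_feats, protos):
    # Assign each unlabeled target sample to its nearest prototype.
    sims = F.normalize(unlabeled_feats, dim=1) @ protos.T
    return sims.argmax(dim=1)
```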
arXiv Detail & Related papers (2023-05-04T10:09:30Z)
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- Style Mixing and Patchwise Prototypical Matching for One-Shot Unsupervised Domain Adaptive Semantic Segmentation [21.01132797297286]
In one-shot unsupervised domain adaptation, segmentors only see one unlabeled target image during training.
We propose a new OSUDA method that relieves the heavy computational burden of style-transfer-based adaptation.
Our method achieves new state-of-the-art performance on two commonly used benchmarks for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2021-12-09T02:47:46Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles in the feature-level.
Our method produces state-of-the-art results on the C-Driving dataset.
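As a rough sketch of the patch-level style swap idea (a hypothetical implementation; the paper's exact CPSS procedure may differ), per-patch feature statistics can be exchanged AdaIN-style:

```python
# Sketch: swap per-patch mean/std statistics within a feature map to
# diversify patch styles. Hypothetical; not CPSS's exact procedure.
import torch

def patch_style_swap(feats, patch=8, eps=1e-5):
    # feats: (B, C, H, W) with H and W divisible by `patch`.
    b, c, h, w = feats.shape
    gh, gw = h // patch, w // patch
    x = feats.reshape(b, c, gh, patch, gw, patch).permute(0, 2, 4, 1, 3, 5)
    x = x.reshape(b, gh * gw, c, patch, patch)           # (B, P, C, p, p)
    mu = x.mean(dim=(3, 4), keepdim=True)
    sigma = x.std(dim=(3, 4), keepdim=True) + eps
    perm = torch.randperm(x.size(1))                     # shuffle patch styles
    x = (x - mu) / sigma * sigma[:, perm] + mu[:, perm]  # re-stylize patches
    x = x.reshape(b, gh, gw, c, patch, patch).permute(0, 3, 1, 4, 2, 5)
    return x.reshape(b, c, h, w)
```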
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much a few additional labeled target samples can help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
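A minimal sketch of the prototypical-alignment idea under strong perturbation, with hypothetical helpers (encoder, perturb) and a temperature-scaled similarity loss as one plausible choice:

```python
# Sketch: pull features of severely perturbed labeled images toward
# their class prototypes. Hypothetical names; not the paper's exact loss.
import torch.nn.functional as F

def pa_loss(encoder, perturb, images, labels, prototypes, temp=0.1):
    # prototypes: (K, D) class prototypes computed from the labeled
    # target samples ("landmarks"); images/labels: a labeled batch.
    feats = F.normalize(encoder(perturb(images)), dim=1)      # (N, D)
    logits = feats @ F.normalize(prototypes, dim=1).T / temp  # (N, K)
    # Cross-entropy toward the ground-truth class prototype.
    return F.cross_entropy(logits, labels)
```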
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective on MSDA, wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation [43.351728923472464]
One-Shot Unsupervised Domain Adaptation assumes that only one unlabeled target sample is available when learning to adapt.
Traditional adaptation approaches are prone to failure due to the scarcity of unlabeled target data.
We propose a novel Adversarial Style Mining approach, which combines a style transfer module and a task-specific module in an adversarial manner.
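A minimal sketch of that adversarial interplay (hypothetical module names; the paper's actual style mining is more elaborate): the style module ascends the task loss to mine harder styles, while the task network descends it.

```python
# Sketch of an adversarial style-mining step. Hypothetical names;
# not the authors' exact procedure.
import torch.nn.functional as F

def asm_step(styler, task_net, opt_style, opt_task, src_images, src_labels):
    # 1) Style module maximizes the task loss (mines harder stylizations).
    loss_style = -F.cross_entropy(task_net(styler(src_images)), src_labels)
    opt_style.zero_grad()
    loss_style.backward()
    opt_style.step()
    # 2) Task network minimizes the loss on freshly stylized samples.
    stylized = styler(src_images).detach()
    loss_task = F.cross_entropy(task_net(stylized), src_labels)
    opt_task.zero_grad()
    loss_task.backward()
    opt_task.step()
    return loss_task.item()
```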
arXiv Detail & Related papers (2020-04-13T16:18:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.