Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2004.06042v1
- Date: Mon, 13 Apr 2020 16:18:46 GMT
- Title: Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation
- Authors: Yawei Luo, Ping Liu, Tao Guan, Junqing Yu, Yi Yang
- Abstract summary: One-Shot Unsupervised Domain Adaptation assumes that only one unlabeled target sample is available when learning to adapt.
Traditional adaptation approaches are prone to failure due to the scarcity of unlabeled target data.
We propose a novel Adversarial Style Mining approach, which combines a style transfer module and a task-specific module in an adversarial manner.
- Score: 43.351728923472464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We aim at the problem named One-Shot Unsupervised Domain Adaptation. Unlike
traditional Unsupervised Domain Adaptation, it assumes that only one unlabeled
target sample is available when learning to adapt. This setting is
realistic but more challenging: conventional adaptation approaches are
prone to failure due to the scarcity of unlabeled target data. To this end, we
propose a novel Adversarial Style Mining (ASM) approach, which combines a style
transfer module and a task-specific module in an adversarial manner.
Specifically, the style transfer module iteratively searches for harder
stylized images around the one-shot target sample according to the current
learning state, leading the task model to explore the potential styles that are
difficult to solve in the almost unseen target domain, thus boosting the
adaptation performance in a data-scarce scenario. The adversarial learning
framework makes the style transfer module and task-specific module benefit each
other during the competition. Extensive experiments on both cross-domain
classification and segmentation benchmarks verify that ASM achieves
state-of-the-art adaptation performance under the challenging one-shot setting.
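The adversarial loop described in the abstract can be sketched in miniature: a style module searches for feature statistics near the one-shot target that maximize the task loss, while the task model descends on each freshly mined hard style. Everything below (the toy features, the regression loss standing in for the task objective, and all step sizes) is a hypothetical illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stylize(x, mean, std):
    """AdaIN-style transfer: renormalize features x to the given channel statistics."""
    x_norm = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-6)
    return x_norm * std + mean

def task_loss(x, w):
    """Toy regression loss standing in for the real task objective."""
    return float(np.mean((x @ w - 1.0) ** 2))

# the one-shot target sample anchors the style search
target_feat = rng.normal(loc=2.0, scale=0.5, size=(1, 4))
anchor_mean, anchor_std = target_feat.mean(axis=0), np.full(4, 0.5)

w = np.zeros(4)                      # toy task model parameters
source = rng.normal(size=(16, 4))    # labeled source features
style_mean = anchor_mean.copy()

for step in range(50):
    # adversarial step: among small perturbations, keep the style that hurts most
    candidates = [style_mean + rng.normal(scale=0.1, size=4) for _ in range(8)]
    style_mean = max(candidates, key=lambda m: task_loss(stylize(source, m, anchor_std), w))
    # stay near the one-shot target style (ASM's search is likewise anchored there)
    style_mean = anchor_mean + np.clip(style_mean - anchor_mean, -0.5, 0.5)
    # task step: gradient descent on the newly mined hard style
    x = stylize(source, style_mean, anchor_std)
    w -= 0.05 * (2 * x.T @ (x @ w - 1.0) / len(x))
```

Because the two updates alternate, the task model only ever trains on styles it currently finds difficult, which is the "benefit each other during the competition" dynamic the abstract describes.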
Related papers
- OSSA: Unsupervised One-Shot Style Adaptation [41.71187047855695]
We introduce One-Shot Style Adaptation (OSSA), a novel unsupervised domain adaptation method for object detection.
OSSA generates diverse target styles by perturbing the style statistics derived from a single target image.
We show that OSSA establishes a new state-of-the-art among one-shot domain adaptation methods by a significant margin.
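OSSA's core idea, generating diverse styles by perturbing the statistics of a single target image, can be sketched as follows; the feature shapes, perturbation scale, and AdaIN helper are illustrative assumptions, not OSSA's actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

def channel_stats(feat):
    """Per-channel mean/std over spatial dims; feat has shape (C, H, W)."""
    return feat.mean(axis=(1, 2)), feat.std(axis=(1, 2))

def adain(content, mean, std):
    """Renormalize content features (C, H, W) to the target statistics."""
    c_mean, c_std = channel_stats(content)
    normed = (content - c_mean[:, None, None]) / (c_std[:, None, None] + 1e-6)
    return normed * std[:, None, None] + mean[:, None, None]

# features of the single target image define the base style
target_feat = rng.normal(loc=1.0, scale=2.0, size=(3, 8, 8))
mu, sigma = channel_stats(target_feat)

# diverse styles via Gaussian perturbation of the statistics
# (the 0.2 scale is a hypothetical hyper-parameter)
styles = [(mu + rng.normal(scale=0.2, size=3),
           np.abs(sigma + rng.normal(scale=0.2, size=3))) for _ in range(5)]

source_feat = rng.normal(size=(3, 8, 8))
stylized = [adain(source_feat, m, s) for m, s in styles]
```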
arXiv Detail & Related papers (2024-10-01T17:43:57Z)
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- Fast One-Stage Unsupervised Domain Adaptive Person Search [17.164485293539833]
Unsupervised person search aims to localize a particular target person from a gallery set of scene images without annotations.
We propose a Fast One-stage Unsupervised person Search (FOUS) which integrates complementary domain adaptation with label adaptation.
FOUS can achieve the state-of-the-art (SOTA) performance on two benchmark datasets, CUHK-SYSU and PRW.
arXiv Detail & Related papers (2024-05-05T07:15:47Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
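An uncertainty-based selection criterion of the kind IDM describes can be illustrated with predictive entropy: rank the unlabeled target pool by the entropy of the model's softmax output and keep the most uncertain samples. The pool size, class count, and random predictions below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(7)

def entropy(probs, eps=1e-12):
    """Per-sample predictive entropy; probs has shape (N, K)."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

# hypothetical softmax predictions over a pool of unlabeled target samples
logits = rng.normal(size=(100, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# select the k most uncertain (highest-entropy) samples for adaptation
k = 10
scores = entropy(probs)
selected = np.argsort(scores)[-k:]
```

Training on only the selected subset is what lets such a criterion "facilitate quick adaptation and reduce redundant training."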
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Target-driven One-Shot Unsupervised Domain Adaptation [42.230519460503494]
One-Shot Unsupervised Domain Adaptation (OSUDA) aims to adapt to a target domain with only a single unlabeled target sample.
Unlike existing approaches that rely on large labeled source and unlabeled target data, our Target-driven One-Shot UDA approach employs a learnable augmentation strategy guided by the target sample's style.
Our method outperforms or performs comparably to existing OS-UDA methods on the Digits and DomainNet benchmarks.
arXiv Detail & Related papers (2023-05-08T11:10:25Z)
- Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with Point Supervision via Active Selection [81.703478548177]
Training models dedicated to semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interactions in this process under the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z)
- Style Mixing and Patchwise Prototypical Matching for One-Shot Unsupervised Domain Adaptive Semantic Segmentation [21.01132797297286]
In one-shot unsupervised domain adaptation, segmentors only see one unlabeled target image during training.
We propose a new OSUDA method that can effectively relieve such computational burden.
Our method achieves new state-of-the-art performance on two commonly used benchmarks for domain adaptive semantic segmentation.
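Style mixing in this line of work is commonly implemented by blending channel-wise feature statistics between a source image and the one target image; the sketch below is a generic illustration under that assumption, not this paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

def mix_styles(src_feat, tgt_feat, lam):
    """Blend source/target channel statistics; feats have shape (C, H, W)."""
    s_mu, s_sig = src_feat.mean(axis=(1, 2)), src_feat.std(axis=(1, 2))
    t_mu, t_sig = tgt_feat.mean(axis=(1, 2)), tgt_feat.std(axis=(1, 2))
    mu = lam * t_mu + (1 - lam) * s_mu          # interpolated style mean
    sig = lam * t_sig + (1 - lam) * s_sig       # interpolated style std
    normed = (src_feat - s_mu[:, None, None]) / (s_sig[:, None, None] + 1e-6)
    return normed * sig[:, None, None] + mu[:, None, None]

src = rng.normal(size=(3, 4, 4))
tgt = rng.normal(loc=2.0, size=(3, 4, 4))
mixed = mix_styles(src, tgt, lam=0.5)   # halfway between the two styles
```

Sampling `lam` at random for each training batch yields a continuum of intermediate styles from the single target image.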
arXiv Detail & Related papers (2021-12-09T02:47:46Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Unsupervised and self-adaptative techniques for cross-domain person re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it performs feature-learning adaptation from a model trained on a source to a target domain without identity-label annotation.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
arXiv Detail & Related papers (2021-03-21T23:58:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.