Improve Unsupervised Domain Adaptation with Mixup Training
- URL: http://arxiv.org/abs/2001.00677v1
- Date: Fri, 3 Jan 2020 01:21:27 GMT
- Title: Improve Unsupervised Domain Adaptation with Mixup Training
- Authors: Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, Liu Ren
- Abstract summary: We study the problem of utilizing a relevant source domain with abundant labels to build predictive models for an unannotated target domain.
Recent work observes that the popular adversarial approach of learning domain-invariant features is insufficient to achieve desirable target domain performance.
We propose to enforce training constraints across domains using a mixup formulation to directly address the generalization performance for target data.
- Score: 18.329571222689562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation studies the problem of utilizing a relevant
source domain with abundant labels to build predictive models for an
unannotated target domain. Recent work observes that the popular adversarial
approach of learning domain-invariant features is insufficient to achieve
desirable target domain performance and thus introduces additional training
constraints, e.g. the cluster assumption. However, these approaches impose the
constraints on source and target domains individually, ignoring the important
interplay between them. In this work, we propose to enforce training
constraints across domains using a mixup formulation to directly address the
generalization performance for target data. In order to tackle potentially huge
domain discrepancy, we further propose a feature-level consistency regularizer
to facilitate the inter-domain constraint. When intra-domain mixup and domain
adversarial learning are added, our general framework significantly improves
state-of-the-art performance on several important tasks from both image
classification and human activity recognition.
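As a rough illustration of the inter-domain mixup and feature-level consistency ideas summarized in the abstract, here is a minimal PyTorch sketch. The function name, the pseudo-labeling of target samples via a plain softmax pass, and the equal weighting of the two losses are assumptions made for illustration; they are not taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def interdomain_mixup_step(encoder, classifier, x_src, y_src, x_tgt, alpha=0.2):
    """Illustrative training step: mix source and target inputs, supervise the
    mixed prediction with mixed (pseudo-)labels, and keep the feature of the
    mixed input consistent with the mixup of the individual features."""
    lam = Beta(alpha, alpha).sample().item()

    # Pseudo-labels for the unlabeled target batch (plain softmax here).
    with torch.no_grad():
        y_tgt = F.softmax(classifier(encoder(x_tgt)), dim=1)
    y_src_soft = F.one_hot(y_src, num_classes=y_tgt.size(1)).float()

    # Inter-domain mixup of inputs and labels.
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src_soft + (1.0 - lam) * y_tgt

    z_src, z_tgt, z_mix = encoder(x_src), encoder(x_tgt), encoder(x_mix)
    logits_mix = classifier(z_mix)

    # Cross-entropy against the soft mixed label.
    cls_loss = -(y_mix * F.log_softmax(logits_mix, dim=1)).sum(dim=1).mean()

    # Feature-level consistency regularizer: the feature of the mixed input
    # should match the mixup of the source and target features.
    consist_loss = F.mse_loss(z_mix, lam * z_src + (1.0 - lam) * z_tgt)

    return cls_loss + consist_loss

# Toy usage (shapes only). A full pipeline would also add the standard source
# cross-entropy, intra-domain mixup, and a domain-adversarial loss, as the
# abstract describes.
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
clf = torch.nn.Linear(64, 10)
x_s, y_s = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
x_t = torch.randn(16, 3, 32, 32)
loss = interdomain_mixup_step(enc, clf, x_s, y_s, x_t)
```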
Related papers
- Complementary Domain Adaptation and Generalization for Unsupervised Continual Domain Shift Learning [4.921899151930171]
Unsupervised continual domain shift learning is a significant challenge in real-world applications.
We propose Complementary Domain Adaptation and Generalization (CoDAG), a simple yet effective learning framework.
Our approach is model-agnostic, meaning that it is compatible with any existing domain adaptation and generalization algorithms.
arXiv Detail & Related papers (2023-03-28T09:05:15Z)
- AIR-DA: Adversarial Image Reconstruction for Unsupervised Domain Adaptive Object Detection [28.22783703278792]
We propose Adversarial Image Reconstruction (AIR) as a regularizer to facilitate the adversarial training of the feature extractor.
Our evaluations across several datasets of challenging domain shifts demonstrate that the proposed method outperforms all previous methods.
arXiv Detail & Related papers (2023-03-27T16:51:51Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains with deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within the estimated categories.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Unsupervised Cross-domain Image Classification by Distance Metric Guided Feature Alignment [11.74643883335152]
Unsupervised domain adaptation is a promising avenue which transfers knowledge from a source domain to a target domain.
We propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains.
Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain.
arXiv Detail & Related papers (2020-08-19T13:36:57Z)
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
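The Domain Conditioned Adaptation Network entry directly above mentions a domain conditioned channel attention mechanism. Below is a generic squeeze-and-excitation style sketch of what such a module could look like; the class name, the per-domain branch structure, and the reduction ratio are assumptions for illustration and do not reproduce DCAN's actual architecture.

```python
import torch
import torch.nn as nn

class DomainConditionedChannelAttention(nn.Module):
    """Generic channel attention whose gating is conditioned on which domain
    (source or target) a batch comes from; an illustrative construct only."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Separate excitation branches per domain (0 = source, 1 = target).
        self.excite = nn.ModuleList([
            nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )
            for _ in range(2)
        ])

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        b, c, _, _ = x.shape
        squeezed = self.pool(x).view(b, c)           # global channel statistics
        gate = self.excite[domain](squeezed).view(b, c, 1, 1)
        return x * gate                               # re-weight channels

# Usage: route source and target batches through their own gating branch.
attn = DomainConditionedChannelAttention(channels=256)
feat_src = torch.randn(8, 256, 14, 14)
feat_tgt = torch.randn(8, 256, 14, 14)
out_src = attn(feat_src, domain=0)
out_tgt = attn(feat_tgt, domain=1)
```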