Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency
- URL: http://arxiv.org/abs/2101.12727v1
- Date: Fri, 29 Jan 2021 18:40:17 GMT
- Title: Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency
- Authors: Samarth Mishra, Kate Saenko, Venkatesh Saligrama
- Abstract summary: Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can learn a good target classifier without any adversarial alignment.
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
- Score: 93.89773386634717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain. A range of prior work uses adversarial domain alignment to try to learn a domain-invariant feature space in which a good source classifier can perform well on target data. This, however, can lead to errors where class A features in the target domain get aligned to class B features in the source. We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective at learning a good target classifier, without any adversarial alignment. Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets. Notably, it outperforms all recent approaches by 3-5% on the large and challenging DomainNet benchmark, showing the strength of these simple techniques in fixing errors made by adversarial alignment.
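For concreteness, below is a minimal PyTorch sketch of the two techniques the abstract names: self-supervised pretraining via rotation prediction, and FixMatch-style consistency regularization on unlabeled target images. This is an illustrative reconstruction, not the authors' released code; the `encoder`, `rot_head`, and `model` callables, the 0.95 confidence threshold, and the weak/strong augmentation pairing are all assumptions.

```python
import torch
import torch.nn.functional as F

def rotation_pretrain_loss(encoder, rot_head, images):
    """Self-supervision via rotation prediction: rotate each image by
    0/90/180/270 degrees and train a 4-way head to predict the angle."""
    rotated, targets = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        targets.append(torch.full((images.size(0),), k,
                                  dtype=torch.long, device=images.device))
    x = torch.cat(rotated)            # (4B, C, H, W) all four rotations
    y = torch.cat(targets)            # (4B,) rotation labels in {0,1,2,3}
    logits = rot_head(encoder(x))     # 4-way rotation logits
    return F.cross_entropy(logits, y)

def consistency_loss(model, weak_aug, strong_aug, threshold=0.95):
    """FixMatch-style consistency: pseudo-label confidently predicted
    weakly augmented target images, then require the same prediction
    on their strongly augmented versions."""
    with torch.no_grad():
        probs = F.softmax(model(weak_aug), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()   # keep only confident samples
    per_sample = F.cross_entropy(model(strong_aug), pseudo, reduction="none")
    return (mask * per_sample).mean()
```

In a training loop along these lines, the rotation loss would pretrain the feature extractor on unlabeled source and target images, after which the classifier is trained with supervised cross-entropy on the labeled data plus the consistency term on unlabeled target batches.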
Related papers
- Attention-based Class-Conditioned Alignment for Multi-Source Domain Adaptation of Object Detectors [11.616494893839757]
Domain adaptation methods for object detection (OD) strive to mitigate the impact of distribution shifts by promoting feature alignment across source and target domains.
Most state-of-the-art multi-source domain adaptation (MSDA) methods for OD perform feature alignment in a class-agnostic manner.
We propose an attention-based class-conditioned alignment method for MSDA that aligns instances of each object category across domains.
arXiv Detail & Related papers (2024-03-14T23:31:41Z)
- CA-UDA: Class-Aware Unsupervised Domain Adaptation with Optimal Assignment and Pseudo-Label Refinement [84.10513481953583]
Unsupervised domain adaptation (UDA) focuses on the selection of good pseudo-labels as surrogates for the missing labels in the target data.
Source-domain bias that degrades the pseudo-labels can still exist, since a network shared between the source and target domains is typically used for pseudo-label selection.
We propose CA-UDA to improve the quality of the pseudo-labels and the UDA results with optimal assignment, a pseudo-label refinement strategy, and class-aware domain alignment (a baseline pseudo-labeling sketch follows this entry).
arXiv Detail & Related papers (2022-05-26T18:45:04Z)
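The CA-UDA summary above revolves around pseudo-label quality. As a point of reference, here is the naive confidence-thresholded pseudo-label selection that refinement strategies such as CA-UDA's optimal assignment aim to improve on; the loader interface and the 0.9 threshold are assumptions, and this is not the paper's actual algorithm.

```python
import torch
import torch.nn.functional as F

def select_pseudo_labels(model, target_loader, device, threshold=0.9):
    """Baseline pseudo-labeling: keep target samples whose top predicted
    class probability exceeds a confidence threshold, and use that class
    as a surrogate label."""
    model.eval()
    kept_x, kept_y = [], []
    with torch.no_grad():
        for x, _ in target_loader:        # target labels are unavailable
            x = x.to(device)
            probs = F.softmax(model(x), dim=1)
            conf, pred = probs.max(dim=1)
            keep = conf >= threshold
            kept_x.append(x[keep].cpu())
            kept_y.append(pred[keep].cpu())
    return torch.cat(kept_x), torch.cat(kept_y)
```

Because the selecting network is trained mostly on source data, these surrogate labels inherit source-domain bias, which is exactly the failure mode the refinement step targets.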
- Domain Adaptation via Prompt Learning [39.97105851723885]
Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain.
We introduce a novel prompt learning paradigm for UDA, named Domain Adaptation via Prompt Learning (DAPL).
arXiv Detail & Related papers (2022-02-14T13:25:46Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets (a minimal contrastive-loss sketch appears after this entry).
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
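The cross-domain contrastive entry above (and ILA-DA below, with its multi-sample contrastive loss) builds on a contrastive objective that pulls matched pairs together in feature space. A minimal InfoNCE loss of the kind such methods extend is sketched here; forming `anchor`/`positive` pairs from same-(pseudo-)class source and target features is the cross-domain twist, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """Standard InfoNCE: each anchor is attracted to its matched positive
    and repelled from every other positive in the batch."""
    a = F.normalize(anchor, dim=1)            # (B, D) anchor embeddings
    p = F.normalize(positive, dim=1)          # (B, D) matched positives
    logits = a @ p.t() / temperature          # (B, B) cosine similarities
    labels = torch.arange(a.size(0), device=a.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```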
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation [95.10255219396109]
We propose a fine-grained adversarial learning strategy for class-level feature alignment.
We adopt a fine-grained domain discriminator that not only acts as a domain distinguisher, but also differentiates domains at the class level (see the sketch after this entry).
An analysis with Class Center Distance (CCD) validates that our fine-grained adversarial strategy achieves better class-level alignment.
arXiv Detail & Related papers (2020-07-17T20:50:59Z)
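One common way to realize the class-level discrimination described in the "Classes Matter" entry is to widen a binary source-vs-target discriminator to 2*C outputs, one source logit and one target logit per class. The head design and hidden width below are assumptions for illustration, not the paper's architecture.

```python
import torch.nn as nn

class FineGrainedDomainDiscriminator(nn.Module):
    """Instead of a single source-vs-target output, predict 2*C logits so
    domain discrimination is resolved separately for each of C classes."""
    def __init__(self, feat_dim, num_classes, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(inplace=True),
            # first C logits: source, per class; last C logits: target, per class
            nn.Linear(hidden, 2 * num_classes),
        )

    def forward(self, features):
        return self.net(features)
```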
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.