Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2104.01286v1
- Date: Sat, 3 Apr 2021 01:33:14 GMT
- Title: Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation
- Authors: Astuti Sharma, Tarun Kalluri, Manmohan Chandraker
- Abstract summary: We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
- Score: 74.71931918541748
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain adaptation deals with training models using large scale labeled data
from a specific source domain and then adapting the knowledge to certain target
domains that have few or no labels. Many prior works learn domain agnostic
feature representations for this purpose using a global distribution alignment
objective which does not take into account the finer class specific structure
in the source and target domains. We address this issue in our work and propose
an instance affinity based criterion for source to target transfer during
adaptation, called ILA-DA. We first propose a reliable and efficient method to
extract similar and dissimilar samples across source and target, and utilize a
multi-sample contrastive loss to drive the domain alignment process. ILA-DA
simultaneously accounts for intra-class clustering as well as inter-class
separation among the categories, resulting in less noisy classifier boundaries,
improved transferability and increased accuracy. We verify the effectiveness of
ILA-DA by observing consistent improvements in accuracy over popular domain
adaptation approaches on a variety of benchmark datasets and provide insights
into the proposed alignment approach. Code will be made publicly available at
https://github.com/astuti/ILA-DA.
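As a rough illustration of the multi-sample contrastive idea described in the abstract, here is a minimal, hypothetical PyTorch sketch; the function name, the pseudo-label-based positive mask, and the temperature are assumptions for illustration, not the authors' released implementation.

```python
import torch.nn.functional as F

def multi_sample_contrastive_loss(tgt_feats, src_feats, pos_mask, temperature=0.1):
    """Hypothetical helper: pull each target instance toward the source
    instances marked as similar in pos_mask and push it away from the rest."""
    tgt = F.normalize(tgt_feats, dim=1)        # (Nt, d) target features
    src = F.normalize(src_feats, dim=1)        # (Ns, d) source features
    logits = tgt @ src.t() / temperature       # cosine affinities, shape (Nt, Ns)
    log_prob = F.log_softmax(logits, dim=1)    # normalized over all source samples
    pos_mask = pos_mask.float()
    pos_counts = pos_mask.sum(dim=1).clamp(min=1.0)
    # Average log-probability assigned to the "similar" source set of each target.
    loss_per_tgt = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    valid = pos_mask.sum(dim=1) > 0            # ignore targets with no positives
    return loss_per_tgt[valid].mean() if valid.any() else logits.new_zeros(())

# One plausible positive mask (an assumption): classifier pseudo-labels on the
# target matched against ground-truth source labels.
# pos_mask = tgt_pseudo_labels.unsqueeze(1).eq(src_labels.unsqueeze(0))
```

Minimizing such a loss encourages intra-class clustering (similar cross-domain pairs score high under the softmax) and inter-class separation (dissimilar pairs are pushed down), which is the behaviour the abstract attributes to ILA-DA.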
Related papers
- Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the best of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, and each group is handled with its own tailored objective.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
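The memory bank-based MMD loss mentioned in the entry above can be illustrated with a generic (non memory-bank) RBF-kernel MMD estimate; the kernel choice and bandwidth below are assumptions, not details from that paper.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate between feature banks x: (n, d) and y: (m, d),
    using an RBF kernel with assumed bandwidth sigma."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# e.g. mmd_loss = rbf_mmd2(source_like_features, target_specific_features)
```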
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective on MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent domain adaptation methods extract effective features by incorporating pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
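The partial domain adaptation entry above mentions incorporating pseudo labels for the target domain; a generic, confidence-thresholded pseudo-labelling step (a common ingredient, not that paper's specific method) might look like this sketch, where the model, threshold, and function name are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_confident_pseudo_labels(model, tgt_images, threshold=0.9):
    """Keep only target samples whose top predicted class probability exceeds
    the (assumed) threshold; return their indices and pseudo-labels."""
    probs = F.softmax(model(tgt_images), dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    keep = confidence >= threshold
    return keep.nonzero(as_tuple=True)[0], pseudo_labels[keep]
```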
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain that follows a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.