Coupling Adversarial Learning with Selective Voting Strategy for
Distribution Alignment in Partial Domain Adaptation
- URL: http://arxiv.org/abs/2207.08145v1
- Date: Sun, 17 Jul 2022 11:34:56 GMT
- Title: Coupling Adversarial Learning with Selective Voting Strategy for
Distribution Alignment in Partial Domain Adaptation
- Authors: Sandipan Choudhuri, Hemanth Venkateswara, Arunabha Sen
- Abstract summary: Partial domain adaptation setup caters to a realistic scenario by relaxing the identical label set assumption.
We devise a mechanism for strategic selection of highly-confident target samples essential for the estimation of class-importance weights.
- Score: 6.5991141403378215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In contrast to a standard closed-set domain adaptation task, partial domain
adaptation setup caters to a realistic scenario by relaxing the identical label
set assumption. However, the fact that the source label set subsumes the target
label set introduces additional obstacles: training on private source-category
samples thwarts relevant knowledge transfer and misleads the classification
process. To mitigate these issues, we devise a mechanism for
strategic selection of highly-confident target samples essential for the
estimation of class-importance weights. Furthermore, we capture
class-discriminative and domain-invariant features by coupling the process of
achieving compact and distinct class distributions with an adversarial
objective. Experimental findings over numerous cross-domain classification
tasks demonstrate the potential of the proposed technique to deliver accuracy
superior or comparable to that of existing methods.
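A common recipe for the class-importance weights mentioned in the abstract is to average the classifier's softmax predictions over target samples, so that source-private classes receive near-zero mass. The minimal sketch below illustrates this idea, using a simple confidence threshold as a stand-in for the paper's selective voting strategy; the function name and threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def class_importance_weights(target_probs, conf_threshold=0.8):
    """Estimate per-class importance weights from classifier softmax
    outputs on unlabeled target samples. The threshold-based selection
    here is an illustrative stand-in for a voting-based selection of
    highly-confident target samples."""
    target_probs = np.asarray(target_probs, dtype=float)
    # Keep only samples the classifier is confident about.
    confident = target_probs[target_probs.max(axis=1) >= conf_threshold]
    if len(confident) == 0:
        # Fall back to uniform weights when nothing is confident enough.
        return np.ones(target_probs.shape[1]) / target_probs.shape[1]
    # Average predictions: source-private classes get near-zero mass.
    weights = confident.mean(axis=0)
    return weights / weights.max()  # normalize so the largest weight is 1

# Toy example: class 2 is a source-private class never predicted
# confidently on the target; the last sample is too uncertain to count.
probs = [[0.90, 0.05, 0.05],
         [0.85, 0.10, 0.05],
         [0.10, 0.85, 0.05],
         [0.40, 0.30, 0.30]]
w = class_importance_weights(probs)
```

Down-weighting source-private classes this way keeps their samples from dominating the adversarial alignment objective.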
Related papers
- Adversarial Semi-Supervised Domain Adaptation for Semantic Segmentation:
A New Role for Labeled Target Samples [7.199108088621308]
We design new training objective losses for cases when labeled target data behave as source samples or as real target samples.
To support our approach, we consider a complementary method that mixes source and labeled target data, then applies the same adaptation process.
We illustrate our findings through extensive experiments on the benchmarks GTA5, SYNTHIA, and Cityscapes.
arXiv Detail & Related papers (2023-12-12T15:40:22Z) - Bi-discriminator Domain Adversarial Neural Networks with Class-Level
Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z) - Conditional Support Alignment for Domain Adaptation with Label Shift [8.819673391477034]
Unsupervised domain adaptation (UDA) refers to a domain adaptation framework in which a learning model is trained on labeled samples from the source domain and unlabeled samples from the target domain.
We propose a novel conditional adversarial support alignment (CASA) whose aim is to minimize the conditional symmetric support divergence between the source's and target domain's feature representation distributions.
arXiv Detail & Related papers (2023-05-29T05:20:18Z) - Domain-Invariant Feature Alignment Using Variational Inference For
Partial Domain Adaptation [6.04077629908308]
Experimental findings on numerous cross-domain classification tasks demonstrate that the proposed technique delivers accuracy superior or comparable to that of existing methods.
arXiv Detail & Related papers (2022-12-03T10:39:14Z) - Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation [85.6961770631173]
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them.
We propose a novel approach called Cross-domain Adaptive Clustering to address this problem.
arXiv Detail & Related papers (2021-04-19T16:07:32Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
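The multi-sample contrastive loss described for ILA-DA pulls similar cross-domain samples together and pushes dissimilar ones apart. Below is a hedged, InfoNCE-style sketch of such an objective; the paper's exact affinity criterion and loss form may differ, and all names here are illustrative.

```python
import numpy as np

def multi_sample_contrastive_loss(anchors, positives, negatives, tau=0.1):
    """InfoNCE-style contrastive loss: each anchor should be closer (in
    cosine similarity) to its positive than to any negative sample.
    A sketch of a similar/dissimilar alignment objective, not ILA-DA's
    exact formulation."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    a, p, n = norm(anchors), norm(positives), norm(negatives)
    pos = np.sum(a * p, axis=1, keepdims=True) / tau   # (B, 1)
    neg = a @ n.T / tau                                # (B, K)
    logits = np.concatenate([pos, neg], axis=1)
    # Cross-entropy with the positive at index 0 (log-sum-exp for stability).
    m = logits.max(axis=1, keepdims=True)
    logz = (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).ravel()
    return float(np.mean(logz - logits[:, 0]))

# Usage: aligned anchor/positive pairs should incur a lower loss
# than deliberately misaligned ones.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
negs = rng.normal(size=(6, 8))
aligned = multi_sample_contrastive_loss(feats, feats, negs)
misaligned = multi_sample_contrastive_loss(feats, -feats, negs)
```

Driving down such a loss over cross-domain similar pairs is one way a contrastive term can assist the domain alignment process.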
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z) - Discriminative Cross-Domain Feature Learning for Partial Domain
Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z) - Class Distribution Alignment for Adversarial Domain Adaptation [32.95056492475652]
Conditional ADversarial Image Translation (CADIT) is proposed to explicitly align the class distributions given samples between the two domains.
It integrates a discriminative structure-preserving loss and a joint adversarial generation loss.
Our approach achieves superior classification accuracy in the target domain when compared to the state-of-the-art methods.
arXiv Detail & Related papers (2020-04-20T15:58:11Z) - A Sample Selection Approach for Universal Domain Adaptation [94.80212602202518]
We study the problem of unsupervised domain adaptation in the universal scenario.
Only some of the classes are shared between the source and target domains.
We present a scoring scheme that is effective in identifying the samples of the shared classes.
arXiv Detail & Related papers (2020-01-14T22:28:43Z)
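Scoring schemes for identifying shared-class samples, as in the universal domain adaptation paper above, typically favor target samples on which the source classifier is confident and low-entropy. The sketch below shows one such illustrative scoring rule; it is an assumption for exposition, not the paper's actual scheme.

```python
import numpy as np

def shared_class_scores(target_probs):
    """Score target samples by how likely they belong to a class shared
    with the source: high maximum probability and low (normalized)
    prediction entropy. An illustrative rule, not the paper's exact
    scoring scheme."""
    p = np.asarray(target_probs, dtype=float)
    confidence = p.max(axis=1)
    # Entropy normalized to [0, 1] by dividing by log(num_classes).
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1) / np.log(p.shape[1])
    return confidence - entropy  # higher = more likely a shared class

# A peaked prediction (likely shared class) should outscore a
# uniform one (likely an unknown, target-private class).
scores = shared_class_scores([[0.90, 0.05, 0.05],
                              [1 / 3, 1 / 3, 1 / 3]])
```

Samples above a score threshold can then be treated as shared-class candidates during adaptation, while the rest are held out as possibly private.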
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.