Domain-Invariant Feature Alignment Using Variational Inference For
Partial Domain Adaptation
- URL: http://arxiv.org/abs/2212.01590v1
- Date: Sat, 3 Dec 2022 10:39:14 GMT
- Title: Domain-Invariant Feature Alignment Using Variational Inference For
Partial Domain Adaptation
- Authors: Sandipan Choudhuri, Suli Adeniye, Arunabha Sen, Hemanth Venkateswara
- Abstract summary: Experimental findings on numerous cross-domain classification tasks demonstrate that the proposed technique delivers accuracy superior or comparable to that of existing methods.
- Score: 6.04077629908308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The standard closed-set domain adaptation approaches seek to mitigate
distribution discrepancies between two domains under the constraint of both
sharing identical label sets. However, in realistic scenarios, finding an
optimal source domain with identical label space is a challenging task. Partial
domain adaptation eases this burden of procuring a labeled source dataset with an
identical label space and addresses a more practical scenario where
the source label set subsumes the target label set. This, however, presents a
few additional obstacles during adaptation. Samples with categories private to
the source domain thwart relevant knowledge transfer and degrade model
performance. In this work, we try to address these issues by coupling
variational information and adversarial learning with a pseudo-labeling
technique to enforce class distribution alignment and minimize the transfer of
superfluous information from the source samples. Experimental findings on
numerous cross-domain classification tasks demonstrate that the proposed
technique delivers accuracy superior or comparable to that of existing methods.
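As a rough illustration of the ingredients named in the abstract, the following PyTorch sketch combines a variational bottleneck on features, an adversarial domain discriminator, and pseudo-label-derived class weights that down-weight source-private classes. The module names, dimensions, and weighting heuristic are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: variational encoder + adversarial alignment with
# pseudo-label-based class weights for partial domain adaptation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalEncoder(nn.Module):
    """Maps a feature vector to a Gaussian posterior q(z|x) (variational bottleneck)."""
    def __init__(self, in_dim=2048, z_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, z_dim)
        self.logvar = nn.Linear(512, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)         # reparameterisation trick
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)  # KL(q(z|x) || N(0, I))
        return z, kl

class DomainDiscriminator(nn.Module):
    """Predicts whether a latent code comes from the source or the target domain."""
    def __init__(self, z_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, z):
        return self.net(z).squeeze(1)

def class_weights_from_pseudo_labels(target_logits):
    """Average target pseudo-label probabilities per class (illustrative heuristic):
    classes that rarely appear in the target receive small weights."""
    probs = F.softmax(target_logits, dim=1).mean(dim=0)
    return probs / probs.max()

def adaptation_step(encoder, classifier, discriminator, xs, ys, xt, beta=1e-3):
    zs, kl_s = encoder(xs)
    zt, kl_t = encoder(xt)
    logits_s = classifier(zs)

    with torch.no_grad():
        w = class_weights_from_pseudo_labels(classifier(zt))

    # Down-weight source samples from classes that look private to the source domain.
    cls_loss = (w[ys] * F.cross_entropy(logits_s, ys, reduction="none")).mean()

    # Adversarial alignment: the discriminator separates domains; the encoder is
    # trained (elsewhere, e.g. via a gradient-reversal layer) to fool it.
    d_s, d_t = discriminator(zs), discriminator(zt)
    adv_loss = (F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s))
                + F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t)))

    kl_loss = beta * (kl_s.mean() + kl_t.mean())
    return cls_loss + kl_loss, adv_loss

# Example usage with random stand-in features:
enc, disc, clf = VariationalEncoder(), DomainDiscriminator(), nn.Linear(256, 10)
xs, ys, xt = torch.randn(8, 2048), torch.randint(0, 10, (8,)), torch.randn(8, 2048)
main_loss, adversarial_loss = adaptation_step(enc, clf, disc, xs, ys, xt)
```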
Related papers
- Domain Adaptation Using Pseudo Labels [16.79672078512152]
In the absence of labeled target data, unsupervised domain adaptation approaches seek to align the marginal distributions of the source and target domains.
We deploy a pretrained network to determine accurate labels for the target domain using a multi-stage pseudo-label refinement procedure.
Our results on multiple datasets demonstrate the effectiveness of our simple procedure in comparison with complex state-of-the-art techniques.
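The multi-stage refinement idea can be pictured with a simple confidence-thresholded loop; the stage thresholds and retraining hook below are assumptions for illustration, not the paper's procedure.

```python
# Hypothetical sketch of multi-stage pseudo-label refinement: each stage keeps
# only target predictions above a confidence threshold, optionally retrains on
# them, and lowers the threshold so later stages label progressively more samples.
import torch
import torch.nn.functional as F

def refine_pseudo_labels(model, target_features, stages=(0.95, 0.9, 0.8), train_fn=None):
    labels = torch.full((target_features.size(0),), -1, dtype=torch.long)  # -1 = still unlabeled
    for threshold in stages:
        with torch.no_grad():
            probs = F.softmax(model(target_features), dim=1)
            conf, pred = probs.max(dim=1)
        keep = conf >= threshold
        labels[keep] = pred[keep]
        if train_fn is not None:               # retrain on the currently accepted pseudo-labels
            train_fn(target_features[keep], pred[keep])
    return labels
```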
arXiv Detail & Related papers (2024-02-09T22:15:11Z) - Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
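A generic inter-domain mixup step, interpolating a labeled source sample with a labeled target sample and mixing their one-hot labels, might look as follows; this is a plain mixup sketch and does not reproduce the IDMNE formulation.

```python
# Illustrative inter-domain mixup for semi-supervised domain adaptation.
import torch
import torch.nn.functional as F

def inter_domain_mixup(xs, ys, xt, yt, num_classes, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()   # mixing coefficient
    n = min(xs.size(0), xt.size(0))
    x_mix = lam * xs[:n] + (1 - lam) * xt[:n]
    ys_1h = F.one_hot(ys[:n], num_classes).float()
    yt_1h = F.one_hot(yt[:n], num_classes).float()
    y_mix = lam * ys_1h + (1 - lam) * yt_1h                        # soft mixed labels
    return x_mix, y_mix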
arXiv Detail & Related papers (2024-01-21T10:20:46Z) - Coupling Adversarial Learning with Selective Voting Strategy for
Distribution Alignment in Partial Domain Adaptation [6.5991141403378215]
The partial domain adaptation setup caters to a realistic scenario by relaxing the identical-label-set assumption.
We devise a mechanism for strategic selection of highly-confident target samples essential for the estimation of class-native weights.
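One simple way to realize such a selection is to let high-confidence target predictions vote for the classes likely present in the target domain; the threshold and normalization below are illustrative assumptions, not the paper's estimator.

```python
# Sketch: select confident target samples and estimate class weights from their votes.
import torch
import torch.nn.functional as F

def estimate_class_weights(logits_t, num_classes, threshold=0.9):
    probs = F.softmax(logits_t, dim=1)
    conf, pred = probs.max(dim=1)
    votes = torch.bincount(pred[conf >= threshold], minlength=num_classes).float()
    # Classes that never receive a confident vote end up with ~0 weight.
    return votes / votes.max().clamp(min=1.0)
```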
arXiv Detail & Related papers (2022-07-17T11:34:56Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion, called ILA-DA, for source-to-target transfer during adaptation.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
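A multi-sample contrastive objective over cross-domain pairs could be sketched as below, where source/target pairs sharing a (pseudo-)label act as positives; this is a generic stand-in, not ILA-DA's exact affinity criterion.

```python
# Generic cross-domain contrastive loss over source anchors and target candidates.
import torch
import torch.nn.functional as F

def cross_domain_contrastive(f_s, y_s, f_t, y_t_pseudo, temperature=0.1):
    f_s, f_t = F.normalize(f_s, dim=1), F.normalize(f_t, dim=1)
    sim = f_s @ f_t.t() / temperature                                # (Ns, Nt) similarities
    pos = (y_s.unsqueeze(1) == y_t_pseudo.unsqueeze(0)).float()      # positive-pair mask
    log_prob = F.log_softmax(sim, dim=1)
    # Average log-probability over all positives for each anchor with at least one positive.
    per_anchor = (pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
    has_pos = pos.sum(dim=1) > 0
    return -(per_anchor[has_pos]).mean() if has_pos.any() else sim.new_zeros(())
```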
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective on MSDA, wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z) - Open Set Domain Adaptation by Extreme Value Theory [22.826118321715455]
We tackle the open set domain adaptation problem under the assumption that the source and the target label spaces only partially overlap.
We propose an instance-level reweighting strategy for domain adaptation where the weights indicate the likelihood of a sample belonging to known classes.
Experiments on conventional domain adaptation datasets show that the proposed method outperforms the state-of-the-art models.
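In the spirit of extreme value theory, one could fit a Weibull distribution to the distances between target features and their nearest known-class prototype and read weights off its tail; the prototype and distance choices here are assumptions, not the paper's model.

```python
# Illustrative extreme-value reweighting: small distances to a known-class
# prototype imply a high probability of belonging to a known class.
import numpy as np
from scipy.stats import weibull_min

def known_class_weights(target_feats, class_prototypes):
    dists = np.linalg.norm(target_feats[:, None, :] - class_prototypes[None, :, :], axis=2)
    d_min = dists.min(axis=1)                           # distance to the closest known class
    shape, loc, scale = weibull_min.fit(d_min, floc=0.0)
    # Tail probability of the fitted Weibull serves as the "known class" weight.
    return 1.0 - weibull_min.cdf(d_min, shape, loc=loc, scale=scale)
```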
arXiv Detail & Related papers (2020-12-22T19:31:32Z) - Select, Label, and Mix: Learning Discriminative Invariant Feature
Representations for Partial Domain Adaptation [55.73722120043086]
We develop a "Select, Label, and Mix" (SLM) framework to learn discriminative invariant feature representations for partial domain adaptation.
First, we present a simple yet efficient "select" module that automatically filters out outlier source samples to avoid negative transfer.
Second, the "label" module iteratively trains the classifier using both the labeled source domain data and the generated pseudo-labels for the target domain to enhance the discriminability of the latent space.
arXiv Detail & Related papers (2020-12-06T19:29:32Z) - Discriminative Cross-Domain Feature Learning for Partial Domain
Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent domain adaptation practice extracts effective features by incorporating pseudo-labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z) - Simultaneous Semantic Alignment Network for Heterogeneous Domain
Adaptation [67.37606333193357]
We propose a Simultaneous Semantic Alignment Network (SSAN) to simultaneously exploit correlations among categories and align the centroids for each category across domains.
By leveraging target pseudo-labels, a robust triplet-centroid alignment mechanism is explicitly applied to align feature representations for each category.
Experiments on various HDA tasks across text-to-image, image-to-image and text-to-text successfully validate the superiority of our SSAN against state-of-the-art HDA methods.
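A bare-bones centroid alignment term using target pseudo-labels is sketched below; it only illustrates the general idea of matching per-class centroids across domains and is not SSAN's triplet-centroid mechanism.

```python
# Sketch: pull per-class target centroids toward the corresponding source centroids.
import torch

def centroid_alignment_loss(f_s, y_s, f_t, y_t_pseudo, num_classes):
    loss, matched = f_s.new_zeros(()), 0
    for c in range(num_classes):
        src, tgt = f_s[y_s == c], f_t[y_t_pseudo == c]
        if len(src) and len(tgt):
            loss = loss + (src.mean(dim=0) - tgt.mean(dim=0)).pow(2).sum()
            matched += 1
    return loss / max(matched, 1)
```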
arXiv Detail & Related papers (2020-08-04T16:20:37Z) - Class Distribution Alignment for Adversarial Domain Adaptation [32.95056492475652]
Conditional ADversarial Image Translation (CADIT) is proposed to explicitly align the class distributions given samples between the two domains.
It integrates a discriminative structure-preserving loss and a joint adversarial generation loss.
Our approach achieves superior classification accuracy in the target domain compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-20T15:58:11Z)