Self-adaptive Re-weighted Adversarial Domain Adaptation
- URL: http://arxiv.org/abs/2006.00223v2
- Date: Tue, 2 Jun 2020 03:08:41 GMT
- Title: Self-adaptive Re-weighted Adversarial Domain Adaptation
- Authors: Shanshan Wang, Lei Zhang
- Abstract summary: We present a self-adaptive re-weighted adversarial domain adaptation approach.
It enhances domain alignment from the perspective of the conditional distribution.
Empirical evidence demonstrates that the proposed model outperforms state-of-the-art methods on standard domain adaptation datasets.
- Score: 12.73753413032972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing adversarial domain adaptation methods mainly consider the marginal
distribution, which may lead to either under-transfer or negative
transfer. To address this problem, we present a self-adaptive re-weighted
adversarial domain adaptation approach that enhances domain alignment
from the perspective of the conditional distribution. To promote positive
transfer and combat negative transfer, we reduce the weight of the adversarial
loss for well-aligned features while increasing the adversarial force for
poorly aligned ones, as measured by conditional entropy. Additionally, a triplet loss
leveraging source samples and pseudo-labeled target samples is employed on the
confused domain. This metric loss ensures that intra-class
sample pairs are closer than inter-class pairs, achieving class-level
alignment. In this way, highly accurate pseudo-labeled target samples and
semantic alignment can be obtained simultaneously in the co-training process.
Our method achieves a low joint error of the ideal source and target hypotheses,
so the expected target error can be upper-bounded following Ben-David's
theorem. Empirical evidence demonstrates that the proposed model outperforms
state-of-the-art methods on standard domain adaptation datasets.
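The two mechanisms in the abstract — entropy-based re-weighting of the adversarial loss and a triplet loss for class-level alignment — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the normalization of entropy by log C and the Euclidean triplet margin are assumptions made for the sketch.

```python
import numpy as np

def conditional_entropy(probs, eps=1e-12):
    """Per-sample entropy H(p) = -sum_c p_c log p_c of classifier softmax outputs."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def adversarial_weights(probs):
    """Self-adaptive re-weighting: entropy normalized to [0, 1] by its maximum
    log C, so that well-aligned (confident, low-entropy) samples receive a small
    adversarial weight and poorly aligned (uncertain) samples a large one."""
    return conditional_entropy(probs) / np.log(probs.shape[1])

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on Euclidean distances: intra-class pairs must be closer than
    inter-class pairs by at least `margin`, enforcing class-level alignment."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

# A confident prediction receives a small adversarial weight,
# a near-uniform one a weight close to 1.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.34, 0.33, 0.33]])
w = adversarial_weights(probs)
```

In a full pipeline these per-sample weights would multiply the domain-discriminator loss, while the triplet loss would be computed over source samples and pseudo-labeled target samples.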
Related papers
- Domain Adaptive Object Detection via Balancing Between Self-Training and Adversarial Learning [19.81071116581342]
Deep-learning-based object detectors struggle to generalize to a new target domain that exhibits significant variations in object appearance and background.
Current methods align domains by using image or instance-level adversarial feature alignment.
We propose to leverage model's predictive uncertainty to strike the right balance between adversarial feature alignment and class-level alignment.
arXiv Detail & Related papers (2023-11-08T16:40:53Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration [85.71545080119026]
The Cross-Entropy (CE) loss function is insufficient for learning transferable targeted adversarial examples.
We propose two simple and effective logit calibration methods, which are achieved by downscaling the logits with a temperature factor and an adaptive margin.
Experiments conducted on the ImageNet dataset validate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2023-03-07T06:42:52Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- SENTRY: Selective Entropy Optimization via Committee Consistency for Unsupervised Domain Adaptation [14.086066389856173]
We propose a UDA algorithm that judges the reliability of a target instance based on its predictive consistency under a committee of random image transformations.
Our algorithm then selectively minimizes predictive entropy to increase confidence on highly consistent target instances, while maximizing predictive entropy to reduce confidence on highly inconsistent ones.
In combination with pseudo-label based approximate target class balancing, our approach leads to significant improvements over the state-of-the-art on 27/31 domain shifts from standard UDA benchmarks as well as benchmarks designed to stress-test adaptation under label distribution shift.
arXiv Detail & Related papers (2020-12-21T16:24:50Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation [18.90240379173491]
Current methods for class-conditioned domain alignment aim to explicitly minimize a loss function based on pseudo-label estimations of the target domain.
We propose a method that removes the need for explicit optimization of model parameters from pseudo-labels directly.
We present a sampling-based implicit alignment approach, where the sample selection procedure is implicitly guided by the pseudo-labels.
arXiv Detail & Related papers (2020-06-09T00:20:21Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA³US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA³US surpasses state-of-the-art methods for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation adapts to the unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
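The committee-consistency mechanism summarized in the SENTRY entry above can be illustrated with a short sketch. This is a minimal NumPy illustration under stated assumptions — the majority-vote agreement rule and the 0.75 threshold are hypothetical choices, not the paper's exact formulation:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a single probability vector."""
    return -float(np.sum(p * np.log(p + eps)))

def sentry_term(committee_probs, vote_threshold=0.75):
    """Selective entropy term for one target instance.

    committee_probs: (K, C) softmax outputs under K random image transformations.
    If enough committee members agree on the predicted class, return +entropy
    (minimizing it increases confidence); otherwise return -entropy
    (minimizing it maximizes entropy, reducing confidence on unreliable instances)."""
    preds = np.argmax(committee_probs, axis=1)
    agreement = np.bincount(preds).max() / len(preds)
    sign = 1.0 if agreement >= vote_threshold else -1.0
    return sign * entropy(committee_probs.mean(axis=0))

# All three transformed views agree on class 0 -> entropy is minimized.
consistent = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]])
# The views disagree -> entropy is maximized instead.
inconsistent = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
```

Summing this term over a target batch (and minimizing it) selectively sharpens predictions only where the committee is consistent.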
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.