Gradually Vanishing Bridge for Adversarial Domain Adaptation
- URL: http://arxiv.org/abs/2003.13183v1
- Date: Mon, 30 Mar 2020 01:36:13 GMT
- Title: Gradually Vanishing Bridge for Adversarial Domain Adaptation
- Authors: Shuhao Cui, Shuhui Wang, Junbao Zhuo, Chi Su, Qingming Huang, Qi Tian
- Abstract summary: We equip adversarial domain adaptation with a Gradually Vanishing Bridge (GVB) mechanism on both the generator and the discriminator.
On the generator, GVB not only reduces the overall transfer difficulty but also reduces the influence of residual domain-specific characteristics.
On the discriminator, GVB enhances the discriminating ability and balances the adversarial training process.
- Score: 156.46378041408192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In unsupervised domain adaptation, rich domain-specific characteristics pose a
great challenge to learning domain-invariant representations. Existing solutions,
however, typically assume the domain discrepancy can be minimized directly, which
is difficult to achieve in practice. Some methods ease the difficulty by
explicitly modeling the domain-invariant and domain-specific parts of the
representations, but an adverse effect of this explicit construction is that
residual domain-specific characteristics remain in the constructed
domain-invariant representations. In this paper, we equip adversarial domain
adaptation with a Gradually Vanishing Bridge (GVB) mechanism on both the generator
and the discriminator. On the generator, GVB not only reduces the overall
transfer difficulty but also reduces the influence of residual domain-specific
characteristics in the domain-invariant representations. On the discriminator,
GVB enhances the discriminating ability and balances the adversarial training
process. Experiments on three challenging datasets show that our GVB methods
outperform strong competitors and cooperate well with other adversarial methods.
The code is available at https://github.com/cuishuhao/GVB.
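The abstract describes the bridge mechanism only at a high level. The following is a minimal sketch, assuming a PyTorch-style setup, of how a generator-side gradually vanishing bridge could look: a bridge branch absorbs residual domain-specific signal, is subtracted from the main prediction, and is penalized so that it vanishes over training. The class and hyperparameter names (GVBGeneratorHead, lambda_bridge) are illustrative assumptions, not the authors' released implementation, which is available at the repository linked above.

```python
# Hypothetical sketch of a gradually-vanishing-bridge-style generator head (PyTorch).
# Names and loss weighting are illustrative; see https://github.com/cuishuhao/GVB
# for the authors' actual implementation.
import torch
import torch.nn as nn


class GVBGeneratorHead(nn.Module):
    """Predicts a main output and a 'bridge' term.

    The bridge is meant to absorb residual domain-specific signal; it is
    subtracted from the main output, and its magnitude is penalized so that
    it gradually vanishes during training.
    """

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.main = nn.Linear(feat_dim, num_classes)    # domain-invariant prediction
        self.bridge = nn.Linear(feat_dim, num_classes)  # residual domain-specific part

    def forward(self, features: torch.Tensor):
        main_out = self.main(features)
        bridge_out = self.bridge(features)
        # Final prediction with the bridge removed.
        return main_out - bridge_out, bridge_out


def generator_loss(head, features, labels, lambda_bridge=1.0):
    """Classification loss plus a penalty that drives the bridge toward zero."""
    logits, bridge_out = head(features)
    cls_loss = nn.functional.cross_entropy(logits, labels)
    bridge_loss = bridge_out.abs().mean()  # makes the bridge "gradually vanish"
    return cls_loss + lambda_bridge * bridge_loss
```

A discriminator-side bridge would follow the same pattern in this sketch: subtract a bridge term from the domain-discriminator output and penalize its magnitude, which, per the abstract, helps balance the adversarial training process.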
Related papers
- Overcoming Negative Transfer by Online Selection: Distant Domain Adaptation for Fault Diagnosis [42.85741244467877]
The term 'distant domain adaptation problem' describes the challenge of adapting from a labeled source domain to a significantly disparate unlabeled target domain.
This problem exhibits the risk of negative transfer, where extraneous knowledge from the source domain adversely affects the target domain performance.
In response to this challenge, we propose a novel Online Selective Adversarial Alignment (OSAA) approach.
arXiv Detail & Related papers (2024-05-25T07:17:47Z) - AIR-DA: Adversarial Image Reconstruction for Unsupervised Domain Adaptive Object Detection [28.22783703278792]
This work proposes Adversarial Image Reconstruction (AIR) as a regularizer to facilitate the adversarial training of the feature extractor.
Our evaluations across several datasets of challenging domain shifts demonstrate that the proposed method outperforms all previous methods.
arXiv Detail & Related papers (2023-03-27T16:51:51Z) - Rethinking Domain Generalization for Face Anti-spoofing: Separability and Alignment [35.67771212285966]
This work studies the generalization issue of face anti-spoofing (FAS) models on domain gaps, such as image resolution, blurriness and sensor variations.
We formulate this FAS strategy of separability and alignment (SA-FAS) as a problem of invariant risk minimization (IRM).
We demonstrate the effectiveness of SA-FAS on challenging cross-domain FAS datasets and establish state-of-the-art performance.
arXiv Detail & Related papers (2023-03-23T20:34:27Z) - Enhanced Separable Disentanglement for Unsupervised Domain Adaptation [6.942003070153651]
Domain adaptation aims to mitigate the domain gap when transferring knowledge from an existing labeled domain to a new domain.
Existing disentanglement-based methods do not fully consider separation between domain-invariant and domain-specific features.
In this paper, we propose a novel enhanced separable disentanglement model.
arXiv Detail & Related papers (2021-06-22T16:50:53Z) - Invariant Information Bottleneck for Domain Generalization [39.62337297660974]
We propose a novel algorithm that learns a minimally sufficient representation that is invariant across training and testing domains.
By minimizing the mutual information between the representation and inputs, IIB alleviates its reliance on pseudo-invariant features.
The results show that IIB outperforms the invariant learning baseline (e.g., IRM) by an average of 2.8% and 3.8% accuracy over two evaluation metrics.
arXiv Detail & Related papers (2021-06-11T12:12:40Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - Re-energizing Domain Discriminator with Sample Relabeling for Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align features and reduce the domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z) - Heuristic Domain Adaptation [105.59792285047536]
The Heuristic Domain Adaptation Network (HDAN) explicitly learns domain-invariant and domain-specific representations.
HDAN exceeds the state of the art on unsupervised DA, multi-source DA, and semi-supervised DA.
arXiv Detail & Related papers (2020-11-30T04:21:35Z) - Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from a source domain to a target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers source-specific features.
We create counterfactual features that distinguish the domain-specific parts from the domain-sharable part.
arXiv Detail & Related papers (2020-11-07T09:53:13Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.