Domain Invariant Adversarial Learning
- URL: http://arxiv.org/abs/2104.00322v1
- Date: Thu, 1 Apr 2021 08:04:10 GMT
- Title: Domain Invariant Adversarial Learning
- Authors: Matan Levi, Idan Attias, Aryeh Kontorovich
- Abstract summary: We present Domain Invariant Adversarial Learning (DIAL), which learns a feature representation that is both robust and domain invariant.
We demonstrate our advantage by improving both robustness and natural accuracy compared to current state-of-the-art adversarial training methods.
- Score: 12.48728566307251
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The discovery of adversarial examples revealed one of the most basic
vulnerabilities of deep neural networks. Among the variety of techniques
introduced to tackle this inherent weakness, adversarial training has proven
to be the most common and efficient strategy to achieve robustness. It is
usually done by balancing the robust and natural losses. In this work, we aim
to achieve a better trade-off between robust and natural performance by
enforcing a domain-invariant feature representation. We present a new
adversarial training method, called Domain Invariant Adversarial Learning
(DIAL), which learns a feature representation that is both robust and domain
invariant. DIAL uses a variant of a Domain Adversarial Neural Network (DANN)
on the natural domain and its corresponding adversarial domain. In this
setting, where the source domain consists of natural examples and the target
domain consists of the adversarially perturbed examples, our method learns a
feature representation constrained not to discriminate between natural and
adversarial examples, and can therefore achieve a better representation. We
demonstrate this advantage by improving both robustness and natural accuracy
compared to current state-of-the-art adversarial training methods.
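The following is a minimal PyTorch sketch of the kind of objective the abstract describes: standard adversarial training plus a DANN-style domain classifier, trained through a gradient-reversal layer, that tries to tell natural features from adversarial ones. The module names (features, label_head, domain_head), the PGD helper, and the loss weights are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of a DIAL-style training objective (illustrative only).
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """DANN gradient reversal: identity on the forward pass,
    negated (and scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def pgd_attack(model_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD, used here to build the adversarial
    domain from the natural examples."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model_fn(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv


def dial_step(features, label_head, domain_head, x_nat, y,
              lam_adv=1.0, lam_dom=0.1):
    """One training step's loss: natural task loss + adversarial task loss
    + domain-confusion loss over natural vs. adversarial features."""
    x_adv = pgd_attack(lambda z: label_head(features(z)), x_nat, y)
    f_nat, f_adv = features(x_nat), features(x_adv)

    # Task losses on natural and adversarially perturbed inputs.
    loss_nat = F.cross_entropy(label_head(f_nat), y)
    loss_adv = F.cross_entropy(label_head(f_adv), y)

    # The domain head tries to separate natural (label 0) from adversarial
    # (label 1) features; the reversed gradient pushes the feature extractor
    # toward representations that do not discriminate between the two domains.
    f_all = torch.cat([f_nat, f_adv], dim=0)
    d_all = torch.cat([
        torch.zeros(f_nat.size(0), dtype=torch.long, device=f_all.device),
        torch.ones(f_adv.size(0), dtype=torch.long, device=f_all.device),
    ])
    loss_dom = F.cross_entropy(domain_head(GradReverse.apply(f_all, 1.0)), d_all)

    return loss_nat + lam_adv * loss_adv + lam_dom * loss_dom
```

In this sketch, the gradient-reversal coefficient and the assumed weight lam_dom act as the trade-off knobs between natural accuracy and robustness that the abstract refers to.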
Related papers
- Randomized Adversarial Style Perturbations for Domain Generalization [49.888364462991234]
We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP).
The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, and trains the model not to be misled by the unexpected styles observed in unseen target domains.
We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that our approach improves domain generalization performance, especially in large-scale benchmarks.
arXiv Detail & Related papers (2023-04-04T17:07:06Z) - Heterogeneous Domain Adaptation with Adversarial Neural Representation
Learning: Experiments on E-Commerce and Cybersecurity [7.748670137746999]
Heterogeneous Adversarial Neural Domain Adaptation (HANDA) is designed to maximize the transferability in heterogeneous environments.
Three experiments were conducted to evaluate the performance against the state-of-the-art HDA methods on major image and text e-commerce benchmarks.
arXiv Detail & Related papers (2022-05-05T16:57:36Z) - A Style and Semantic Memory Mechanism for Domain Generalization [108.98041306507372]
Intra-domain style invariance is of pivotal importance in improving the efficiency of domain generalization.
We propose a novel "jury" mechanism, which is particularly effective in learning useful semantic feature commonalities among domains.
Our proposed framework surpasses the state-of-the-art methods by clear margins.
arXiv Detail & Related papers (2021-12-14T16:23:24Z) - Push Stricter to Decide Better: A Class-Conditional Feature Adaptive
Framework for Improving Adversarial Robustness [18.98147977363969]
We propose Feature Adaptive Adversarial Training (FAAT) to optimize the class-conditional feature adaptation across natural data and adversarial examples.
FAAT produces more discriminative features and performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2021-12-01T07:37:56Z) - A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z) - Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive
Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z) - Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from a source domain to a target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers the source-specifics.
We create counterfactual features that distinguish the domain-specifics from the domain-sharable part.
arXiv Detail & Related papers (2020-11-07T09:53:13Z) - Unsupervised Cross-domain Image Classification by Distance Metric Guided
Feature Alignment [11.74643883335152]
Unsupervised domain adaptation is a promising avenue for transferring knowledge from a source domain to a target domain.
We propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains.
Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain.
arXiv Detail & Related papers (2020-08-19T13:36:57Z) - Towards Stable and Comprehensive Domain Alignment: Max-Margin
Domain-Adversarial Training [38.12978698952838]
We propose a novel Max-margin Domain-Adversarial Training (MDAT) by designing an Adversarial Reconstruction Network (ARN).
ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures.
Our approach outperforms other state-of-the-art domain alignment methods.
arXiv Detail & Related papers (2020-03-30T07:48:52Z) - Gradually Vanishing Bridge for Adversarial Domain Adaptation [156.46378041408192]
We equip adversarial domain adaptation with a Gradually Vanishing Bridge (GVB) mechanism on both the generator and the discriminator.
On the generator, GVB not only reduces the overall transfer difficulty, but also reduces the influence of the residual domain-specific characteristics.
On the discriminator, GVB helps enhance the discriminating ability and balance the adversarial training process.
arXiv Detail & Related papers (2020-03-30T01:36:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.