Progressive Domain Expansion Network for Single Domain Generalization
- URL: http://arxiv.org/abs/2103.16050v1
- Date: Tue, 30 Mar 2021 03:31:55 GMT
- Title: Progressive Domain Expansion Network for Single Domain Generalization
- Authors: Lei Li, Ke Gao, Juan Cao, Ziyao Huang, Yepeng Weng, Xiaoyue Mi,
Zhengze Yu, Xiaoya Li, Boyang Xia
- Abstract summary: We propose a novel learning framework called progressive domain expansion network (PDEN) for single domain generalization.
PDEN can achieve up to 15.28% improvement compared with the state-of-the-art single-domain generalization methods.
- Score: 12.962460627678555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single domain generalization is a challenging case of model generalization,
where the models are trained on a single domain and tested on other unseen
domains. A promising solution is to learn cross-domain invariant
representations by expanding the coverage of the training domain. These methods
have limited generalization performance gains in practical applications due to
the lack of appropriate safety and effectiveness constraints. In this paper, we
propose a novel learning framework called progressive domain expansion network
(PDEN) for single domain generalization. The domain expansion subnetwork and
representation learning subnetwork in PDEN mutually benefit from each other by
joint learning. For the domain expansion subnetwork, multiple domains are
progressively generated in order to simulate various photometric and geometric
transforms in unseen domains. A series of strategies are introduced to
guarantee the safety and effectiveness of the expanded domains. For the domain
invariant representation learning subnetwork, contrastive learning is
introduced to learn the domain invariant representation in which each class is
well clustered so that a better decision boundary can be learned to improve
its generalization. Extensive experiments on classification and segmentation
have shown that PDEN can achieve up to 15.28% improvement compared with the
state-of-the-art single-domain generalization methods.
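The abstract above says PDEN uses contrastive learning so that each class is well clustered in a domain-invariant representation, but this summary does not specify the exact objective. As an illustration only, here is a minimal pure-Python sketch of a supervised contrastive loss in that spirit (the function name, temperature value, and batch layout are assumptions, not PDEN's actual implementation): embeddings of the same class are pulled together while all other samples in the batch act as negatives.

```python
import math

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative supervised contrastive loss over one batch.

    embeddings: list of feature vectors (lists of floats)
    labels:     class label per embedding
    Lower loss = same-class embeddings are more tightly clustered.
    """
    # L2-normalise so dot products become cosine similarities
    def normalise(v):
        m = math.sqrt(sum(x * x for x in v))
        return [x / m for x in v]

    z = [normalise(v) for v in embeddings]
    n = len(z)
    # pairwise cosine similarities, scaled by temperature
    sim = [[sum(a * b for a, b in zip(z[i], z[j])) / temperature
            for j in range(n)] for i in range(n)]

    total, anchors = 0.0, 0
    for i in range(n):
        # denominator: softmax over every *other* sample in the batch
        denom = sum(math.exp(sim[i][j]) for j in range(n) if j != i)
        positives = [j for j in range(n)
                     if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchor has no positive pair in this batch
        # average negative log-probability of the positives
        total += -sum(math.log(math.exp(sim[i][j]) / denom)
                      for j in positives) / len(positives)
        anchors += 1
    return total / anchors
```

Under this objective, a batch whose same-class embeddings point in similar directions scores a much lower loss than one where same-class embeddings point in opposite directions, which is the clustering pressure the abstract attributes to PDEN's representation learning subnetwork.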
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments demonstrates the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- Domain Generalization for Domain-Linked Classes [8.738092015092207]
In the real world, classes may often be domain-linked, i.e., expressed only in a specific domain.
We propose a Fair and cONtrastive feature-space regularization algorithm for Domain-linked DG, FOND.
arXiv Detail & Related papers (2023-06-01T16:39:50Z)
- Single Domain Dynamic Generalization for Iris Presentation Attack Detection [41.126916126040655]
Iris presentation attack detection has achieved great success under intra-domain settings but easily degrades on unseen domains.
We propose a Single Domain Dynamic Generalization (SDDG) framework, which exploits domain-invariant and domain-specific features on a per-sample basis.
The proposed method is effective and outperforms the state-of-the-art on the LivDet-Iris 2017 dataset.
arXiv Detail & Related papers (2023-05-22T07:54:13Z)
- Domain-General Crowd Counting in Unseen Scenarios [25.171343652312974]
Domain shift across crowd data severely hinders crowd counting models from generalizing to unseen scenarios.
We introduce a dynamic sub-domain division scheme which divides the source domain into multiple sub-domains.
In order to disentangle domain-invariant information from domain-specific information in image features, we design the domain-invariant and -specific crowd memory modules.
arXiv Detail & Related papers (2022-12-05T19:52:28Z)
- Efficient Hierarchical Domain Adaptation for Pretrained Language Models [77.02962815423658]
Generative language models are trained on diverse, general domain corpora.
We introduce a method to scale domain adaptation to many diverse domains using a computationally efficient adapter approach.
arXiv Detail & Related papers (2021-12-16T11:09:29Z)
- Discriminative Domain-Invariant Adversarial Network for Deep Domain Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better prediction on unseen target data during training compared to state-of-the-art domain generalization approaches.
arXiv Detail & Related papers (2021-08-20T04:24:12Z)
- Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- Learning to Balance Specificity and Invariance for In and Out of Domain Generalization [27.338573739304604]
We introduce Domain-specific Masks for Generalization, a model for improving both in-domain and out-of-domain generalization performance.
For domain generalization, the goal is to learn from a set of source domains to produce a single model that will best generalize to an unseen target domain.
We demonstrate competitive performance compared to naive baselines and state-of-the-art methods on both PACS and DomainNet.
arXiv Detail & Related papers (2020-08-28T20:39:51Z)
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via the proposed principle of meta variational information bottleneck, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.