Domain Generalization using Ensemble Learning
- URL: http://arxiv.org/abs/2103.10257v1
- Date: Thu, 18 Mar 2021 13:50:36 GMT
- Title: Domain Generalization using Ensemble Learning
- Authors: Yusuf Mesbah, Youssef Youssry Ibrahim, Adil Mehmood Khan
- Abstract summary: We tackle the problem of a model's weak generalization when it is trained on a single source domain.
From this perspective, we build an ensemble model on top of base deep learning models trained on a single source to enhance the generalization of their collective prediction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization is a sub-field of transfer learning that aims at
bridging the gap between two different domains in the absence of any knowledge
about the target domain. Our approach tackles the problem of a model's weak
generalization when it is trained on a single source domain. From this
perspective, we build an ensemble model on top of base deep learning models
trained on a single source to enhance the generalization of their collective
prediction. The results achieved thus far have demonstrated promising
improvements of the ensemble over any of its base learners.
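The abstract does not state how the base learners' predictions are combined. A common rule for such ensembles is to average the per-model class probabilities; below is a minimal PyTorch sketch under that assumption (the class name and the averaging rule are illustrative, not taken from the paper).

```python
import torch
import torch.nn as nn


class AveragingEnsemble(nn.Module):
    """Combine base classifiers trained on a single source domain by
    averaging their softmax outputs. Plain averaging is one common
    combination rule; the paper's exact scheme may differ."""

    def __init__(self, base_models):
        super().__init__()
        self.base_models = nn.ModuleList(base_models)

    @torch.no_grad()
    def forward(self, x):
        # Each base model maps a batch of inputs to class logits.
        probs = [m(x).softmax(dim=-1) for m in self.base_models]
        # The ensemble prediction is the mean per-class probability.
        return torch.stack(probs).mean(dim=0)


# Usage: wrap any set of trained classifiers sharing a label space.
# ensemble = AveragingEnsemble([cnn_a, cnn_b, cnn_c])
# y_hat = ensemble(batch).argmax(dim=-1)
```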
Related papers
- Domain Expansion and Boundary Growth for Open-Set Single-Source Domain Generalization [70.02187124865627]
Open-set single-source domain generalization aims to use a single-source domain to learn a robust model that can be generalized to unknown target domains.
We propose a novel learning approach based on domain expansion and boundary growth to expand the scarce source samples.
Our approach can achieve significant improvements and reach state-of-the-art performance on several cross-domain image classification datasets.
arXiv Detail & Related papers (2024-11-05T09:08:46Z)
- Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z)
- Not to Overfit or Underfit? A Study of Domain Generalization in Question Answering [18.22045610080848]
Machine learning models are prone to overfitting their source (training) distributions.
Here we examine the contrasting view that multi-source domain generalization (DG) is in fact a problem of mitigating source domain underfitting.
arXiv Detail & Related papers (2022-05-15T10:53:40Z)
- An attention model for the formation of collectives in real-world domains [78.1526027174326]
We consider the problem of forming collectives of agents for real-world applications aligned with Sustainable Development Goals.
We propose a general approach for the formation of collectives based on a novel combination of an attention model and an integer linear program.
arXiv Detail & Related papers (2022-04-30T09:15:36Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
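The prototype representations in the COMEN entry above are class centroids in the embedding space. A minimal sketch of that generic centroid step (the function name is illustrative, and COMEN's relational modeling on top of the prototypes is omitted):

```python
import torch


def class_prototypes(features, labels, num_classes):
    """Compute class centroids (prototypes): the mean embedding of each
    class. Relational modeling, as in COMEN, would then operate on
    these vectors; only the generic centroid step is shown here."""
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    counts = torch.zeros(num_classes, device=features.device)
    protos.index_add_(0, labels, features)  # per-class feature sums
    counts.index_add_(0, labels, torch.ones_like(labels, dtype=torch.float))
    return protos / counts.clamp(min=1).unsqueeze(1)
```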
- Domain Generalization by Mutual-Information Regularization with Pre-trained Models [20.53534134966378]
Domain generalization (DG) aims to learn a model that generalizes to an unseen target domain using only limited source domains.
We re-formulate the DG objective using mutual information with the oracle model, a model generalized to any possible domain.
Our experiments show that Mutual Information Regularization with Oracle (MIRO) significantly improves the out-of-distribution performance.
arXiv Detail & Related papers (2022-03-21T08:07:46Z)
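A heavily simplified reading of the MIRO objective above is a regularizer that keeps the learned features close to those of a frozen pre-trained encoder standing in for the oracle. The sketch below uses a plain mean-squared penalty; the paper's actual formulation is a variational lower bound on mutual information with learned mean and variance terms, and `model.encode` / `oracle.encode` are hypothetical helpers.

```python
import torch.nn.functional as F


def oracle_feature_penalty(student_feats, oracle_feats, weight=0.1):
    """Penalize divergence between the trainable encoder's features and
    a frozen pre-trained encoder's features. A stand-in for MIRO's
    mutual-information bound, reduced here to fixed feature matching."""
    return weight * F.mse_loss(student_feats, oracle_feats.detach())


# Per training step (sketch):
# loss = task_loss + oracle_feature_penalty(model.encode(x), oracle.encode(x))
```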
- Self-balanced Learning For Domain Generalization [64.99791119112503]
Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics.
Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class.
We propose a self-balanced domain generalization framework that adaptively learns the weights of losses to alleviate the bias caused by different distributions of the multi-domain source data.
arXiv Detail & Related papers (2021-08-31T03:17:54Z)
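The self-balanced framework above learns its loss weights adaptively. The sketch below substitutes a simpler mechanism, directly learnable per-domain log-weights with an uncertainty-style penalty, to illustrate the idea; the class name and parameterization are not from the paper.

```python
import torch
import torch.nn as nn


class LearnableLossWeights(nn.Module):
    """Learn one positive weight per source domain and combine the
    per-domain losses with them. The subtracted log-weights act as an
    uncertainty-style penalty that prevents the all-zero solution."""

    def __init__(self, num_domains):
        super().__init__()
        self.log_w = nn.Parameter(torch.zeros(num_domains))

    def forward(self, per_domain_losses):
        # per_domain_losses: tensor of shape [num_domains]
        w = self.log_w.exp()
        return (w * per_domain_losses).sum() - self.log_w.sum()
```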
- Discriminative Domain-Invariant Adversarial Network for Deep Domain Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better prediction on unseen target data during training than state-of-the-art domain generalization approaches.
arXiv Detail & Related papers (2021-08-20T04:24:12Z)
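A standard building block for adversarial domain-invariant training such as DDIAN's is a domain discriminator trained through a gradient-reversal layer. Only that generic component is sketched below; DDIAN's specific discriminative design is not reproduced.

```python
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the
    backward pass: the classic gradient-reversal trick."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def domain_adversarial_loss(features, domain_labels, discriminator, lam=1.0):
    # The discriminator learns to predict the source domain; reversed
    # gradients push the feature extractor toward domain-invariance.
    logits = discriminator(GradReverse.apply(features, lam))
    return F.cross_entropy(logits, domain_labels)
```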
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via the proposed principle of meta variational information bottleneck, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
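The variational information bottleneck underlying MetaVIB penalizes the KL divergence between the encoder's Gaussian posterior and a standard normal prior. A minimal sketch of that penalty, with MetaVIB's meta-learning machinery omitted:

```python
import torch


def vib_kl(mu, log_var):
    """KL(N(mu, diag(exp(log_var))) || N(0, I)): the information-
    bottleneck penalty that MetaVIB's meta-variant builds on."""
    return 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1).mean()


# Sketch of the objective: task loss on z = mu + eps * exp(0.5 * log_var),
# plus beta * vib_kl(mu, log_var) to bottleneck the representation.
```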
- Learning to Learn Single Domain Generalization [18.72451358284104]
We propose a new method named adversarial domain augmentation to solve the Out-of-Distribution (OOD) generalization problem.
The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations.
To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint.
arXiv Detail & Related papers (2020-03-30T04:39:53Z)
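The adversarial domain augmentation in the last entry perturbs source samples by gradient ascent on the task loss to create "fictitious" yet "challenging" inputs. A minimal sketch of that step (omitting the WAE-based relaxation and the meta-learning scheme described above; the function name and step sizes are illustrative):

```python
import torch
import torch.nn.functional as F


def adversarial_augment(model, x, y, step_size=1.0, steps=5):
    """Ascend the task loss to make inputs harder for the current
    model. The original method additionally constrains the perturbation
    with a Wasserstein Auto-Encoder, which this sketch omits."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step_size * grad).detach().requires_grad_(True)
    return x_adv.detach()
```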