Failure Modes of Domain Generalization Algorithms
- URL: http://arxiv.org/abs/2111.13733v1
- Date: Fri, 26 Nov 2021 20:04:19 GMT
- Title: Failure Modes of Domain Generalization Algorithms
- Authors: Tigran Galstyan, Hrayr Harutyunyan, Hrant Khachatrian, Greg Ver Steeg,
Aram Galstyan
- Abstract summary: We propose an evaluation framework for domain generalization algorithms.
We show that the largest contributor to the generalization error varies across methods, datasets, regularization strengths and even training lengths.
- Score: 29.772370301145543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization algorithms use training data from multiple domains to
learn models that generalize well to unseen domains. While recently proposed
benchmarks demonstrate that most of the existing algorithms do not outperform
simple baselines, the established evaluation methods fail to expose the impact
of various factors that contribute to the poor performance. In this paper we
propose an evaluation framework for domain generalization algorithms that
allows decomposition of the error into components capturing distinct aspects of
generalization. Inspired by the prevalence of algorithms based on the idea of
domain-invariant representation learning, we extend the evaluation framework to
capture various types of failures in achieving invariance. We show that the
largest contributor to the generalization error varies across methods,
datasets, regularization strengths and even training lengths. We observe two
problems associated with the strategy of learning domain-invariant
representations. On Colored MNIST, most domain generalization algorithms fail
because they reach domain-invariance only on the training domains. On
Camelyon-17, domain-invariance degrades the quality of representations on
unseen domains. We hypothesize that focusing instead on tuning the classifier
on top of a rich representation can be a promising direction.
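As a rough illustration of what such an error decomposition can look like, here is a minimal sketch (this is our illustration, not the paper's exact framework; the three components and the accuracy figures are assumptions):

```python
# Illustrative sketch only -- not the paper's exact decomposition.
# It splits unseen-domain error into a within-domain generalization gap,
# a cross-domain transfer gap, and the error already present on training data.

def decompose_error(acc_train, acc_heldout_seen, acc_unseen):
    """All inputs are accuracies in [0, 1]."""
    total_error = 1.0 - acc_unseen
    in_domain_gap = acc_train - acc_heldout_seen   # overfitting within seen domains
    transfer_gap = acc_heldout_seen - acc_unseen   # failure to transfer across domains
    training_error = total_error - in_domain_gap - transfer_gap
    return {"training_error": training_error,
            "in_domain_gap": in_domain_gap,
            "transfer_gap": transfer_gap}

# Example: 99% train accuracy, 95% on held-out data from seen domains,
# 70% on the unseen domain -- the transfer gap dominates.
print(decompose_error(0.99, 0.95, 0.70))
# ~ {'training_error': 0.01, 'in_domain_gap': 0.04, 'transfer_gap': 0.25}
```

Which component dominates is exactly what, per the abstract, varies across methods, datasets, regularization strengths, and training lengths.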
Related papers
- Domain Adversarial Active Learning for Domain Generalization
Classification [8.003401798449337]
Domain generalization models aim to learn cross-domain knowledge from source-domain data in order to improve performance on unknown target domains.
Recent research has demonstrated that diverse and rich source domain samples can enhance domain generalization capability.
We propose a domain-adversarial active learning (DAAL) algorithm for classification tasks in domain generalization.
arXiv Detail & Related papers (2024-03-10T10:59:22Z)
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment
Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments demonstrates the strong performance and robustness of our model.
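For reference, the backdoor adjustment such causal models build on is the standard identity (taking Z as the confounder, e.g. the domain):

  P(Y | do(X)) = Σ_z P(Y | X, Z = z) P(Z = z)

i.e., the effect of X on Y is estimated by averaging over the confounder rather than conditioning on it.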
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- Randomized Adversarial Style Perturbations for Domain Generalization [49.888364462991234]
We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP).
The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, so that the model learns not to be misled by the unexpected styles observed in unseen target domains.
We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that it improves domain generalization performance, especially on large-scale benchmarks.
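A minimal sketch of such an adversarial style perturbation (our reading of the idea, not the authors' code; `head`, `num_classes`, and the single-step signed-gradient update are assumptions):

```python
import torch
import torch.nn.functional as F

def adversarial_style_perturbation(feat, head, num_classes, eps=0.1):
    # feat: (B, C, H, W) feature map; head: rest of the network, features -> logits.
    mu = feat.mean(dim=(2, 3), keepdim=True)            # per-channel "style" mean
    sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-6   # per-channel "style" std
    content = ((feat - mu) / sigma).detach()            # style-normalized content

    mu_adv = mu.detach().clone().requires_grad_(True)
    sigma_adv = sigma.detach().clone().requires_grad_(True)
    target = torch.randint(0, num_classes, (feat.size(0),))  # randomly selected class

    # Push the style statistics toward the random class (adversarial direction).
    loss = F.cross_entropy(head(content * sigma_adv + mu_adv), target)
    loss.backward()
    with torch.no_grad():
        mu_adv -= eps * mu_adv.grad.sign()
        sigma_adv -= eps * sigma_adv.grad.sign()

    # Re-style the content with the perturbed statistics for training.
    return content * sigma_adv.detach() + mu_adv.detach()
```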
arXiv Detail & Related papers (2023-04-04T17:07:06Z)
- Domain-aware Triplet loss in Domain Generalization [0.0]
Domain shift is caused by discrepancies in the distributions of the testing and training data.
We design a domain-aware triplet loss for domain generalization to help the model cluster similar semantic features.
Our algorithm is designed to disperse domain information in the embedding space.
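A hedged sketch of a domain-aware triplet loss in this spirit (our illustration, not the paper's exact formulation): positives are same-class pairs from other domains, negatives are same-domain pairs from other classes, so semantics cluster while domain information disperses.

```python
import torch
import torch.nn.functional as F

def domain_aware_triplet_loss(emb, labels, domains, margin=0.3):
    # emb: (N, D) embeddings; labels, domains: (N,) integer tensors.
    dist = torch.cdist(emb, emb)  # pairwise L2 distances
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    same_domain = domains.unsqueeze(0) == domains.unsqueeze(1)
    pos_mask = same_class & ~same_domain   # same class, different domain
    neg_mask = ~same_class & same_domain   # different class, same domain

    losses = []
    for i in range(emb.size(0)):
        if pos_mask[i].any() and neg_mask[i].any():
            hardest_pos = dist[i][pos_mask[i]].max()  # farthest cross-domain positive
            hardest_neg = dist[i][neg_mask[i]].min()  # closest same-domain negative
            losses.append(F.relu(hardest_pos - hardest_neg + margin))
    return torch.stack(losses).mean() if losses else emb.sum() * 0.0
```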
arXiv Detail & Related papers (2023-03-01T14:02:01Z)
- Multi-Domain Long-Tailed Learning by Augmenting Disentangled
Representations [80.76164484820818]
There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
Built upon a proposed selective balanced sampling strategy, TALLY achieves this by mixing the semantic representation of one example with the domain-associated nuisances of another.
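The mixing step can be sketched as follows (illustrative only; how TALLY actually disentangles and recombines the two parts is more involved):

```python
import torch

def mix_semantics_with_nuisances(semantic, nuisance):
    # semantic, nuisance: (B, D) disentangled halves of each example's representation.
    # Pair every example's semantics with the domain nuisance of a random other example.
    perm = torch.randperm(semantic.size(0))
    return semantic + nuisance[perm]  # additive recombination, purely for illustration

# Usage sketch: augmented = mix_semantics_with_nuisances(sem, nui)
#               logits = classifier(augmented)
```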
arXiv Detail & Related papers (2022-10-25T21:54:26Z)
- Discriminative Domain-Invariant Adversarial Network for Deep Domain
Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better prediction on unseen target data during training than state-of-the-art domain generalization approaches.
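The abstract does not spell out the architecture; a standard building block for this family of domain-adversarial methods is the gradient reversal layer, sketched below (the generic DANN-style trick, not necessarily DDIAN's exact mechanism):

```python
import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass, negated (scaled) gradient in the backward
    # pass, so the feature extractor learns to fool the domain discriminator.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: domain_logits = domain_discriminator(grad_reverse(features))
```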
arXiv Detail & Related papers (2021-08-20T04:24:12Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the recently proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
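A minimal sketch of a joint objective in this spirit (our illustration of "invariant representations and risks", not the published LIRR objective; the variance-based risk-alignment term is an assumption):

```python
import torch

def joint_invariance_objective(per_domain_risks, repr_divergence, lam=1.0, gamma=1.0):
    # per_domain_risks: list of scalar task losses, one per source domain.
    # repr_divergence: penalty measuring mismatch between domain feature distributions.
    risks = torch.stack(per_domain_risks)
    risk_alignment = risks.var(unbiased=False)  # aligned risks -> low variance across domains
    return risks.mean() + lam * repr_divergence + gamma * risk_alignment
```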
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Learning to Learn with Variational Information Bottleneck for Domain
Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via the proposed principle of meta variational information bottleneck, which we call MetaVIB.
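The information-bottleneck part can be sketched generically (a plain VIB objective as a stand-in; MetaVIB's meta-learning wrapper is not shown):

```python
import torch
import torch.nn.functional as F

def vib_loss(mu, logvar, logits, targets, beta=1e-3):
    # Task loss plus a KL penalty that bottlenecks the stochastic code
    # z ~ N(mu, exp(logvar)) toward the standard normal prior.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1).mean()
    return F.cross_entropy(logits, targets) + beta * kl

# z is sampled with the reparameterization trick:
# z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
```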
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
- Representation via Representations: Domain Generalization via
Adversarially Learned Invariant Representations [14.751829773340537]
We examine adversarial censoring techniques for learning invariant representations from multiple "studies" (or domains).
In many contexts, such as medical forecasting, domain generalization from studies in populous areas provides a flavor of fairness not anticipated in previous work on algorithmic fairness.
arXiv Detail & Related papers (2020-06-20T02:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.