Constrained Maximum Cross-Domain Likelihood for Domain Generalization
- URL: http://arxiv.org/abs/2210.04155v1
- Date: Sun, 9 Oct 2022 03:41:02 GMT
- Title: Constrained Maximum Cross-Domain Likelihood for Domain Generalization
- Authors: Jianxin Lin, Yongqiang Tang, Junping Wang and Wensheng Zhang
- Abstract summary: Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
- Score: 14.91361835243516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization, a topic of growing recent interest, aims to learn
on multiple source domains a generalizable model that is expected to perform
well on unseen test domains. Great efforts have been made to learn
domain-invariant features by aligning distributions across domains. However,
existing works are often designed based on some relaxed conditions which are
generally hard to satisfy and fail to realize the desired joint distribution
alignment. In this paper, we propose a novel domain generalization method,
which originates from an intuitive idea that a domain-invariant classifier can
be learned by minimizing the KL-divergence between posterior distributions from
different domains. To enhance the generalizability of the learned classifier,
we formalize the optimization objective as an expectation computed on the
ground-truth marginal distribution. Nevertheless, this objective presents two
deficiencies: one is the side effect of entropy increase in the KL-divergence,
and the other is the unavailability of the ground-truth marginal
distribution. For the former, we introduce a term named maximum in-domain
likelihood to maintain the discrimination of the learned domain-invariant
representation space. For the latter, we approximate the ground-truth marginal
distribution with source domains under a reasonable convex hull assumption.
Finally, we deduce a Constrained Maximum Cross-domain Likelihood (CMCL)
optimization problem, solving which naturally aligns the joint distributions.
An alternating optimization strategy is carefully designed to
approximately solve this optimization problem. Extensive experiments on four
standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and
miniDomainNet, highlight the superior performance of our method.
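To make the objective concrete, the following is a minimal PyTorch sketch of what a CMCL-style loss could look like. It is an illustration under assumptions, not the authors' implementation: the function name `cmcl_style_loss`, the use of one domain-specific posterior (classifier head) per source domain, the pairwise symmetric KL, and the `kl_weight` trade-off are all hypothetical, and the paper's alternating optimization strategy is omitted.

```python
import itertools

import torch
import torch.nn.functional as F


def cmcl_style_loss(mixed_logits, sup_logits, sup_labels, kl_weight=1.0):
    """Illustrative CMCL-style objective (a sketch, not the authors' code).

    mixed_logits: list of [B, C] tensors, one per source domain -- each
        domain-specific posterior evaluated on the SAME batch, drawn from a
        convex mixture of the source domains (standing in for the
        unavailable ground-truth marginal distribution).
    sup_logits, sup_labels: per-domain logits and labels on each domain's
        own data, for the maximum in-domain likelihood term.
    Assumes at least two source domains.
    """
    # Maximum in-domain likelihood: cross-entropy of each domain-specific
    # posterior on its own labeled data keeps the learned representation
    # space discriminative.
    ce = torch.stack(
        [F.cross_entropy(z, y) for z, y in zip(sup_logits, sup_labels)]
    ).mean()

    # Cross-domain posterior alignment: symmetric KL between every pair of
    # domain posteriors on the shared mixed batch.
    kl_terms = []
    for p, q in itertools.combinations(mixed_logits, 2):
        lp, lq = F.log_softmax(p, dim=1), F.log_softmax(q, dim=1)
        kl_terms.append(
            0.5 * (F.kl_div(lp, lq, log_target=True, reduction="batchmean")
                   + F.kl_div(lq, lp, log_target=True, reduction="batchmean"))
        )
    kl = torch.stack(kl_terms).mean()

    return ce + kl_weight * kl
```

In this sketch the cross-entropy term stands in for the maximum in-domain likelihood constraint, counteracting the entropy-increasing side effect of the KL term, while evaluating every domain posterior on a batch mixed from the source domains mimics the convex hull approximation of the ground-truth marginal distribution.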
Related papers
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose two methods: effective gap estimation to guide the selection of a better hypothesis for the target, and direct gap minimization by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- Localized Adversarial Domain Generalization [83.4195658745378]
Adversarial domain generalization is a popular approach to domain generalization.
We propose localized adversarial domain generalization with space compactness maintenance (LADG).
We conduct comprehensive experiments on the Wilds DG benchmark to validate our approach.
arXiv Detail & Related papers (2022-05-09T08:30:31Z)
- Class-conditioned Domain Generalization via Wasserstein Distributional Robust Optimization [12.10885662305154]
Given multiple source domains, domain generalization aims at learning a universal model that performs well on any unseen but related target domain.
Existing approaches are not sufficiently robust when the variation of conditional distributions given the same class is large.
We extend the concept of distributional robust optimization to solve the class-conditional domain generalization problem.
arXiv Detail & Related papers (2021-09-08T14:23:03Z)
- Discriminative Domain-Invariant Adversarial Network for Deep Domain Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better prediction on unseen target data during training compared to state-of-the-art domain generalization approaches.
arXiv Detail & Related papers (2021-08-20T04:24:12Z)
- Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness to unseen scenarios.
To overcome the limitation of existing methods, we propose domain dynamic adjustment meta-learning (D2AM), which works without domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches have been proposed that learn domain-invariant representations across the source domains, but these fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method with a theoretical guarantee that the target error is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation leverages well-established source domain information to facilitate learning on the unlabeled target domain.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic structure of the data.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)