Domain Generalisation via Risk Distribution Matching
- URL: http://arxiv.org/abs/2310.18598v1
- Date: Sat, 28 Oct 2023 05:23:55 GMT
- Title: Domain Generalisation via Risk Distribution Matching
- Authors: Toan Nguyen, Kien Do, Bao Duong, Thin Nguyen
- Abstract summary: We propose a novel approach for domain generalisation (DG) leveraging risk distributions to characterise domains.
In testing, we may observe similar, or even larger, divergences between risk distributions.
We show that Risk Distribution Matching (RDM) shows superior generalisation capability over state-of-the-art DG methods.
- Score: 17.334794920092577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel approach for domain generalisation (DG) leveraging risk
distributions to characterise domains, thereby achieving domain invariance. In
our findings, risk distributions effectively highlight differences between
training domains and reveal their inherent complexities. In testing, we may
observe similar, or even larger, divergences between risk distributions.
Hence, we propose a compelling proposition: minimising the
divergences between risk distributions across training domains leads to robust
invariance for DG. The key rationale behind this concept is that a model,
trained on domain-invariant or stable features, may consistently produce
similar risk distributions across various domains. Building upon this idea, we
propose Risk Distribution Matching (RDM). Using the maximum mean discrepancy
(MMD) distance, RDM aims to minimise the variance of risk distributions across
training domains. However, when the number of domains increases, the direct
optimisation of variance leads to linear growth in MMD computations, resulting
in inefficiency. Instead, we propose an approximation that requires only one
MMD computation, by aligning just two distributions: that of the worst-case
domain and the aggregated distribution from all domains. Notably, this method
empirically outperforms optimising distributional variance while being
computationally more efficient. Unlike conventional DG matching algorithms, RDM
stands out for its enhanced efficacy by concentrating on scalar risk
distributions, sidestepping the high-dimensional challenges of feature or
gradient matching. Our extensive experiments on standard benchmark
datasets demonstrate that RDM shows superior generalisation capability over
state-of-the-art DG methods.
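The abstract's core recipe can be sketched in a few lines of NumPy: treat each domain's per-sample losses as an empirical (scalar) risk distribution, then penalise the squared MMD between the worst-case domain's risks and the pool of all domains' risks, so only one MMD computation is needed. This is a minimal illustrative sketch, not the authors' implementation: the function names, the Gaussian-kernel bandwidth `sigma`, and selecting the highest-average-risk domain as "worst-case" are all assumptions made here for clarity.

```python
import numpy as np

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two 1-D samples
    under a Gaussian (RBF) kernel; always non-negative."""
    def gram(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-d**2 / (2.0 * sigma**2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

def rdm_penalty(domain_risks):
    """One-MMD approximation: align the worst-case domain's risk
    distribution with the risks pooled over all training domains."""
    worst = max(domain_risks, key=lambda r: r.mean())  # assumed worst-case criterion
    pooled = np.concatenate(domain_risks)
    return mmd2(worst, pooled)

# Toy usage: three domains' per-sample losses; the third is an outlier domain,
# so the penalty is driven by its mismatch with the pooled distribution.
rng = np.random.default_rng(0)
risks = [rng.normal(mu, 0.1, size=50) for mu in (0.2, 0.3, 0.8)]
penalty = rdm_penalty(risks)
```

In training, this penalty would be added to the averaged empirical risk, trading off fit against the cross-domain divergence the abstract describes.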
Related papers
- Domain Agnostic Conditional Invariant Predictions for Domain Generalization [20.964740750976667]
We propose a Discriminant Risk Minimization (DRM) theory and the corresponding algorithm to capture the invariant features without domain labels.
In DRM theory, we prove that reducing the discrepancy of prediction distribution between overall source domain and any subset of it can contribute to obtaining invariant features.
We evaluate our algorithm against various domain generalization methods on multiple real-world datasets.
arXiv Detail & Related papers (2024-06-09T02:38:52Z)
- Moderately Distributional Exploration for Domain Generalization [32.57429594854056]
We show that MODE can endow models with provable generalization performance on unknown target domains.
experimental results show that MODE achieves competitive performance compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-27T06:50:15Z)
- Distributionally Robust Domain Adaptation [12.02023514105999]
Domain Adaptation (DA) has recently received significant attention due to its potential to adapt a learning model across source and target domains with mismatched distributions.
In this paper, we propose DRDA, a distributionally robust domain adaptation method.
arXiv Detail & Related papers (2022-10-30T17:29:22Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
The other method is minimizing the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in principle for adaptive integration of different modalities and produces a trustworthy regression result.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution for person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)
- Few-shot Domain Adaptation by Causal Mechanism Transfer [107.08605582020866]
We study few-shot supervised domain adaptation (DA) for regression problems, where only a few labeled target domain data and many labeled source domain data are available.
Many of the current DA methods base their transfer assumptions on either parametrized distribution shift or apparent distribution similarities.
We propose mechanism transfer, a meta-distributional scenario in which a data generating mechanism is invariant among domains.
arXiv Detail & Related papers (2020-02-10T02:16:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.