Set Valued Predictions For Robust Domain Generalization
- URL: http://arxiv.org/abs/2507.03146v1
- Date: Thu, 03 Jul 2025 19:57:09 GMT
- Title: Set Valued Predictions For Robust Domain Generalization
- Authors: Ron Tsibulsky, Daniel Nevo, Uri Shalit
- Abstract summary: We argue that set-valued predictors could be leveraged to enhance robustness across unseen domains. We introduce a theoretical framework defining successful set prediction in the Domain Generalization setting. We propose a practical optimization method, compatible with modern learning architectures, that balances robust performance on unseen domains with small prediction set sizes.
- Score: 10.517157722122452
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the impressive advancements in modern machine learning, achieving robustness in Domain Generalization (DG) tasks remains a significant challenge. In DG, models are expected to perform well on samples from unseen test distributions (also called domains) by learning from multiple related training distributions. Most existing approaches to this problem rely on single-valued predictions, which inherently limit their robustness. We argue that set-valued predictors could be leveraged to enhance robustness across unseen domains, while also taking into account that these sets should be as small as possible. We introduce a theoretical framework defining successful set prediction in the DG setting, focusing on meeting a predefined performance criterion across as many domains as possible, and provide theoretical insights into the conditions under which such domain generalization is achievable. We further propose a practical optimization method, compatible with modern learning architectures, that balances robust performance on unseen domains with small prediction set sizes. We evaluate our approach on several real-world datasets from the WILDS benchmark, demonstrating its potential as a promising direction for robust domain generalization.
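The abstract describes the approach only at a high level. As a loose illustration of set-valued prediction in general (not the authors' actual objective), one can threshold softmax scores to form a label set per input and then measure the coverage/size trade-off; the threshold `tau` and all function names below are assumptions.

```python
import torch
import torch.nn.functional as F

def prediction_set(logits: torch.Tensor, tau: float = 0.2):
    """Return, for each input, the set of labels whose softmax score is >= tau.

    Lowering tau gives larger, more conservative sets; raising it gives
    smaller sets that may miss the true label under domain shift.
    """
    probs = F.softmax(logits, dim=-1)
    return [torch.nonzero(p >= tau).flatten().tolist() for p in probs]

def coverage_and_size(logits: torch.Tensor, labels: torch.Tensor, tau: float):
    """Empirical coverage (true label contained in the set) and mean set size."""
    probs = F.softmax(logits, dim=-1)
    in_set = probs.gather(1, labels.unsqueeze(1)).squeeze(1) >= tau
    sizes = (probs >= tau).sum(dim=1).float()
    return in_set.float().mean().item(), sizes.mean().item()
```

Evaluating `coverage_and_size` separately on each training domain corresponds loosely to the paper's stated goal of meeting a performance criterion across as many domains as possible while keeping sets small.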
Related papers
- A Unified Analysis of Generalization and Sample Complexity for Semi-Supervised Domain Adaptation [1.9567015559455132]
Domain adaptation seeks to leverage the abundant label information in a source domain to improve classification performance in a target domain with limited labels. Most existing theoretical analyses focus on simplified settings where the source and target domains share the same input space. We present a comprehensive theoretical study of domain adaptation algorithms based on domain alignment.
arXiv Detail & Related papers (2025-07-30T12:53:08Z)
- TAROT: Towards Essentially Domain-Invariant Robustness with Theoretical Justification [2.3718468096686802]
TAROT is designed to enhance both domain adaptability and robustness. It achieves superior performance on the challenging DomainNet dataset. Results highlight the broader applicability of our approach in real-world domain adaptation scenarios.
arXiv Detail & Related papers (2025-05-10T09:43:04Z)
- Partial Transportability for Domain Generalization [56.37032680901525]
Building on the theory of partial identification and transportability, this paper introduces new results for bounding the value of a functional of the target distribution. Our contribution is to provide the first general estimation technique for transportability problems. We propose a gradient-based optimization scheme for making scalable inferences in practice.
arXiv Detail & Related papers (2025-03-30T22:06:37Z)
- Moderately Distributional Exploration for Domain Generalization [32.57429594854056]
We show that MODE can endow models with provable generalization performance on unknown target domains.
Experimental results show that MODE achieves competitive performance compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-27T06:50:15Z)
- Probable Domain Generalization via Quantile Risk Minimization [90.15831047587302]
Domain generalization seeks predictors which perform well on unseen test distributions.
We propose a new probabilistic framework for DG where the goal is to learn predictors that perform well with high probability.
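Going only by the summary above, a minimal sketch of the quantile idea: rather than minimizing the average per-domain risk, minimize an empirical alpha-quantile of the per-domain risks, so the predictor does well on at least an alpha-fraction of domains with high probability. Function names and the direct use of `torch.quantile` are assumptions, not necessarily the paper's estimator.

```python
import torch

def quantile_risk(per_domain_losses: torch.Tensor, alpha: float = 0.9) -> torch.Tensor:
    """Empirical alpha-quantile of per-domain risks.

    per_domain_losses has shape (num_domains,), each entry the mean loss on
    one training domain. torch.quantile interpolates, so gradients flow to
    the losses near the quantile.
    """
    return torch.quantile(per_domain_losses, alpha)

# Hypothetical training step (model, loss_fn, domain_batches assumed):
# losses = torch.stack([loss_fn(model(xb), yb) for xb, yb in domain_batches])
# quantile_risk(losses, alpha=0.9).backward()
```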
arXiv Detail & Related papers (2022-07-20T14:41:09Z)
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
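WDRDG's exact formulation is not reproduced in this summary. As a hedged illustration of Wasserstein distributional robustness in general, the sketch below follows the common WRM-style surrogate of Sinha et al. (inner gradient ascent on an input perturbation, penalized by a squared transport cost); all hyperparameters are assumptions.

```python
import torch

def wasserstein_robust_loss(model, loss_fn, x, y, gamma=1.0, steps=5, lr=0.1):
    """Approximate the sup over a Wasserstein ball: ascend on an input
    perturbation delta while paying a transport-cost penalty gamma*||delta||^2."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        obj = loss_fn(model(x + delta), y) - gamma * (delta ** 2).sum() / len(x)
        (grad,) = torch.autograd.grad(obj, delta)
        delta = (delta + lr * grad).detach().requires_grad_(True)
    # train the model on the worst-case perturbed batch
    return loss_fn(model(x + delta.detach()), y)
```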
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
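DARE's convex objective is not given in this summary; as a rough sketch of the "adjust each domain, then fit one shared linear predictor" idea (the ZCA whitening and ridge penalty below are assumptions, not the exact DARE formulation):

```python
import numpy as np

def zca_whiten(X: np.ndarray) -> np.ndarray:
    """Whiten one domain's features to zero mean and identity covariance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    U, s, _ = np.linalg.svd(cov)
    return Xc @ (U @ np.diag(1.0 / np.sqrt(s + 1e-8)) @ U.T)

def fit_pooled_ridge(domains, lam: float = 1e-2) -> np.ndarray:
    """Fit a single linear predictor on per-domain-whitened, pooled data.

    domains: iterable of (X_d, y_d) pairs, one per training domain.
    """
    X = np.vstack([zca_whiten(Xd) for Xd, _ in domains])
    y = np.concatenate([yd for _, yd in domains])
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```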
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration [2.8935588665357077]
We propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift.
We introduce a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions.
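For a flavor of the entropy-encouraging term only (the adversarial calibration term is omitted here, and the weight `beta` is an assumed hyperparameter, not the paper's):

```python
import torch
import torch.nn.functional as F

def entropy_encouraging_loss(logits, labels, beta: float = 0.1):
    """Cross-entropy minus a small entropy bonus, so the model is pushed
    to be correct without becoming overconfident."""
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    return ce - beta * entropy
```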
arXiv Detail & Related papers (2020-12-20T13:39:29Z)
- Target-Embedding Autoencoders for Supervised Representation Learning [111.07204912245841]
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional.
We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features and predictive of targets.
arXiv Detail & Related papers (2020-01-23T02:37:10Z)
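A minimal sketch of the TEA idea under assumed MLP components and dimensions (not the paper's architecture): a target encoder, a target decoder, and a feature-to-latent predictor are trained jointly, so the latent is both reconstructable into targets and predictable from features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetEmbeddingAE(nn.Module):
    def __init__(self, x_dim=128, y_dim=64, z_dim=16):
        super().__init__()
        self.encode_y = nn.Sequential(nn.Linear(y_dim, z_dim), nn.ReLU(), nn.Linear(z_dim, z_dim))
        self.decode_y = nn.Linear(z_dim, y_dim)  # latent must be predictive of targets
        self.predict_z = nn.Sequential(nn.Linear(x_dim, z_dim), nn.ReLU(), nn.Linear(z_dim, z_dim))

    def loss(self, x, y, lam: float = 1.0):
        z = self.encode_y(y)
        recon = F.mse_loss(self.decode_y(z), y)   # reconstruct targets from latent
        align = F.mse_loss(self.predict_z(x), z)  # latent predictable from features
        return recon + lam * align

    def forward(self, x):
        # inference path: features -> latent -> predicted targets
        return self.decode_y(self.predict_z(x))
```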