Domain Generalisation via Imprecise Learning
- URL: http://arxiv.org/abs/2404.04669v2
- Date: Thu, 30 May 2024 07:11:03 GMT
- Title: Domain Generalisation via Imprecise Learning
- Authors: Anurag Singh, Siu Lun Chau, Shahine Bouabid, Krikamol Muandet
- Abstract summary: Out-of-distribution generalisation is challenging because it involves not only learning from empirical data, but also deciding among various notions of generalisation.
We introduce the Imprecise Domain Generalisation framework, featuring an imprecise risk optimisation that allows learners to stay imprecise.
Supported by both theoretical and empirical evidence, our work showcases the benefits of integrating imprecision into domain generalisation.
- Score: 11.327964663415306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) generalisation is challenging because it involves not only learning from empirical data, but also deciding among various notions of generalisation, e.g., optimising the average-case risk, worst-case risk, or interpolations thereof. While this choice should in principle be made by the model operator like medical doctors, this information might not always be available at training time. The institutional separation between machine learners and model operators leads to arbitrary commitments to specific generalisation strategies by machine learners due to these deployment uncertainties. We introduce the Imprecise Domain Generalisation framework to mitigate this, featuring an imprecise risk optimisation that allows learners to stay imprecise by optimising against a continuous spectrum of generalisation strategies during training, and a model framework that allows operators to specify their generalisation preference at deployment. Supported by both theoretical and empirical evidence, our work showcases the benefits of integrating imprecision into domain generalisation.
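For intuition, the continuous spectrum of generalisation strategies can be pictured as an interpolation between the average-case and worst-case risk over training domains. Below is a minimal PyTorch sketch of that idea, assuming per-domain batches and a scalar preference `lam` sampled afresh each step; the names (`aggregate_risk`, `imprecise_step`) are illustrative, and the paper's actual framework additionally lets the model condition on the operator's preference at deployment.
```python
import torch

def aggregate_risk(domain_risks: torch.Tensor, lam: float) -> torch.Tensor:
    # lam = 0 recovers the average-case risk, lam = 1 the worst-case risk.
    return (1 - lam) * domain_risks.mean() + lam * domain_risks.max()

def imprecise_step(model, optimiser, domain_batches, loss_fn):
    # One empirical risk per training domain.
    domain_risks = torch.stack(
        [loss_fn(model(x), y) for x, y in domain_batches]
    )
    lam = torch.rand(()).item()  # preference unknown at training: stay imprecise
    risk = aggregate_risk(domain_risks, lam)
    optimiser.zero_grad()
    risk.backward()
    optimiser.step()
    return risk.item()
```
An operator with a known preference would instead fix `lam` at deployment rather than sample it.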
Related papers
- On the Generalization of Preference Learning with DPO [17.420727709895736]
Large language models (LLMs) have demonstrated remarkable capabilities but often struggle to align with human preferences.
Preference learning trains models to distinguish between preferred and non-preferred responses based on human feedback.
This paper introduces a new theoretical framework to analyze the generalization guarantees of models trained with direct preference optimization (DPO)
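For reference, the standard DPO objective that the paper analyzes can be written in a few lines of PyTorch (this is the usual formulation, not the paper's new theoretical machinery):
```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # logp_*: summed token log-probs of the preferred (w) and dispreferred (l)
    # responses under the trained policy; ref_logp_*: under the frozen reference.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()
```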
arXiv Detail & Related papers (2024-08-06T22:11:00Z)
- Class-wise Generalization Error: an Information-Theoretic Analysis [22.877440350595222]
We study the class-generalization error, which quantifies the generalization performance of each individual class.
We empirically validate our proposed bounds in different neural networks and show that they accurately capture the complex class-generalization error behavior.
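A minimal sketch of the quantity being studied, assuming per-example 0/1 losses on the train and test splits (`err_tr`, `err_te` are illustrative names; the paper's contribution is information-theoretic bounds on this gap, not this estimator):
```python
import numpy as np

def class_generalization_gaps(y_tr, err_tr, y_te, err_te, num_classes):
    # err_tr / err_te: per-example 0/1 losses; assumes every class appears
    # in both splits.
    gaps = np.zeros(num_classes)
    for c in range(num_classes):
        gaps[c] = err_te[y_te == c].mean() - err_tr[y_tr == c].mean()
    return gaps
```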
arXiv Detail & Related papers (2024-01-05T17:05:14Z)
- Advocating for the Silent: Enhancing Federated Generalization for Non-Participating Clients [38.804196122833645]
This paper unveils an information-theoretic generalization framework for Federated Learning.
It quantifies generalization errors by evaluating the information entropy of local distributions.
Inspired by our deduced generalization bounds, we introduce a weighted aggregation approach and a duo of client selection strategies.
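One plausible reading of the entropy-based weighting, as a sketch only (the paper's exact weighting and client-selection rules may differ; `label_hists` is an assumed per-client label histogram):
```python
import numpy as np

def entropy_weighted_average(client_params, label_hists):
    # client_params: per-client lists of layer arrays; label_hists: per-client
    # label histograms. Clients with higher local label entropy get more weight.
    ws = []
    for hist in label_hists:
        p = np.asarray(hist, dtype=float)
        p = p / p.sum()
        ws.append(-(p[p > 0] * np.log(p[p > 0])).sum())
    w = np.asarray(ws)
    w = w / w.sum()
    return [sum(wi * layer for wi, layer in zip(w, layers))
            for layers in zip(*client_params)]
```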
arXiv Detail & Related papers (2023-10-11T03:39:56Z)
- Domain Generalization without Excess Empirical Risk [83.26052467843725]
A common approach is to design a data-driven surrogate penalty that captures generalization, and to minimize the empirical risk jointly with the penalty.
We argue that a significant failure mode of this recipe is an excess risk due to an erroneous penalty or hardness in joint optimization.
We present an approach that eliminates this problem: instead of jointly minimizing the empirical risk and the penalty, we minimize the penalty under the constraint that the empirical risk remains optimal.
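A simple way to picture "penalty under the constraint of optimal empirical risk": first obtain the unconstrained optimum `r_star` of the empirical risk, then minimise the penalty with a hinge on constraint violation. This is an illustrative reformulation, not the paper's exact optimiser:
```python
import torch

def constrained_penalty_loss(risk, penalty, r_star, eps=1e-3, rho=10.0):
    # Phase 1 (not shown) minimises the empirical risk alone to obtain r_star.
    # Phase 2 minimises the penalty while a hinge keeps risk <= r_star + eps.
    violation = torch.clamp(risk - (r_star + eps), min=0.0)
    return penalty + rho * violation
```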
arXiv Detail & Related papers (2023-08-30T08:46:46Z)
- On the benefits of representation regularization in invariance based domain generalization [6.197602794925773]
Domain generalization aims to alleviate the prediction gap between the observed and unseen environments.
In this paper, we reveal that merely learning an invariant representation is vulnerable to unseen environments.
Our analysis further inspires an efficient regularization method to improve the robustness in domain generalization.
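To make the setting concrete, here is a sketch combining a standard IRMv1-style invariance penalty with an additional representation regulariser; the specific regulariser below (feature norm) is a placeholder, not the paper's proposed one:
```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1 penalty: squared gradient of the environment risk w.r.t. a
    # dummy classifier scale (standard formulation).
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    (grad,) = torch.autograd.grad(
        F.cross_entropy(logits * scale, y), scale, create_graph=True
    )
    return (grad ** 2).sum()

def objective(env_logits, env_labels, env_feats, lam=1.0, mu=0.1):
    erm = sum(F.cross_entropy(l, y) for l, y in zip(env_logits, env_labels))
    inv = sum(irm_penalty(l, y) for l, y in zip(env_logits, env_labels))
    reg = sum(f.pow(2).mean() for f in env_feats)  # placeholder regulariser
    return erm + lam * inv + mu * reg
```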
arXiv Detail & Related papers (2021-05-30T13:13:55Z)
- Causally-motivated Shortcut Removal Using Auxiliary Labels [63.686580185674195]
A key challenge in learning such risk-invariant predictors is shortcut learning.
We propose a flexible, causally-motivated approach to address this challenge.
We show both theoretically and empirically that this causally-motivated regularization scheme yields robust predictors.
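A sketch of one causally-motivated regulariser in this spirit: within each label value, penalise differences between feature statistics across the auxiliary-label groups, so the representation cannot lean on the shortcut. An MMD-based penalty is typical in this line of work; the mean-matching version below is a simplification:
```python
import torch

def shortcut_penalty(feats, y, aux):
    # Within each label value, match feature means across the two
    # auxiliary-label groups so the shortcut carries no signal.
    penalty = feats.new_zeros(())
    for c in y.unique():
        m = y == c
        f0, f1 = feats[m & (aux == 0)], feats[m & (aux == 1)]
        if len(f0) > 0 and len(f1) > 0:
            penalty = penalty + (f0.mean(0) - f1.mean(0)).pow(2).sum()
    return penalty
```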
arXiv Detail & Related papers (2021-05-13T16:58:45Z)
- An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization [53.592597682854944]
We recast generalization over sub-groups as an online game between a player minimizing risk and an adversary presenting new test distributions.
We show that ERM is provably minimax-optimal for both the interpolation and extrapolation tasks.
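The adversary side of such a game can be sketched with a standard Hedge / exponentiated-gradient update over group weights (illustrative; the paper's contribution is the minimax analysis, not an algorithm like this):
```python
import numpy as np

def adversary_weights(group_risk_history, eta=0.5):
    # group_risk_history: rounds x groups array of the player's past risks.
    # Hedge upweights groups where the player has suffered higher risk.
    cum = np.asarray(group_risk_history).sum(axis=0)
    w = np.exp(eta * cum)
    return w / w.sum()
```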
arXiv Detail & Related papers (2021-02-25T19:06:48Z)
- In Search of Robust Measures of Generalization [79.75709926309703]
We develop bounds on generalization error, optimization error, and excess risk.
When evaluated empirically, most of these bounds are numerically vacuous.
We argue that generalization measures should instead be evaluated within the framework of distributional robustness.
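One simple instantiation of that proposal, as a sketch: score a generalization measure by its worst-case error across environments rather than its average agreement (`measure_vals` and `true_gaps` are assumed to be aligned per-environment arrays):
```python
import numpy as np

def robust_score(measure_vals, true_gaps):
    # Worst-case (over environments) absolute error between the measure's
    # prediction and the actual generalization gap.
    errors = np.abs(np.asarray(measure_vals) - np.asarray(true_gaps))
    return errors.max()
```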
arXiv Detail & Related papers (2020-10-22T17:54:25Z)
- Dynamics Generalization via Information Bottleneck in Deep Reinforcement Learning [90.93035276307239]
We propose an information theoretic regularization objective and an annealing-based optimization method to achieve better generalization ability in RL agents.
We demonstrate the extreme generalization benefits of our approach in different domains ranging from maze navigation to robotic tasks.
This work provides a principled way to improve generalization in RL by gradually removing information that is redundant for task-solving.
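The objective has the standard variational-information-bottleneck form; a sketch, with an assumed linear warm-up schedule for beta (the paper's exact annealing may differ):
```python
import torch

def ib_loss(task_loss, mu, logvar, beta):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()
    return task_loss + beta * kl

def annealed_beta(step, total_steps, beta_max=1e-3):
    # Linear warm-up: remove redundant information gradually, not all at once.
    return beta_max * min(1.0, step / (0.5 * total_steps))
```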
arXiv Detail & Related papers (2020-08-03T02:24:20Z)
- Learning to Learn Single Domain Generalization [18.72451358284104]
We propose a new method named adversarial domain augmentation to solve this single-source Out-of-Distribution (OOD) generalization problem.
The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations.
To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint.
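A sketch of the adversarial augmentation step, assuming a differentiable distance `wae_dist` supplied by a pre-trained Wasserstein Auto-Encoder (hypothetical helper); the perturbation raises the task loss while staying semantically close to the source domain:
```python
import torch

def fictitious_domain(x, y, model, loss_fn, wae_dist,
                      alpha=0.5, gamma=1.0, steps=5):
    # Raise task loss ("challenging") while wae_dist keeps x_adv close to the
    # source manifold ("fictitious" but plausible).
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        obj = loss_fn(model(x_adv), y) - gamma * wae_dist(x_adv, x)
        (grad,) = torch.autograd.grad(obj, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()
```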
arXiv Detail & Related papers (2020-03-30T04:39:53Z)
- Target-Embedding Autoencoders for Supervised Representation Learning [111.07204912245841]
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional.
We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features as well as predictive of targets.
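A minimal sketch of the TEA idea with linear maps (layer sizes and losses are illustrative): the latent code is trained to be both reconstructive of the target and predictable from the features:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TEA(nn.Module):
    def __init__(self, x_dim, y_dim, z_dim):
        super().__init__()
        self.encode = nn.Linear(y_dim, z_dim)   # targets  -> latent
        self.decode = nn.Linear(z_dim, y_dim)   # latent   -> targets
        self.predict = nn.Linear(x_dim, z_dim)  # features -> latent

    def loss(self, x, y):
        z = self.encode(y)
        recon = F.mse_loss(self.decode(z), y)   # latent reconstructs targets
        align = F.mse_loss(self.predict(x), z)  # latent predictable from x
        return recon + align
```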
arXiv Detail & Related papers (2020-01-23T02:37:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.