Transformation-Invariant Learning and Theoretical Guarantees for OOD Generalization
- URL: http://arxiv.org/abs/2410.23461v1
- Date: Wed, 30 Oct 2024 20:59:57 GMT
- Title: Transformation-Invariant Learning and Theoretical Guarantees for OOD Generalization
- Authors: Omar Montasser, Han Shao, Emmanuel Abbe
- Abstract summary: This paper focuses on a distribution shift setting where train and test distributions can be related by classes of (data) transformation maps.
We establish learning rules and algorithmic reductions to Empirical Risk Minimization (ERM).
We highlight that the learning rules we derive offer a game-theoretic viewpoint on distribution shift.
- Score: 34.036655200677664
- License:
- Abstract: Learning with identical train and test distributions has been extensively investigated both practically and theoretically. Much remains to be understood, however, in statistical learning under distribution shifts. This paper focuses on a distribution shift setting where train and test distributions can be related by classes of (data) transformation maps. We initiate a theoretical study for this framework, investigating learning scenarios where the target class of transformations is either known or unknown. We establish learning rules and algorithmic reductions to Empirical Risk Minimization (ERM), accompanied with learning guarantees. We obtain upper bounds on the sample complexity in terms of the VC dimension of the class composing predictors with transformations, which we show in many cases is not much larger than the VC dimension of the class of predictors. We highlight that the learning rules we derive offer a game-theoretic viewpoint on distribution shift: a learner searching for predictors and an adversary searching for transformation maps to respectively minimize and maximize the worst-case loss.
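The game-theoretic viewpoint in the abstract (a learner minimizing and an adversary maximizing the worst-case loss over transformations) can be illustrated with a minimal sketch. The toy data, threshold predictors, and shift transformations below are illustrative assumptions, not constructions from the paper.

```python
# Hedged sketch: worst-case ERM over a finite transformation class,
# illustrating the min-max (learner vs. adversary) viewpoint.
# Data, predictor class, and transformation class are toy assumptions.

# Toy data: 1-D points with binary labels.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

# Finite predictor class: threshold classifiers h_t(x) = 1[x >= t].
thresholds = [-1.5, -0.5, 0.0, 0.5, 1.5]

# Finite transformation class: input shifts, T_s(x) = x + s.
shifts = [0.0, 0.5, -0.5]

def loss(t, s):
    """Empirical 0-1 loss of threshold t on data transformed by shift s."""
    return sum(int((x + s >= t) != y) for x, y in data) / len(data)

# Learner minimizes, adversary maximizes: argmin_t max_s loss(t, s).
best_t = min(thresholds, key=lambda t: max(loss(t, s) for s in shifts))
worst_case = max(loss(best_t, s) for s in shifts)
```

With an infinite transformation class, the inner maximum would be approximated (e.g., by sampling transformations), which is where the paper's reductions to ERM and VC-dimension bounds on the composed class come into play.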
Related papers
- Understanding Transfer Learning via Mean-field Analysis [5.7150083558242075]
We consider two main transfer learning scenarios, $\alpha$-ERM and fine-tuning with the KL-regularized empirical risk minimization.
We show the benefits of transfer learning with a one-hidden-layer neural network in the mean-field regime.
arXiv Detail & Related papers (2024-10-22T16:00:44Z)
- Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing [55.791818510796645]
We aim to develop models that generalize well to any diverse test distribution, even if the latter deviates significantly from the training data.
Various approaches like domain adaptation, domain generalization, and robust optimization attempt to address the out-of-distribution challenge.
We adopt a more conservative perspective by accounting for the worst-case error across all sufficiently diverse test distributions within a known domain.
arXiv Detail & Related papers (2024-10-08T12:26:48Z)
- When Invariant Representation Learning Meets Label Shift: Insufficiency and Theoretical Insights [16.72787996847537]
Generalized label shift (GLS) is the most recently developed framework and shows great potential for handling the complex factors underlying the shift.
Our main results show the insufficiency of invariant representation learning, and prove the sufficiency and necessity of GLS correction for generalization.
We propose a kernel embedding-based correction algorithm (KECA) to minimize the generalization error and achieve successful knowledge transfer.
arXiv Detail & Related papers (2024-06-24T12:47:21Z)
- A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning [129.63326990812234]
We propose a technique named data-dependent contraction to capture how modified losses handle different classes.
On top of this technique, a fine-grained generalization bound is established for imbalanced learning, which helps reveal the mystery of re-weighting and logit-adjustment.
arXiv Detail & Related papers (2023-10-07T09:15:08Z)
- Revisiting the Robustness of the Minimum Error Entropy Criterion: A Transfer Learning Case Study [16.07380451502911]
This paper revisits the robustness of the minimum error entropy criterion to deal with non-Gaussian noises.
We investigate its feasibility and usefulness in real-life transfer learning regression tasks, where distributional shifts are common.
arXiv Detail & Related papers (2023-07-17T15:38:11Z)
- GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations [77.34726150561087]
We propose a holistic approach for the detection of generalization errors in deep neural networks.
GIT combines the usage of gradient information and invariance transformations.
Our experiments demonstrate the superior performance of GIT compared to the state-of-the-art on a variety of network architectures.
arXiv Detail & Related papers (2023-07-05T22:04:38Z)
- Hypothesis Transfer Learning with Surrogate Classification Losses: Generalization Bounds through Algorithmic Stability [3.908842679355255]
Hypothesis transfer learning (HTL) contrasts with domain adaptation by allowing knowledge from a previous task, named the source, to be leveraged in a new one, the target.
This paper studies the learning theory of HTL through algorithmic stability, an attractive theoretical framework for machine learning algorithms analysis.
arXiv Detail & Related papers (2023-05-31T09:38:21Z)
- An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift [24.061858945664856]
A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution.
We propose an approach for semi-supervised learning algorithms that is capable of addressing this issue.
Our framework also recovers some popular methods, including entropy minimization and pseudo-labeling.
arXiv Detail & Related papers (2022-02-24T14:25:14Z)
- Causally-motivated Shortcut Removal Using Auxiliary Labels [63.686580185674195]
A key challenge in learning such risk-invariant predictors is shortcut learning.
We propose a flexible, causally-motivated approach to address this challenge.
We show both theoretically and empirically that this causally-motivated regularization scheme yields robust predictors.
arXiv Detail & Related papers (2021-05-13T16:58:45Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
arXiv Detail & Related papers (2020-07-08T11:35:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.