When Invariant Representation Learning Meets Label Shift: Insufficiency and Theoretical Insights
- URL: http://arxiv.org/abs/2406.16608v1
- Date: Mon, 24 Jun 2024 12:47:21 GMT
- Title: When Invariant Representation Learning Meets Label Shift: Insufficiency and Theoretical Insights
- Authors: You-Wei Luo, Chuan-Xian Ren
- Abstract summary: Generalized label shift (GLS) is the most recently developed assumption on shifting distributions and shows great potential for handling the complex factors within the shift.
The main results show the insufficiency of invariant representation learning alone and prove the sufficiency and necessity of GLS correction for generalization.
We propose a kernel embedding-based correction algorithm (KECA) to minimize the generalization error and achieve successful knowledge transfer.
- Score: 16.72787996847537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a crucial step toward real-world learning scenarios with changing environments, dataset shift theory and invariant representation learning algorithms have been extensively studied to relax the identical-distribution assumption of the classical learning setting. Among the different assumptions on the essence of shifting distributions, generalized label shift (GLS) is the most recently developed one and shows great potential for handling the complex factors within the shift. In this paper, we aim to explore the limitations of current dataset shift theory and algorithms, and to provide new insights through a comprehensive understanding of GLS. From the theoretical aspect, two informative generalization bounds are derived, and the GLS learner is proved to be sufficiently close to the optimal target model from the Bayesian perspective. The main results show the insufficiency of invariant representation learning and prove the sufficiency and necessity of GLS correction for generalization, which provides theoretical support and innovation for exploring generalizable models under dataset shift. From the methodological aspect, we provide a unified view of existing shift correction frameworks and propose a kernel embedding-based correction algorithm (KECA) to minimize the generalization error and achieve successful knowledge transfer. Both theoretical results and extensive experimental evaluations demonstrate the sufficiency and necessity of GLS correction for addressing dataset shift and the superiority of the proposed algorithm.
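To make the correction idea concrete, below is a minimal sketch of kernel-embedding-based label-shift estimation: the target class proportions are recovered by matching the target kernel mean embedding to a mixture of source class-conditional embeddings. It is an illustrative stand-in, not the authors' exact KECA; the RBF kernel, the NNLS solver, and all function names are assumptions.

```python
# Illustrative sketch (NOT the authors' exact KECA): estimate target class
# proportions w by matching kernel mean embeddings in an RKHS.
import numpy as np
from scipy.optimize import nnls

def rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed pairwise.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def estimate_target_priors(Xs, ys, Xt, gamma=1.0):
    """Solve min_{w >= 0} || mu_t - sum_y w_y mu_{s,y} ||_H^2, then normalize."""
    classes = np.unique(ys)
    masks = [ys == c for c in classes]
    C = len(classes)
    Kss, Kst = rbf_kernel(Xs, Xs, gamma), rbf_kernel(Xs, Xt, gamma)
    # A[i, j] = <mu_{s,i}, mu_{s,j}>_H and b[i] = <mu_{s,i}, mu_t>_H.
    A = np.array([[Kss[np.ix_(masks[i], masks[j])].mean() for j in range(C)]
                  for i in range(C)])
    b = np.array([Kst[masks[i], :].mean() for i in range(C)])
    # Minimize the quadratic w^T A w - 2 b^T w over w >= 0 via NNLS on a
    # Cholesky factor of A (small jitter added for numerical stability).
    L = np.linalg.cholesky(A + 1e-8 * np.eye(C))
    w, _ = nnls(L.T, np.linalg.solve(L, b))
    return w / w.sum()
```

The estimated priors can then enter training as importance weights w_y / p_s(y) on the source loss, which is the usual route by which such a correction affects the learned model.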
Related papers
- Transformation-Invariant Learning and Theoretical Guarantees for OOD Generalization [34.036655200677664]
This paper focuses on a distribution shift setting where train and test distributions can be related by classes of (data) transformation maps.
We establish learning rules and algorithmic reductions to Empirical Risk Minimization (ERM).
We highlight that the learning rules we derive offer a game-theoretic viewpoint on distribution shift.
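A hedged sketch of that game-theoretic reading with a finite transformation class follows: an adversary mixes over transformation maps via multiplicative weights while the learner replies with weighted ERM. The toy transformation class, the logistic-regression oracle, and the hyperparameters are all illustrative assumptions, not details from the paper.

```python
# Hypothetical learner-vs-transformation game: multiplicative weights for
# the adversary, weighted ERM (logistic regression) for the learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

TRANSFORMS = [lambda X: X, lambda X: -X, lambda X: 2.0 * X]  # toy map class

def minimax_erm(X, y, rounds=20, eta=0.5):
    p = np.ones(len(TRANSFORMS)) / len(TRANSFORMS)  # adversary's mixture
    for _ in range(rounds):
        # Learner: ERM on the mixture of transformed datasets.
        Xm = np.vstack([T(X) for T in TRANSFORMS])
        ym = np.tile(y, len(TRANSFORMS))
        sw = np.repeat(p, len(X))
        h = LogisticRegression().fit(Xm, ym, sample_weight=sw)
        # Adversary: exponentiated-gradient step toward high-loss transforms.
        losses = np.array([1.0 - h.score(T(X), y) for T in TRANSFORMS])
        p *= np.exp(eta * losses)
        p /= p.sum()
    return h, p
```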
arXiv Detail & Related papers (2024-10-30T20:59:57Z)
- On the Generalization Ability of Unsupervised Pretraining [53.06175754026037]
Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization.
This paper introduces a novel theoretical framework that illuminates the critical factor influencing the transferability of knowledge acquired during unsupervised pre-training to the subsequent fine-tuning phase.
Our results contribute to a better understanding of the unsupervised pre-training and fine-tuning paradigm, and can shed light on the design of more effective pre-training algorithms.
arXiv Detail & Related papers (2024-03-11T16:23:42Z)
- On the Generalization Capability of Temporal Graph Learning Algorithms: Theoretical Insights and a Simpler Method [59.52204415829695]
Temporal Graph Learning (TGL) has become a prevalent technique across diverse real-world applications.
This paper investigates the generalization ability of different TGL algorithms.
We propose a simplified TGL network, which enjoys a small generalization error, improved overall performance, and lower model complexity.
arXiv Detail & Related papers (2024-02-26T08:22:22Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error of overparameterized models that achieve effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
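For context, one standard PAC-Bayes statement (a McAllester/Maurer-style bound for losses in [0, 1], not the specific bound derived in the paper) reads: with probability at least 1 - delta over an i.i.d. sample of size n, simultaneously for all posteriors Q over hypotheses,

```latex
\mathbb{E}_{h \sim Q}[L(h)] \;\le\; \mathbb{E}_{h \sim Q}[\hat{L}(h)]
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n}},
```

where P is a data-independent prior, L the true risk, and L-hat the empirical risk. In the interpolating regime the empirical term is effectively zero, so the bound is governed by KL(Q || P), which is exactly where the quality of the implicit regularization enters.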
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Hypothesis Transfer Learning with Surrogate Classification Losses: Generalization Bounds through Algorithmic Stability [3.908842679355255]
Hypothesis transfer learning (HTL) contrasts with domain adaptation by allowing knowledge from a previous task, named the source, to be leveraged in a new one, the target.
This paper studies the learning theory of HTL through algorithmic stability, an attractive theoretical framework for machine learning algorithms analysis.
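As one concrete instance of the algorithms such stability analyses cover, here is a minimal biased-regularization sketch (squared loss for brevity, whereas the paper treats surrogate classification losses): the target model is shrunk toward a source hypothesis rather than toward zero. The function name and the closed-form solver are illustrative assumptions.

```python
# Hypothetical biased-regularization HTL sketch: shrink the target model
# toward a source hypothesis w_src instead of the origin.
import numpy as np

def htl_ridge(X, y, w_src, lam=1.0):
    """argmin_w ||X w - y||^2 + lam * ||w - w_src||^2, in closed form."""
    d = X.shape[1]
    # Setting the gradient to zero gives (X^T X + lam I) w = X^T y + lam w_src.
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y + lam * w_src)
```

Setting w_src = 0 recovers plain ridge regression; intuitively, the closer w_src is to a good target model, the more the source task helps.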
arXiv Detail & Related papers (2023-05-31T09:38:21Z)
- Revisiting Deep Semi-supervised Learning: An Empirical Distribution Alignment Framework and Its Generalization Bound [97.93945601881407]
We propose a new deep semi-supervised learning framework called Semi-supervised Learning by Empirical Distribution Alignment (SLEDA).
We show the generalization error of semi-supervised learning can be effectively bounded by minimizing the training error on labeled data.
Building upon our new framework and the theoretical bound, we develop a simple and effective deep semi-supervised learning method called Augmented Distribution Alignment Network (ADA-Net).
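A minimal sketch of the distribution-alignment idea is below; MMD is an illustrative choice of empirical divergence here, not necessarily the one used by SLEDA/ADA-Net, and the function names and weighting are assumptions.

```python
# Hypothetical empirical-distribution-alignment objective: penalize a
# divergence between labeled and unlabeled feature batches on top of the
# supervised loss.
import numpy as np

def mmd2(Zl, Zu, gamma=1.0):
    """Squared MMD between labeled (Zl) and unlabeled (Zu) features."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(Zl, Zl).mean() + k(Zu, Zu).mean() - 2.0 * k(Zl, Zu).mean()

def alignment_objective(sup_loss, Zl, Zu, alpha=0.1):
    # Total objective: supervised risk on labeled data + alignment penalty.
    return sup_loss + alpha * mmd2(Zl, Zu)
```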
arXiv Detail & Related papers (2022-03-13T11:59:52Z)
- Generalized Label Shift Correction via Minimum Uncertainty Principle: Theory and Algorithm [20.361516866096007]
Generalized Label Shift provides insight into the learning and transfer of desirable knowledge.
We propose a conditional adaptation framework to deal with these challenges.
The results of extensive experiments demonstrate that the proposed model achieves competitive performance.
arXiv Detail & Related papers (2022-02-26T02:39:47Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as a constrained optimization problem, called Disentanglement-constrained Domain Generalization (DDG).
Based on this formulation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
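A hedged sketch of the primal-dual pattern follows: minimize the task risk f subject to a disentanglement constraint g <= eps, alternating gradient descent on the parameters with projected ascent on the multiplier. The gradient oracles, step sizes, and names are assumptions, not details from the paper.

```python
# Hypothetical primal-dual loop for constrained training: f_grad, g, and
# g_grad are assumed oracles for the risk gradient, the constraint value,
# and the constraint gradient.
import numpy as np

def primal_dual(theta, f_grad, g, g_grad, eps=0.1,
                lr=1e-2, dual_lr=1e-2, steps=1000):
    lam = 0.0
    for _ in range(steps):
        # Primal: descend the Lagrangian f(theta) + lam * (g(theta) - eps).
        theta = theta - lr * (f_grad(theta) + lam * g_grad(theta))
        # Dual: ascend on lam, projected onto lam >= 0.
        lam = max(0.0, lam + dual_lr * (g(theta) - eps))
    return theta, lam
```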
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- On the benefits of representation regularization in invariance based domain generalization [6.197602794925773]
Domain generalization aims to alleviate the prediction gap between observed and unseen environments.
In this paper, we reveal that merely learning invariant representations is vulnerable to unseen environments.
Our analysis further inspires an efficient regularization method to improve the robustness in domain generalization.
arXiv Detail & Related papers (2021-05-30T13:13:55Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
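A minimal sketch of the objective's shape follows: align representations across domains and align risks, rather than representations alone. The first-moment feature matching, the weights, and the function name are illustrative assumptions, not LIRR's actual implementation.

```python
# Hypothetical LIRR-style objective shape: supervised losses plus penalties
# for both representation misalignment and risk misalignment across domains.
import numpy as np

def lirr_style_objective(sup_loss_src, sup_loss_tgt_labeled,
                         Zs, Zt, risk_src, risk_tgt,
                         alpha=1.0, beta=1.0, gamma=1.0):
    # Invariant representations: simple first-moment matching of features.
    rep_align = float(np.sum((Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2))
    # Invariant risks: penalize the gap between source and target risks.
    risk_align = abs(risk_src - risk_tgt)
    return (sup_loss_src + alpha * sup_loss_tgt_labeled
            + beta * rep_align + gamma * risk_align)
```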
arXiv Detail & Related papers (2020-10-09T15:42:35Z)