Online Adaptation to Label Distribution Shift
- URL: http://arxiv.org/abs/2107.04520v1
- Date: Fri, 9 Jul 2021 16:12:19 GMT
- Title: Online Adaptation to Label Distribution Shift
- Authors: Ruihan Wu, Chuan Guo, Yi Su, Kilian Q. Weinberger
- Abstract summary: We propose adaptation algorithms inspired by classical online learning techniques such as Follow The Leader (FTL) and Online Gradient Descent (OGD).
We empirically verify our findings under both simulated and real world label distribution shifts and show that OGD is particularly effective and robust to a variety of challenging label shift scenarios.
- Score: 37.91472909652585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models often encounter distribution shifts when deployed in
the real world. In this paper, we focus on adaptation to label distribution
shift in the online setting, where the test-time label distribution is
continually changing and the model must dynamically adapt to it without
observing the true label. Leveraging a novel analysis, we show that the lack of
true labels does not hinder estimation of the expected test loss, which enables
the reduction of online label shift adaptation to conventional online learning.
Informed by this observation, we propose adaptation algorithms inspired by
classical online learning techniques such as Follow The Leader (FTL) and Online
Gradient Descent (OGD) and derive their regret bounds. We empirically verify
our findings under both simulated and real world label distribution shifts and
show that OGD is particularly effective and robust to a variety of challenging
label shift scenarios.
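To make the reduction concrete, the sketch below shows one way an OGD-style learner can track the test-time label marginal from unlabeled data alone. It relies on the standard black-box shift identity mu = C q, where C[i, j] = P(f0(x) = i | y = j) is the confusion matrix of a fixed source classifier f0 on held-out source data and mu is the distribution of f0's predictions on the test stream. This is a minimal illustration under those assumptions, not the paper's exact algorithm; the function and variable names are ours.

```python
import numpy as np

# Minimal sketch: track the test-time label marginal q_t with online gradient
# descent, using only the hard predictions of a fixed source classifier f0.
# Assumes the black-box shift identity mu = C q with an invertible confusion
# matrix C; not the authors' exact algorithm.

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def ogd_label_shift(pred_stream, C, p_src, lr=0.1):
    """Yield per-class importance weights for each unlabeled test point.

    pred_stream: iterator of f0's hard predictions on the test stream
    C:           (k, k) confusion matrix, C[i, j] = P(f0(x) = i | y = j)
    p_src:       (k,) source label marginal
    """
    k = len(p_src)
    q = np.ones(k) / k                       # current estimate of the test marginal
    for y_hat in pred_stream:
        e = np.zeros(k)
        e[y_hat] = 1.0                       # one-sample estimate of mu
        grad = 2.0 * C.T @ (C @ q - e)       # gradient of ||C q - e||^2 in q
        q = project_simplex(q - lr * grad)   # OGD step, back onto the simplex
        yield q / p_src                      # importance weights q_t(y) / p_src(y)
```

Multiplying the source model's predicted posterior over classes by these weights and renormalizing gives the shift-adjusted prediction; no test-time labels are needed at any point.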
Related papers
- Theory-inspired Label Shift Adaptation via Aligned Distribution Mixture [21.494268411607766]
We propose an innovative label shift framework named Aligned Distribution Mixture (ADM).
Within this framework, we enhance four typical label shift methods by introducing modifications to the classifier training process.
To handle the distinctive structure of the proposed one-step approach, we develop an efficient bi-level optimization strategy.
arXiv Detail & Related papers (2024-11-04T12:51:57Z)
- Online Feature Updates Improve Online (Generalized) Label Shift Adaptation [51.328801874640675]
Our novel method, Online Label Shift adaptation with Online Feature Updates (OLS-OFU), leverages self-supervised learning to refine the feature extraction process.
By carefully designing the algorithm, OLS-OFU maintains the similar online regret convergence to the results in the literature while taking the improved features into account.
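As a rough skeleton of that idea (a toy stand-in under our own assumptions; the paper's self-supervised losses and feature extractor are different), one online step could interleave a denoising-style feature update with an unchanged online label shift (OLS) step:

```python
import numpy as np

# Toy stand-in for the OLS-OFU loop: refine a linear feature extractor W with
# a self-supervised denoising objective, then run the usual online label shift
# (OLS) update on the refreshed features. The denoising loss and `ols_update`
# callback are illustrative assumptions, not the paper's components.

def ols_ofu_step(W, x, ols_update, lr=0.01, noise=0.1, rng=np.random):
    """One online step on an unlabeled test point x of shape (d,)."""
    x_noisy = x + noise * rng.standard_normal(x.shape)
    residual = W @ x_noisy - x                       # reconstruction error
    W = W - lr * 2.0 * np.outer(residual, x_noisy)   # gradient of ||W x_noisy - x||^2
    ols_update(W @ x)                                # OLS step on the improved features
    return W
```

The OLS step itself is left untouched, which is why the regret guarantees from the online label shift literature can be preserved while the features improve.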
arXiv Detail & Related papers (2024-02-05T22:03:25Z)
- Label Shift Adapter for Test-Time Adaptation under Covariate and Label Shifts [48.83127071288469]
Test-time adaptation (TTA) aims to adapt a pre-trained model to the target domain in a batch-by-batch manner during inference.
Most previous TTA approaches assume that both source and target domain datasets have balanced label distribution.
We propose a novel label shift adapter that can be incorporated into existing TTA approaches to deal with label shifts effectively.
arXiv Detail & Related papers (2023-08-17T06:37:37Z)
- Online Label Shift: Optimal Dynamic Regret meets Practical Algorithms [33.61487362513345]
This paper focuses on supervised and unsupervised online label shift, where the class marginals $Q(y)$ vary but the class-conditionals $Q(x|y)$ remain invariant.
In the unsupervised setting, our goal is to adapt a learner, trained on some offline labeled data, to changing label distributions given unlabeled online data.
We develop novel algorithms that reduce the adaptation problem to online regression and guarantee optimal dynamic regret without any prior knowledge of the extent of drift in the label distribution.
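Written out (a standard formulation, not quoted from the paper, with $P$ denoting the source distribution), the assumption and the correction it licenses are:

$$
Q_t(x \mid y) = P(x \mid y) \ \text{for all } t
\qquad\Longrightarrow\qquad
Q_t(y \mid x) \;\propto\; P(y \mid x)\,\frac{Q_t(y)}{P(y)},
$$

so unsupervised adaptation reduces to tracking the per-class ratios $Q_t(y)/P(y)$, which is precisely what the reduction to online regression estimates.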
arXiv Detail & Related papers (2023-05-31T05:39:52Z)
- Adapting to Online Label Shift with Provable Guarantees [137.89382409682233]
We formulate and investigate the problem of online label shift.
The non-stationarity and lack of supervision make the problem challenging to tackle.
Our algorithms enjoy optimal dynamic regret, indicating that their performance is competitive with that of a clairvoyant oracle that knows the label distribution in advance.
arXiv Detail & Related papers (2022-07-05T15:43:14Z)
- Online Continual Adaptation with Active Self-Training [69.5815645379945]
We propose an online setting where the learner aims to continually adapt to changing distributions using both unlabeled samples and active queries of limited labels.
Online Self-Adaptive Mirror Descent (OSAMD) adopts an online teacher-student structure to enable online self-training from unlabeled data.
We show that OSAMD achieves favorable regrets under changing environments with limited labels on both simulated and real-world data.
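A generic online teacher-student self-training loop of this kind looks roughly as follows (purely illustrative: OSAMD's actual updates are mirror descent steps combined with active label queries, which this sketch omits):

```python
import numpy as np

# Illustrative online teacher-student self-training loop (not OSAMD itself).
# Student: multiclass logistic model; teacher: an exponential moving average
# (EMA) of the student's weights that supplies soft pseudo-labels.

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def self_train_online(stream, d, k, lr=0.05, ema=0.99):
    W_s = np.zeros((k, d))                  # student weights
    W_t = np.zeros((k, d))                  # teacher weights (EMA of student)
    for x in stream:                        # unlabeled test points, shape (d,)
        pseudo = softmax(W_t @ x)           # teacher's soft pseudo-label
        p = softmax(W_s @ x)                # student's prediction
        grad = np.outer(p - pseudo, x)      # cross-entropy gradient vs. pseudo-label
        W_s -= lr * grad                    # student gradient step
        W_t = ema * W_t + (1 - ema) * W_s   # teacher slowly tracks the student
        yield p.argmax()
```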
arXiv Detail & Related papers (2021-06-11T17:51:25Z)
- Long-tail learning via logit adjustment [67.47668112425225]
Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution.
This poses a challenge for generalisation on such labels, and also makes naïve learning biased towards dominant labels.
We present two simple modifications of standard softmax cross-entropy training to cope with these challenges.
arXiv Detail & Related papers (2020-07-14T19:27:13Z)
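The post-hoc variant of these two modifications is simple to state: shift each class logit by a scaled log-prior at prediction time, so rare classes are no longer systematically out-scored (sketch below; the variable names are ours):

```python
import numpy as np

# Post-hoc logit adjustment: subtract tau * log(prior) from each class logit
# before taking the argmax, counteracting the bias towards dominant labels.

def logit_adjusted_predict(logits, class_priors, tau=1.0):
    """logits: (n, k) raw scores; class_priors: (k,) empirical label frequencies."""
    adjusted = logits - tau * np.log(class_priors)
    return adjusted.argmax(axis=1)
```

The second modification bakes the same offset into training instead, via a logit-adjusted softmax cross-entropy loss.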