Online Feature Updates Improve Online (Generalized) Label Shift
Adaptation
- URL: http://arxiv.org/abs/2402.03545v1
- Date: Mon, 5 Feb 2024 22:03:25 GMT
- Title: Online Feature Updates Improve Online (Generalized) Label Shift
Adaptation
- Authors: Ruihan Wu and Siddhartha Datta and Yi Su and Dheeraj Baby and Yu-Xiang
Wang and Kilian Q. Weinberger
- Abstract summary: This paper addresses the prevalent issue of label shift in an online setting with missing labels.
We explore the untapped potential of enhancing feature representations using unlabeled data at test-time.
Our method, Online Label Shift adaptation with Online Feature Updates (OLS-OFU), leverages self-supervised learning to refine the feature extraction process.
- Score: 54.3888105557787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the prevalent issue of label shift in an online setting
with missing labels, where data distributions change over time and obtaining
timely labels is challenging. While existing methods primarily focus on
adjusting or updating the final layer of a pre-trained classifier, we explore
the untapped potential of enhancing feature representations using unlabeled
data at test-time. Our novel method, Online Label Shift adaptation with Online
Feature Updates (OLS-OFU), leverages self-supervised learning to refine the
feature extraction process, thereby improving the prediction model. Theoretical
analyses confirm that OLS-OFU reduces algorithmic regret by capitalizing on
self-supervised learning for feature refinement. Empirical studies on various
datasets, under both online label shift and generalized label shift conditions,
underscore the effectiveness and robustness of OLS-OFU, especially in cases of
domain shifts.
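As a concrete illustration of the two components the abstract describes (test-time feature refinement on unlabeled data plus online label-shift correction of the final predictions), the sketch below shows one way such a loop could look. It is a minimal sketch, not the authors' OLS-OFU implementation: the entropy-minimization objective, the running estimate of the label marginals, and all names (`feature_extractor`, `classifier`, `entropy_loss`, `online_step`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of an OLS-OFU-style online step (illustrative assumptions, not the
# paper's code): (1) refine the feature extractor with a self-supervised loss on the
# unlabeled test batch, (2) reweight the classifier's predictions with a running
# estimate of the current label marginals q_t(y).

def entropy_loss(logits):
    """One possible self-supervised surrogate: mean prediction entropy."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()

def online_step(feature_extractor, classifier, x_batch, q_estimate, p_train,
                optimizer, alpha=0.1):
    # (1) Online feature update on the unlabeled batch.
    optimizer.zero_grad()
    loss = entropy_loss(classifier(feature_extractor(x_batch)))
    loss.backward()
    optimizer.step()

    # (2) Label-shift correction: rescale class probabilities by q_t(y) / p_train(y).
    with torch.no_grad():
        probs = F.softmax(classifier(feature_extractor(x_batch)), dim=1)
        adjusted = probs * (q_estimate / p_train)
        adjusted = adjusted / adjusted.sum(dim=1, keepdim=True)

        # Crude running update of the label-marginal estimate from soft predictions.
        q_estimate = (1 - alpha) * q_estimate + alpha * adjusted.mean(dim=0)

    return adjusted.argmax(dim=1), q_estimate
```

The particular self-supervised objective, its learning rate, and the marginal estimator would follow the paper's own choices; the sketch only shows where each piece sits in the loop.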
Related papers
- Channel-Selective Normalization for Label-Shift Robust Test-Time Adaptation [16.657929958093824]
Test-time adaptation adjusts a model to a new data distribution during inference.
Test-time batch normalization is a simple and popular method that achieved compelling performance on domain shift benchmarks.
We propose to tackle this challenge by only selectively adapting channels in a deep network, minimizing drastic adaptation that is sensitive to label shifts.
arXiv Detail & Related papers (2024-02-07T15:41:01Z)
- Online Label Shift: Optimal Dynamic Regret meets Practical Algorithms [33.61487362513345]
This paper focuses on supervised and unsupervised online label shift, where the class marginals $Q(y)$ vary but the class-conditionals $Q(x|y)$ remain invariant.
In the unsupervised setting, our goal is to adapt a learner, trained on some offline labeled data, to changing label distributions given unlabeled online data.
We develop novel algorithms that reduce the adaptation problem to online regression and guarantee optimal dynamic regret without any prior knowledge of the extent of drift in the label distribution.
arXiv Detail & Related papers (2023-05-31T05:39:52Z)
- Rethinking Precision of Pseudo Label: Test-Time Adaptation via Complementary Learning [10.396596055773012]
We propose a novel complementary learning approach to enhance test-time adaptation.
In test-time adaptation tasks, information from the source domain is typically unavailable.
We highlight that the risk function of complementary labels agrees with the vanilla loss formulation.
arXiv Detail & Related papers (2023-01-15T03:36:33Z)
- Adapting to Online Label Shift with Provable Guarantees [137.89382409682233]
We formulate and investigate the problem of online label shift.
The non-stationarity and lack of supervision make the problem challenging to tackle.
Our algorithms enjoy optimal dynamic regret, indicating that their performance is competitive with a clairvoyant that knows the online environment in advance.
arXiv Detail & Related papers (2022-07-05T15:43:14Z)
- Online Adaptation to Label Distribution Shift [37.91472909652585]
We propose adaptation algorithms inspired by classical online learning techniques such as Follow The Leader (FTL) and Online Gradient Descent (OGD); a minimal reweighting sketch in this spirit appears after this list.
We empirically verify our findings under both simulated and real world label distribution shifts and show that OGD is particularly effective and robust to a variety of challenging label shift scenarios.
arXiv Detail & Related papers (2021-07-09T16:12:19Z)
- Cycle Self-Training for Domain Adaptation [85.14659717421533]
Cycle Self-Training (CST) is a principled self-training algorithm that enforces pseudo-labels to generalize across domains.
CST recovers the target ground truth in settings where both invariant feature learning and vanilla self-training fail.
Empirical results indicate that CST significantly improves over the prior state of the art on standard UDA benchmarks.
arXiv Detail & Related papers (2021-03-05T10:04:25Z)
- Self-Tuning for Data-Efficient Deep Learning [75.34320911480008]
Self-Tuning is a novel approach to enable data-efficient deep learning.
It unifies the exploration of labeled and unlabeled data and the transfer of a pre-trained model.
It outperforms its SSL and TL counterparts on five tasks by sharp margins.
arXiv Detail & Related papers (2021-02-25T14:56:19Z)
- Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification [51.98150752331922]
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to the absence of labels for the target-domain data.
We propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels at the off-line clustering phase and features at the on-line training phase.
Our method outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-26T07:35:35Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
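Several of the papers above keep the classifier fixed and adapt only its output to the drifting label marginals (notably the FTL/OGD entry, which the sketch below illustrates). This is a minimal, assumption-laden illustration of that baseline idea under standard label shift ($Q(x|y)$ fixed, $Q(y)$ drifting): maintain logits over the current label marginals, take an online gradient step that raises the marginal likelihood of each unlabeled batch, and reweight the frozen source classifier's posteriors by q_t(y) / p_train(y). The function names and the particular update rule are assumptions, not the cited algorithms.

```python
import numpy as np

# Minimal label-shift reweighting sketch (illustrative assumptions, not the cited
# algorithms). The source classifier is frozen; only an estimate of the current
# label marginals q_t(y) is adapted online from unlabeled batches.

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reweight(probs, q, p_train):
    """Rescale source posteriors p_train(y|x) by importance weights q(y) / p_train(y)."""
    w = probs * (q / p_train)                  # shape [batch, num_classes]
    return w / w.sum(axis=1, keepdims=True)

def ogd_step(theta, probs, p_train, lr=0.5):
    """One gradient step on the batch marginal log-likelihood of q = softmax(theta)."""
    q = softmax(theta)
    mix = probs @ (q / p_train)                # per-example marginal likelihood
    grad_q = (probs / mix[:, None]).mean(axis=0) / p_train
    grad_theta = q * (grad_q - (q * grad_q).sum())   # chain rule through the softmax
    return theta + lr * grad_theta             # ascend the likelihood

# Usage: probs holds the frozen classifier's softmax outputs on one unlabeled batch.
p_train = np.array([1 / 3, 1 / 3, 1 / 3])
theta = np.zeros(3)
probs = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]])
theta = ogd_step(theta, probs, p_train)
corrected = reweight(probs, softmax(theta), p_train)
```

The cited works differ in how the marginal estimate is formed and in the regret guarantees they prove; the sketch only shows the shared reweighting mechanics.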