Active Online Learning with Hidden Shifting Domains
- URL: http://arxiv.org/abs/2006.14481v2
- Date: Fri, 26 Feb 2021 03:40:42 GMT
- Title: Active Online Learning with Hidden Shifting Domains
- Authors: Yining Chen, Haipeng Luo, Tengyu Ma, Chicheng Zhang
- Abstract summary: We propose a surprisingly simple algorithm that adaptively balances its regret and its number of label queries.
Our algorithm can adaptively deal with interleaving spans of inputs from different domains.
- Score: 64.75186088512034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online machine learning systems need to adapt to domain shifts. Meanwhile,
acquiring labels at every timestep is expensive. We propose a surprisingly
simple algorithm that adaptively balances its regret and its number of label
queries in settings where the data streams are from a mixture of hidden
domains. For online linear regression with oblivious adversaries, we provide a
tight tradeoff that depends on the durations and dimensionalities of the hidden
domains. Our algorithm can adaptively deal with interleaving spans of inputs
from different domains. We also generalize our results to non-linear regression
for hypothesis classes with bounded eluder dimension and adaptive adversaries.
Experiments on synthetic and realistic datasets demonstrate that our algorithm
achieves lower regret than uniform queries and greedy queries with equal
labeling budget.
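The core idea lends itself to a compact sketch: query a label only when the incoming input looks novel under the covariance of previously queried inputs. The following Python sketch illustrates this general selective-sampling recipe; names, the threshold `tau`, and the regularizer `lam` are illustrative, and this is not the paper's exact algorithm or its adaptive tuning.

```python
import numpy as np

def active_online_ridge(stream, tau=0.1, lam=1.0):
    """Selective-sampling sketch: query a label only when the input is
    poorly covered by previously queried inputs (large leverage under
    the current covariance). tau and lam are illustrative."""
    d = len(stream[0][0])
    A = lam * np.eye(d)          # regularized covariance of queried inputs
    b = np.zeros(d)              # running sum of y * x over queried points
    w = np.zeros(d)              # current ridge estimate
    preds, n_queries = [], 0
    for x, get_label in stream:  # stream yields (input, label oracle) pairs
        preds.append(w @ x)                   # predict before (maybe) querying
        leverage = x @ np.linalg.solve(A, x)  # x^T A^{-1} x, "novelty" of x
        if leverage > tau:                    # informative input: spend a label
            y = get_label()
            n_queries += 1
            A += np.outer(x, x)
            b += y * x
            w = np.linalg.solve(A, b)         # ridge solution on queried data
    return preds, n_queries
```

With interleaving domains, the leverage term spikes whenever the stream enters a subspace not yet covered, so queries naturally concentrate around domain switches; that is the intuition, while the paper's algorithm and guarantees are more refined.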
Related papers
- Oracle Efficient Algorithms for Groupwise Regret [7.840453701379554]
We show that a simple modification of the sleeping experts technique of [Blum & Lykouris] yields an efficient reduction to the well-understood problem of obtaining diminishing external regret without group considerations.
We find that uniformly across groups, our algorithm gives substantial error improvements compared to running a standard online linear regression algorithm with no groupwise regret guarantees.
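As a rough illustration of the sleeping-experts idea (a simplified sketch, not the paper's oracle-efficient construction), one can keep one copy of a base no-regret learner per group and, on each round, aggregate and update only the learners whose groups contain the current example:

```python
import numpy as np

class OnlineRidge:
    """Minimal online ridge learner used as the base external-regret algorithm."""
    def __init__(self, d, lam=1.0):
        self.A, self.b = lam * np.eye(d), np.zeros(d)
    def predict(self, x):
        return x @ np.linalg.solve(self.A, self.b)
    def update(self, x, y):
        self.A += np.outer(x, x)
        self.b += y * x

class SleepingExpertsRegression:
    """Sleeping-experts sketch: per-group learners, only 'awake' ones vote."""
    def __init__(self, n_groups, d, eta=0.5):
        self.learners = [OnlineRidge(d) for _ in range(n_groups)]
        self.w = np.ones(n_groups)
        self.eta = eta
    def predict(self, x, groups):
        w = self.w[groups] / self.w[groups].sum()  # weights over awake experts
        return float(sum(wi * self.learners[g].predict(x)
                         for wi, g in zip(w, groups)))
    def update(self, x, y, groups):
        for g in groups:  # only awake experts are charged and updated
            loss = (self.learners[g].predict(x) - y) ** 2
            self.w[g] *= np.exp(-self.eta * min(loss, 1.0))  # clipped for stability
            self.learners[g].update(x, y)
```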
arXiv Detail & Related papers (2023-10-07T02:17:22Z)
- Online Label Shift: Optimal Dynamic Regret meets Practical Algorithms [33.61487362513345]
This paper focuses on supervised and unsupervised online label shift, where the class marginals $Q(y)$ vary but the class-conditionals $Q(x|y)$ remain invariant.
In the unsupervised setting, our goal is to adapt a learner, trained on some offline labeled data, to changing label distributions given unlabeled online data.
We develop novel algorithms that reduce the adaptation problem to online regression and guarantee optimal dynamic regret without any prior knowledge of the extent of drift in the label distribution.
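The estimation step such reductions build on can be sketched in a few lines: infer the target label marginals from a source confusion matrix and the frequencies of a black-box predictor's outputs on unlabeled data, then reweight the posteriors. This is a batch-mode sketch of that standard building block, not the paper's online dynamic-regret algorithm:

```python
import numpy as np

def estimate_target_marginals(confusion, pred_freq):
    """Solve confusion @ q = pred_freq for the target label marginals q.

    confusion[i, j] = P(predict i | true class j), measured on held-out
    labeled source data; pred_freq[i] = frequency of prediction i on the
    unlabeled target stream."""
    q, *_ = np.linalg.lstsq(confusion, pred_freq, rcond=None)
    q = np.clip(q, 1e-8, None)     # project back to the simplex
    return q / q.sum()

def reweight_posteriors(probs, q_target, p_source):
    """Rescale source posteriors P(y|x) by q(y)/p(y) and renormalize."""
    adjusted = probs * (q_target / p_source)
    return adjusted / adjusted.sum(axis=1, keepdims=True)
```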
arXiv Detail & Related papers (2023-05-31T05:39:52Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts of up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Deep Policies for Online Bipartite Matching: A Reinforcement Learning Approach [5.683591363967851]
We present an end-to-end Reinforcement Learning framework for deriving better matching policies based on trial-and-error on historical data.
We show that most of the learning approaches perform significantly better than classical greedy algorithms on four synthetic and real-world datasets.
arXiv Detail & Related papers (2021-09-21T18:04:19Z)
- Online Continual Adaptation with Active Self-Training [69.5815645379945]
We propose an online setting where the learner aims to continually adapt to changing distributions using both unlabeled samples and active queries of limited labels.
Online Self-Adaptive Mirror Descent (OSAMD) adopts an online teacher-student structure to enable online self-training from unlabeled data.
We show that OSAMD achieves favorable regret under changing environments with limited labels on both simulated and real-world data.
arXiv Detail & Related papers (2021-06-11T17:51:25Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
Unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
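A minimal sketch of the category-wise centroid idea (illustrative names and shapes; the paper applies it to segmentation features inside a full training pipeline): compute per-class centroids on labeled source features and pseudo-labeled target features, then apply an InfoNCE-style loss that pulls same-class centroids together across domains.

```python
import numpy as np

def centroid_alignment_loss(f_src, y_src, f_tgt, y_pseudo, n_classes, tau=0.1):
    """Contrastive alignment of per-class centroids across domains.

    f_* are (n, d) feature arrays; y_src are source labels, y_pseudo are
    target pseudo-labels. Assumes every class occurs in both batches."""
    c_src = np.stack([f_src[y_src == c].mean(axis=0) for c in range(n_classes)])
    c_tgt = np.stack([f_tgt[y_pseudo == c].mean(axis=0) for c in range(n_classes)])
    c_src /= np.linalg.norm(c_src, axis=1, keepdims=True)
    c_tgt /= np.linalg.norm(c_tgt, axis=1, keepdims=True)
    logits = (c_src @ c_tgt.T) / tau               # cosine similarities
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))                 # pull matching classes together
```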
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Flexible deep transfer learning by separate feature embeddings and manifold alignment [0.0]
Object recognition is a key enabler across industry and defense.
Unfortunately, algorithms trained on existing labeled datasets do not directly generalize to new data because the data distributions do not match.
We propose a novel deep learning framework that overcomes this limitation by learning separate feature extractions for each domain.
arXiv Detail & Related papers (2020-12-22T19:24:44Z)
- i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning [117.63815437385321]
We propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning.
In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains.
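The mechanism is easy to sketch: mix pairs of inputs within a batch and correspondingly mix their one-hot "virtual labels" over batch indices, then feed both into the instance-discrimination loss. A minimal input-space version, with `alpha` and the names being illustrative:

```python
import numpy as np

def i_mix_batch(x, rng, alpha=1.0):
    """i-Mix-style mixing sketch for a batch x of shape (n, d).

    Returns mixed inputs and soft virtual labels over batch indices,
    which then weight the contrastive (instance-discrimination) loss."""
    n = len(x)
    lam = rng.beta(alpha, alpha)            # mixing coefficient
    perm = rng.permutation(n)               # random partner for each anchor
    x_mixed = lam * x + (1 - lam) * x[perm]
    labels = lam * np.eye(n) + (1 - lam) * np.eye(n)[perm]
    return x_mixed, labels
```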
arXiv Detail & Related papers (2020-10-17T23:32:26Z)
- Understanding Self-Training for Gradual Domain Adaptation [107.37869221297687]
We consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error.
The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data, and suggesting that self-training works particularly well for shifts with small Wasserstein-infinity distance.
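The procedure the analysis covers is a short loop, sketched below with scikit-learn for concreteness; the small `C` and the hard pseudo-labels stand in for the regularization and label sharpening the paper identifies as essential.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gradual_self_train(source_X, source_y, domain_sequence, C=0.1):
    """Self-training across gradually shifting unlabeled domains.

    Fits on labeled source data, then for each unlabeled domain in
    order of increasing shift, pseudo-labels with the current model
    and refits on those pseudo-labels. A sketch, not the paper's code."""
    model = LogisticRegression(C=C).fit(source_X, source_y)
    for X in domain_sequence:          # nearest domains first
        pseudo = model.predict(X)      # hard (sharpened) labels
        model = LogisticRegression(C=C).fit(X, pseudo)
    return model
```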
arXiv Detail & Related papers (2020-02-26T08:59:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.