Robust Online Learning
- URL: http://arxiv.org/abs/2602.06775v1
- Date: Fri, 06 Feb 2026 15:30:52 GMT
- Title: Robust Online Learning
- Authors: Sajad Ashkezari,
- Abstract summary: We study the problem of learning robust classifiers where the classifier will receive a perturbed input. We consider both the realizable and agnostic learnability of hypothesis classes. We show our dimension controls the mistake bounds in the realizable setting and the regret bounds in the agnostic setting.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of learning robust classifiers where the classifier will receive a perturbed input. Unlike robust PAC learning studied in prior work, here the clean data and its label are also adversarially chosen. We formulate this setting as an online learning problem and consider both the realizable and agnostic learnability of hypothesis classes. We define a new dimension of classes and show it controls the mistake bounds in the realizable setting and the regret bounds in the agnostic setting. In contrast to the dimension that characterizes learnability in the PAC setting, our dimension is rather simple and resembles the Littlestone dimension. We generalize our dimension to multiclass hypothesis classes and prove similar results in the realizable case. Finally, we study the case where the learner does not know the set of allowed perturbations for each point and only has some prior on them.
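The protocol described in the abstract can be sketched formally as follows. This is an illustrative reconstruction from the abstract alone, not the paper's own notation: $\mathcal{U}(x)$ denotes the set of allowed perturbations of a clean point $x$, and $\mathcal{H}$ the hypothesis class.

```latex
% Sketch of round t of the robust online protocol (illustrative notation):
% the adversary chooses the clean pair (x_t, y_t) adversarially, and the
% learner observes only a perturbed input z_t from the allowed set U(x_t).
\begin{align*}
  &\text{adversary picks } (x_t, y_t) \text{ and a perturbation } z_t \in \mathcal{U}(x_t),\\
  &\text{learner predicts } \hat{y}_t = \hat{h}_t(z_t) \text{ and observes } y_t,\\
  &\text{realizable goal: bound the mistakes } \sum_{t=1}^{T} \mathbf{1}\!\left[\hat{y}_t \neq y_t\right],\\
  &\text{agnostic goal: bound the regret } \sum_{t=1}^{T} \mathbf{1}\!\left[\hat{y}_t \neq y_t\right]
    \;-\; \min_{h \in \mathcal{H}} \sum_{t=1}^{T} \max_{z \in \mathcal{U}(x_t)} \mathbf{1}\!\left[h(z) \neq y_t\right].
\end{align*}
```

The paper's new dimension, per the abstract, controls both quantities and resembles the Littlestone dimension rather than the more involved dimension that characterizes robust PAC learnability.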
Related papers
- Learning Conditional Averages [52.361762722359366]
We introduce the problem of learning conditional averages in the PAC framework. Instead of learning the target concept itself, the goal is to predict, for each instance, the average label over its neighborhood. More generally, it extends PAC learning to a setting that captures learning tasks arising in several domains.
arXiv Detail & Related papers (2026-02-12T13:20:29Z) - Partial Feedback Online Learning [88.27143767009376]
We study a new learning protocol, termed partial-feedback online learning. Each instance admits a set of acceptable labels, but the learner observes only one acceptable label per round.
arXiv Detail & Related papers (2026-01-29T09:39:11Z) - Proper Learnability and the Role of Unlabeled Data [10.168670899305232]
We show that there are problems whose proper learnability is logically undecidable, i.e., independent of the ZFC axioms. We then show impossibility results which obstruct any characterization of proper learnability in the realizable PAC model.
arXiv Detail & Related papers (2025-02-14T18:41:53Z) - Probably Approximately Precision and Recall Learning [60.00180898830079]
A key challenge in machine learning is the prevalence of one-sided feedback. We introduce a Probably Approximately Correct (PAC) framework in which hypotheses are set functions that map each input to a set of labels. We develop new algorithms that learn from positive data alone, achieving optimal sample complexity in the realizable case.
arXiv Detail & Related papers (2024-11-20T04:21:07Z) - Learnability, Sample Complexity, and Hypothesis Class Complexity for Regression Models [10.66048003460524]
This work is inspired by the foundations of PAC learning and is motivated by existing issues in regression learning.
The proposed approach, denoted $\epsilon$-Confidence Approximately Correct ($\epsilon$-CoAC), utilizes the Kullback-Leibler divergence (relative entropy).
It enables the learner to compare hypothesis classes of different complexity orders and to choose among them the optimum with the minimum $\epsilon$.
arXiv Detail & Related papers (2023-03-28T15:59:12Z) - A Characterization of List Learnability [15.368858716555888]
We consider list PAC learning where the goal is to output a list of $k$ predictions.
Generalizing the recent characterization of multiclass learnability, we show that a hypothesis class is $k$-list learnable if and only if the $k$-DS dimension is finite.
arXiv Detail & Related papers (2022-11-07T21:28:05Z) - Adversarially Robust PAC Learnability of Real-Valued Functions [19.54399554114989]
We show that classes of finite fat-shattering dimension are learnable in the $\ell_p$ perturbation setting.
We introduce a novel agnostic sample compression scheme for real-valued functions, which may be of independent interest.
arXiv Detail & Related papers (2022-06-26T21:27:23Z) - Online Selective Classification with Limited Feedback [82.68009460301585]
We study selective classification in the online learning model, wherein a predictor may abstain from classifying an instance.
A salient aspect of the setting we consider is that the data may be non-realisable, due to which abstention may be a valid long-term action.
We construct simple versioning-based schemes for any $\mu \in (0,1]$ that make at most $T^\mu$ mistakes while incurring $\tilde{O}(T^{1-\mu})$ excess abstention against adaptive adversaries.
arXiv Detail & Related papers (2021-10-27T08:00:53Z) - A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all samples belonging to the same instance class lie in the same low-dimensional subspace.
Empirical evidence shows that the proposed algorithm clearly surpasses state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z) - Sample-efficient proper PAC learning with approximate differential privacy [51.09425023771246]
We prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde{O}(d^6)$, ignoring privacy and accuracy parameters.
This result answers a question of Bun et al. (FOCS 2020) by improving upon their upper bound of $2^{O(d)}$ on the sample complexity.
arXiv Detail & Related papers (2020-12-07T18:17:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.