On the Equivalence between Online and Private Learnability beyond Binary
Classification
- URL: http://arxiv.org/abs/2006.01980v3
- Date: Sat, 9 Oct 2021 03:24:30 GMT
- Title: On the Equivalence between Online and Private Learnability beyond Binary
Classification
- Authors: Young Hun Jung, Baekjin Kim, Ambuj Tewari
- Abstract summary: We show that private learnability implies online learnability in both settings.
We show that while online learnability continues to imply private learnability in multi-class classification, current proof techniques encounter significant hurdles in the regression setting.
- Score: 26.400891660337777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Alon et al. [2019] and Bun et al. [2020] recently showed that online
learnability and private PAC learnability are equivalent in binary
classification. We investigate whether this equivalence extends to multi-class
classification and regression. First, we show that private learnability implies
online learnability in both settings. Our extension involves studying a novel
variant of the Littlestone dimension that depends on a tolerance parameter and
on an appropriate generalization of the concept of threshold functions beyond
binary classification. Second, we show that while online learnability continues
to imply private learnability in multi-class classification, current proof
techniques encounter significant hurdles in the regression setting. While the
equivalence for regression remains open, we provide non-trivial sufficient
conditions for an online learnable class to also be privately learnable.
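The Littlestone dimension referenced above has a simple recursive definition via mistake trees. As a minimal illustrative sketch (it computes only the standard binary Littlestone dimension for a small finite class, not the paper's tolerance-based variant, and the threshold example is purely for demonstration):

```python
def ldim(H, X):
    """Littlestone dimension of a finite binary class H over a finite domain X.

    H: list of dicts, each mapping every point of X to a label in {0, 1}.
    Returns the depth of the deepest complete mistake tree shattered by H.
    """
    best = 0
    for x in X:
        H0 = [h for h in H if h[x] == 0]
        H1 = [h for h in H if h[x] == 1]
        if H0 and H1:  # x splits H: both labels at x are realizable
            best = max(best, 1 + min(ldim(H0, X), ldim(H1, X)))
    return best

# Threshold functions h_t(x) = 1 iff x >= t over the domain {1, 2, 3}
X = [1, 2, 3]
H = [{x: int(x >= t) for x in X} for t in [1, 2, 3, 4]]
print(ldim(H, X))  # prints 2
```

As the domain of the threshold class grows, its Littlestone dimension grows without bound even though its VC dimension stays 1, which is why thresholds (and their generalizations beyond binary classification, as in the abstract above) separate online from PAC learnability.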
Related papers
- Optimal Learners for Realizable Regression: PAC Learning and Online Learning [52.37726841759983]
We aim to characterize the statistical complexity of realizable regression both in the PAC learning setting and the online learning setting.
We first introduce a minimax instance optimal learner for realizable regression and propose a novel dimension that both qualitatively and quantitatively characterizes which classes of real-valued predictors are learnable.
In the context of online learning we provide a dimension that characterizes the minimax optimal instance optimal cumulative loss up to a constant factor and design an optimal online learner for realizable regression.
arXiv Detail & Related papers (2023-07-07T21:39:25Z)
- A Combinatorial Characterization of Supervised Online Learnability [20.291598040396302]
We study the online learnability of hypothesis classes with respect to arbitrary, but bounded loss functions.
We give a new scale-sensitive dimension, named the sequential minimax dimension, and show that it gives a tight quantitative characterization of online learnability.
arXiv Detail & Related papers (2023-07-07T20:11:07Z)
- On Differentially Private Online Predictions [74.01773626153098]
We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies (suitable variants of) group privacy, composition, and post-processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z)
- Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601]
We introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, and yet data-hungry, rare classes.
Experiments show that our approach can achieve significant performance boosts on rare classes while maintaining robust common class performance.
arXiv Detail & Related papers (2021-12-13T15:48:59Z)
- Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation [74.3002974673248]
We consider the offline reinforcement learning problem, where the aim is to learn a decision making policy from logged data.
Offline RL is becoming increasingly relevant in practice because online data collection is ill-suited to safety-critical domains.
Our results show that sample-efficient offline reinforcement learning requires either restrictive coverage conditions or representation conditions that go beyond supervised learning.
arXiv Detail & Related papers (2021-11-21T23:22:37Z)
- Realizable Learning is All You Need [21.34668631009594]
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory.
We give the first model-independent framework explaining the equivalence of realizable and agnostic learnability.
arXiv Detail & Related papers (2021-11-08T19:00:00Z)
- A Computational Separation between Private Learning and Online Learning [19.001036556917818]
A concept class is privately learnable if and only if it is online learnable.
This equivalence incurs significant losses in both sample and computational efficiency.
We show that, assuming the existence of one-way functions, such an efficient conversion is impossible for pure-private learners with polynomial sample complexity.
arXiv Detail & Related papers (2020-07-11T02:41:54Z)
- Bookworm continual learning: beyond zero-shot learning and continual learning [52.95405249201296]
We propose a flexible setting where unseen classes can be inferred via a semantic model, and the visual model can be updated continually.
We also propose the bidirectional imagination (BImag) framework to address BCL where features of both past and future classes are generated.
arXiv Detail & Related papers (2020-06-26T19:07:18Z)
- On Learnability under General Stochastic Processes [20.22409095000365]
Statistical learning under general non-i.i.d. processes is less mature.
We provide two natural notions of learnability of a function class under a general process.
Our results hold for both binary classification and regression.
arXiv Detail & Related papers (2020-05-15T15:49:23Z)
- An Equivalence Between Private Classification and Online Prediction [43.37646702577754]
We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm.
We introduce a new notion of stability called "global stability" which is essential to our proof and may be of independent interest.
arXiv Detail & Related papers (2020-03-01T19:20:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.