An Equivalence Between Private Classification and Online Prediction
- URL: http://arxiv.org/abs/2003.00563v3
- Date: Tue, 22 Jun 2021 08:52:13 GMT
- Title: An Equivalence Between Private Classification and Online Prediction
- Authors: Mark Bun and Roi Livni and Shay Moran
- Abstract summary: We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm.
We introduce a new notion of stability called "global stability" which is essential to our proof and may be of independent interest.
- Score: 43.37646702577754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We prove that every concept class with finite Littlestone dimension can be
learned by an (approximate) differentially-private algorithm. This answers an
open question of Alon et al. (STOC 2019) who proved the converse statement
(this question was also asked by Neel et al. (FOCS 2019)). Together these two
results yield an equivalence between online learnability and private PAC
learnability.
We introduce a new notion of algorithmic stability called "global stability"
which is essential to our proof and may be of independent interest. We also
discuss an application of our results to boosting the privacy and accuracy
parameters of differentially-private learners.
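The equivalence (combining this paper with the converse of Alon et al.) and the global-stability notion can be summarized as follows; the exact quantifiers of global stability are sketched from the paper's definition and should be treated as illustrative:

```latex
% Equivalence: a class H is privately PAC learnable iff its
% Littlestone dimension is finite.
\[
  \mathcal{H} \text{ is (approximate) DP-PAC learnable}
  \iff
  \mathrm{Ldim}(\mathcal{H}) < \infty .
\]
% Global stability (sketch): a learner A is (n, \eta)-globally stable
% with respect to a distribution D if some single hypothesis h is
% output with noticeable probability over the draw of the sample:
\[
  \exists\, h :\quad
  \Pr_{S \sim D^{n}}\bigl[ A(S) = h \bigr] \;\ge\; \eta .
\]
```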
Related papers
- Characterizing Online and Private Learnability under Distributional Constraints via Generalized Smoothness [63.833913892018536]
We study sequential decision making under distributional adversaries that can adaptively choose data-generating distributions from a fixed family $U$. We provide a near-complete characterization of the families $U$ that admit learnability in terms of a notion known as generalized smoothness. We show that generalized smoothness also characterizes private learnability under distributional constraints.
arXiv Detail & Related papers (2026-02-24T06:15:59Z) - Learning Conditional Averages [52.361762722359366]
We introduce the problem of learning conditional averages in the PAC framework. Instead of learning the target concept itself, the goal is to predict, for each instance, the average label over its neighborhood. More generally, it extends PAC learning to a setting that captures learning tasks arising in several domains.
arXiv Detail & Related papers (2026-02-12T13:20:29Z) - Privately Learning Decision Lists and a Differentially Private Winnow [6.7839141352275645]
We give new differentially private algorithms for the classic problems of learning decision lists and large-margin halfspaces. In the PAC model, we give a computationally efficient algorithm for learning decision lists with minimal sample overhead over the best non-private algorithms. In the online model, we give a private analog of the influential Winnow algorithm for learning halfspaces, with a mistake bound polylogarithmic in the dimension and inversely dependent on the margin.
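For reference, the classic non-private Winnow update (Littlestone's multiplicative-weights algorithm, which a private analog would build on) can be sketched as follows. The example stream and the target disjunction below are illustrative assumptions, not the paper's construction:

```python
def winnow(stream, n, eta=2.0):
    """Classic (non-private) Winnow for monotone disjunctions over {0,1}^n.

    Predict 1 when the weighted sum reaches the threshold n; on a mistake,
    multiplicatively promote (y=1) or demote (y=0) the active coordinates.
    The mistake bound is O(k log n) for a k-literal target disjunction.
    """
    w = [1.0] * n
    mistakes = 0
    for x, y in stream:  # x is a 0/1 vector of length n, y is a 0/1 label
        yhat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= n else 0
        if yhat != y:
            mistakes += 1
            for i in range(n):
                if x[i]:
                    w[i] = w[i] * eta if y == 1 else w[i] / eta
    return w, mistakes

# Illustrative target: y = x0 OR x1 over n = 8 coordinates.
stream = [
    ([1, 0, 0, 0, 0, 0, 0, 0], 1),
    ([0, 1, 0, 0, 0, 0, 0, 0], 1),
    ([0, 0, 1, 1, 0, 0, 0, 0], 0),
    ([1, 0, 1, 0, 0, 0, 0, 0], 1),
    ([0, 0, 0, 0, 1, 1, 1, 1], 0),
]
w, mistakes = winnow(stream * 4, n=8)
print(f"mistakes over four passes: {mistakes}")  # 6: all in the early passes
```

Note that all mistakes occur in the first three passes; once the weights of the two relevant coordinates reach the threshold, the learner is consistent.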
arXiv Detail & Related papers (2026-02-07T05:30:09Z) - Learning across Data Owners with Joint Differential Privacy [13.531808240117645]
We study the setting in which data owners train machine learning models collaboratively under a privacy notion called joint differential privacy.
In this setting, the model trained for each data owner $j$ uses $j$'s data without privacy consideration and other owners' data with differential privacy guarantees.
We present an algorithm that is a variant of DP-SGD and provides theoretical bounds on its population loss.
arXiv Detail & Related papers (2023-05-25T05:11:40Z) - On Differentially Private Online Predictions [74.01773626153098]
We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies (suitable variants) of group privacy, composition, and post processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z) - Learning versus Refutation in Noninteractive Local Differential Privacy [133.80204506727526]
We study two basic statistical tasks in non-interactive local differential privacy (LDP): learning and refutation.
Our main result is a complete characterization of the sample complexity of PAC learning for non-interactive LDP protocols.
arXiv Detail & Related papers (2022-10-26T03:19:24Z) - Debugging Differential Privacy: A Case Study for Privacy Auditing [60.87570714269048]
We show that auditing can also be used to find flaws in (purportedly) differentially private schemes.
In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
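A minimal sketch of the black-box auditing idea this style of case study relies on (not the authors' actual pipeline): run a mechanism many times on two adjacent inputs and lower-bound $\varepsilon$ from the ratio of event probabilities. The Laplace mechanism, datasets, and threshold below are illustrative assumptions:

```python
import math
import random

def laplace_sum(data, epsilon, sensitivity=1.0):
    """Laplace mechanism for a sum query: add Lap(0, b) noise, b = sensitivity/epsilon.
    A Laplace variable is the difference of two Exp(1) variables, scaled by b."""
    b = sensitivity / epsilon
    noise = b * (random.expovariate(1.0) - random.expovariate(1.0))
    return sum(data) + noise

def empirical_epsilon(mechanism, d0, d1, event, trials=100_000):
    """Estimate a lower bound on epsilon from the definition of pure DP:
    Pr[M(d0) in E] <= exp(eps) * Pr[M(d1) in E] for every event E."""
    p0 = sum(event(mechanism(d0)) for _ in range(trials)) / trials
    p1 = sum(event(mechanism(d1)) for _ in range(trials)) / trials
    return math.log(max(p0, 1e-12) / max(p1, 1e-12))

random.seed(0)
d0 = [1] * 11  # adjacent datasets: they differ in exactly one record
d1 = [1] * 10
mech = lambda d: laplace_sum(d, epsilon=1.0)
est = empirical_epsilon(mech, d0, d1, event=lambda out: out > 10.5)
print(f"empirical epsilon lower bound: {est:.2f}")  # stays below the claimed eps = 1.0
```

If the estimate exceeded the claimed $\varepsilon$ (with statistical confidence), the implementation would be refuted; a real audit such as the one above searches over events and inputs to make the bound as tight as possible.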
arXiv Detail & Related papers (2022-02-24T17:31:08Z) - Sample-efficient proper PAC learning with approximate differential privacy [51.09425023771246]
We prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde{O}(d^6)$, ignoring privacy and accuracy parameters.
This result answers a question of Bun et al. (FOCS 2020) by improving upon their upper bound of $2^{O(d)}$ on the sample complexity.
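In symbols, with $d = \mathrm{Ldim}(\mathcal{H})$, the improvement reads as follows (the notation $n$ for sample complexity is illustrative):

```latex
\[
  n \;=\; \tilde{O}\!\left(d^{6}\right)
  \quad\text{(this work)}
  \qquad\text{vs.}\qquad
  n \;=\; 2^{O(d)}
  \quad\text{(Bun et al., FOCS 2020),}
\]
% both hiding the dependence on the privacy and accuracy parameters.
```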
arXiv Detail & Related papers (2020-12-07T18:17:46Z) - Tighter Generalization Bounds for Iterative Differentially Private Learning Algorithms [95.73230376153872]
This paper studies the relationship between generalization and privacy preservation in iterative learning algorithms via two sequential steps.
We prove that $(\varepsilon, \delta)$-differential privacy implies an on-average generalization bound for multi-database learning algorithms.
We then investigate how the iterative nature shared by most learning algorithms influences privacy preservation and, in turn, generalization.
arXiv Detail & Related papers (2020-07-18T09:12:03Z) - A Computational Separation between Private Learning and Online Learning [19.001036556917818]
A concept class is privately learnable if and only if it is online learnable.
This equivalence incurs significant losses in both sample and computational efficiency.
We show that, assuming the existence of one-way functions, such an efficient conversion is impossible for pure-private learners with polynomial sample complexity.
arXiv Detail & Related papers (2020-07-11T02:41:54Z) - On the Equivalence between Online and Private Learnability beyond Binary Classification [26.400891660337777]
We show that private learnability implies online learnability in both settings (multi-class classification and regression).
We show that while online learnability continues to imply private learnability in multi-class classification, current proof techniques encounter significant hurdles in the regression setting.
arXiv Detail & Related papers (2020-06-02T23:30:41Z) - Privately Learning Markov Random Fields [44.95321417724914]
We consider the problem of learning Markov Random Fields (including the Ising model) under the constraint of differential privacy.
We provide algorithms and lower bounds for both problems under a variety of privacy constraints.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences of its use.