An Equivalence Between Private Classification and Online Prediction
- URL: http://arxiv.org/abs/2003.00563v3
- Date: Tue, 22 Jun 2021 08:52:13 GMT
- Title: An Equivalence Between Private Classification and Online Prediction
- Authors: Mark Bun and Roi Livni and Shay Moran
- Abstract summary: We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm.
We introduce a new notion of stability called "global stability" which is essential to our proof and may be of independent interest.
- Score: 43.37646702577754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We prove that every concept class with finite Littlestone dimension can be
learned by an (approximate) differentially-private algorithm. This answers an
open question of Alon et al. (STOC 2019) who proved the converse statement
(this question was also asked by Neel et al.~(FOCS 2019)). Together these two
results yield an equivalence between online learnability and private PAC
learnability.
We introduce a new notion of algorithmic stability called "global stability"
which is essential to our proof and may be of independent interest. We also
discuss an application of our results to boosting the privacy and accuracy
parameters of differentially-private learners.
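For context, minimal statements of the two notions the equivalence rests on, in their standard forms (the paper's exact parameterization may differ):

```latex
% (\varepsilon, \delta)-differential privacy: a randomized learner A is
% (\varepsilon, \delta)-DP if, for all datasets S, S' differing in one
% example and every event E over the output,
\[
\Pr[A(S) \in E] \;\le\; e^{\varepsilon}\,\Pr[A(S') \in E] + \delta .
\]
% Global stability (as introduced in this paper): A is (n, \eta)-globally
% stable w.r.t. a distribution D if some single hypothesis h is output
% with noticeable probability over the draw of the sample:
\[
\exists\, h :\quad \Pr_{S \sim D^n}\big[ A(S) = h \big] \;\ge\; \eta .
\]
```

Roughly, a globally stable learner can be privatized by running it on independent samples and privately selecting a hypothesis that occurs frequently, which is how the paper bridges from online learnability to private learnability.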
Related papers
- Learning across Data Owners with Joint Differential Privacy [13.531808240117645]
We study the setting in which data owners train machine learning models collaboratively under a privacy notion called joint differential privacy.
In this setting, the model trained for each data owner $j$ uses $j$'s data without privacy consideration and other owners' data with differential privacy guarantees.
We present an algorithm, a variant of DP-SGD, and provide theoretical bounds on its population loss.
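As background for readers new to DP-SGD, here is a minimal sketch of its core update (per-example gradient clipping plus Gaussian noise), shown for logistic regression; the paper's joint-DP variant differs in which gradients are privatized, and all names below are illustrative:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for logistic regression (illustrative sketch)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Per-example gradients of the logistic loss: (sigmoid(Xw) - y) * x.
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    grads = (preds - y)[:, None] * X                    # shape (n, d)
    # Clip each example's gradient to L2 norm at most `clip`.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise calibrated to the clip norm, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(0.0, noise_mult * clip,
                                               size=w.shape)
    return w - lr * noisy_sum / len(y)

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(20):
    w = dp_sgd_step(w, X, y, rng=rng)
```

The privacy accounting (how `noise_mult` and the number of steps translate into $(\varepsilon, \delta)$) is the subtle part and is omitted here.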
arXiv Detail & Related papers (2023-05-25T05:11:40Z) - On Differentially Private Online Predictions [74.01773626153098]
We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies suitable variants of group privacy, composition, and post-processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z) - Learning versus Refutation in Noninteractive Local Differential Privacy [133.80204506727526]
We study two basic statistical tasks in non-interactive local differential privacy (LDP): learning and refutation.
Our main result is a complete characterization of the sample complexity of PAC learning for non-interactive LDP protocols.
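As a concrete reference point, the canonical non-interactive LDP primitive is randomized response: each user privatizes their own bit before sending it. A minimal sketch (the standard mechanism, not one of the paper's protocols):

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    """epsilon-LDP release of one bit: report the truth w.p. e^eps/(1+e^eps)."""
    p_true = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p_true else 1 - bit

def debias_mean(reports, epsilon):
    """Unbiased estimate of the true-bit mean from noisy reports."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return (sum(reports) / len(reports) - (1.0 - p)) / (2.0 * p - 1.0)

# The server never sees raw bits, only locally randomized reports.
bits = [1] * 700 + [0] * 300
reports = [randomized_response(b, epsilon=1.0) for b in bits]
print(round(debias_mean(reports, epsilon=1.0), 2))  # close to 0.7
```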
arXiv Detail & Related papers (2022-10-26T03:19:24Z) - Debugging Differential Privacy: A Case Study for Privacy Auditing [60.87570714269048]
We show that auditing can also be used to find flaws in (purportedly) differentially private schemes.
In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
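The arithmetic behind such an audit is simple: $(\varepsilon, \delta)$-DP forces any membership-inference attack to satisfy $\mathrm{TPR} \le e^{\varepsilon}\,\mathrm{FPR} + \delta$, so confidently estimated attack rates yield a statistical lower bound on $\varepsilon$. A hedged sketch of that conversion (illustrative, not the paper's auditing harness):

```python
import math
from scipy.stats import beta

def clopper_pearson(k, n, alpha=1e-10):
    """Exact two-sided (1 - alpha) confidence interval for a binomial rate."""
    lo = beta.ppf(alpha / 2.0, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1.0 - alpha / 2.0, k + 1, n - k) if k < n else 1.0
    return lo, hi

def epsilon_lower_bound(tp, n_members, fp, n_nonmembers,
                        delta=1e-5, alpha=1e-10):
    """Lower-bound epsilon from attack outcomes via TPR <= e^eps * FPR + delta."""
    tpr_lo, _ = clopper_pearson(tp, n_members, alpha)
    _, fpr_hi = clopper_pearson(fp, n_nonmembers, alpha)
    if tpr_lo <= delta or fpr_hi <= 0.0:
        return 0.0  # attack too weak to refute any claimed epsilon
    return math.log((tpr_lo - delta) / fpr_hi)

# A strong, confidently measured attack refutes small claimed epsilons.
print(epsilon_lower_bound(tp=980, n_members=1000, fp=20, n_nonmembers=1000))
```

A very small `alpha` is what permits claims at the 99.99999999% confidence level quoted above.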
arXiv Detail & Related papers (2022-02-24T17:31:08Z) - Sample-efficient proper PAC learning with approximate differential privacy [51.09425023771246]
We prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde{O}(d^6)$, ignoring privacy and accuracy parameters.
This result answers a question of Bun et al. (FOCS 2020) by improving upon their upper bound of $2^{O(d)}$ on the sample complexity.
arXiv Detail & Related papers (2020-12-07T18:17:46Z) - Tighter Generalization Bounds for Iterative Differentially Private Learning Algorithms [95.73230376153872]
This paper studies the relationship between generalization and privacy preservation in iterative learning algorithms in two sequential steps.
We prove that $(\varepsilon, \delta)$-differential privacy implies an on-average generalization bound for multi-database learning algorithms.
We then investigate how the iterative nature shared by most learning algorithms influences privacy preservation and further generalization.
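For losses bounded in $[0, 1]$, one standard form of the first step reads as follows (schematic; the paper's multi-database statement is more refined):

```latex
% If A is (\varepsilon, \delta)-DP, then its expected generalization gap
% between population loss L_D and empirical loss L_S obeys
\[
\Big|\, \mathbb{E}_{S \sim D^n}\big[ L_D(A(S)) - L_S(A(S)) \big] \Big|
\;\le\; e^{\varepsilon} - 1 + \delta
\;\approx\; \varepsilon + \delta \quad \text{for small } \varepsilon .
\]
```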
arXiv Detail & Related papers (2020-07-18T09:12:03Z) - A Computational Separation between Private Learning and Online Learning [19.001036556917818]
A concept class is privately learnable if and only if it is online learnable; however, this equivalence incurs significant losses in both sample and computational efficiency.
We show that, assuming the existence of one-way functions, an efficient conversion is impossible for pure-private learners with polynomial sample complexity.
arXiv Detail & Related papers (2020-07-11T02:41:54Z) - On the Equivalence between Online and Private Learnability beyond Binary Classification [26.400891660337777]
We show that private learnability implies online learnability in both settings.
We show that while online learnability continues to imply private learnability in multi-class classification, current proof techniques encounter significant hurdles in the regression setting.
arXiv Detail & Related papers (2020-06-02T23:30:41Z) - Privately Learning Markov Random Fields [44.95321417724914]
We consider the problem of learning Markov Random Fields (including the Ising model) under the constraint of differential privacy.
We provide algorithms and lower bounds for both structure learning and parameter learning under a variety of privacy constraints.
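For reference, the Ising model named here is the pairwise binary Markov random field, in its standard form:

```latex
% Ising model over x \in \{-1, +1\}^p with interaction matrix A and
% external field \theta:
\[
\Pr[X = x] \;\propto\; \exp\!\Big( \sum_{i < j} A_{ij}\, x_i x_j
  + \sum_{i} \theta_i x_i \Big).
\]
% Learning here means recovering A and \theta (parameters) or the
% support of A (structure) from samples of X, under privacy.
```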
arXiv Detail & Related papers (2020-02-21T18:30:48Z)