Multiclass versus Binary Differentially Private PAC Learning
- URL: http://arxiv.org/abs/2107.10870v1
- Date: Thu, 22 Jul 2021 18:06:39 GMT
- Title: Multiclass versus Binary Differentially Private PAC Learning
- Authors: Mark Bun, Marco Gaboardi, Satchit Sivakumar
- Abstract summary: We show a generic reduction from multiclass differentially private PAC learning to binary private PAC learning.
Our proof extends the notion of $\Psi$-dimension defined in work of Ben-David et al. [JCSS '95] to the online setting and explores its general properties.
- Score: 32.22526322514124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show a generic reduction from multiclass differentially private PAC
learning to binary private PAC learning. We apply this transformation to a
recently proposed binary private PAC learner to obtain a private multiclass
learner with sample complexity that has a polynomial dependence on the
multiclass Littlestone dimension and a poly-logarithmic dependence on the
number of classes. This yields an exponential improvement in the dependence on
both parameters over learners from previous work. Our proof extends the notion
of $\Psi$-dimension defined in work of Ben-David et al. [JCSS '95] to the
online setting and explores its general properties.
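To make the flavor of such a reduction concrete, the sketch below shows one generic, non-private way to assemble a multiclass predictor from binary learners by bit-decomposing the labels into $\lceil \log_2 k \rceil$ binary problems. This is an illustration under our own assumptions, not the reduction constructed in the paper: the `binary_learner` argument is a hypothetical stand-in for a (private) binary PAC learner, and no privacy mechanism is implemented. It is meant only to suggest why the dependence on the number of classes can be poly-logarithmic.

```python
# Illustrative sketch only: a generic bit-decomposition reduction from k-class
# classification to ceil(log2 k) binary problems. This is NOT the paper's
# construction and omits any privacy mechanism; `binary_learner` is a
# hypothetical black-box binary PAC learner supplied by the caller.
import math
from typing import Callable, List, Sequence, Tuple

BinaryLearner = Callable[[Sequence[Tuple[object, int]]], Callable[[object], int]]

def multiclass_from_binary(
    data: Sequence[Tuple[object, int]],  # (x, y) pairs with y in {0, ..., k-1}
    k: int,
    binary_learner: BinaryLearner,
) -> Callable[[object], int]:
    n_bits = max(1, math.ceil(math.log2(k)))
    # Train one binary hypothesis per bit of the label's binary encoding.
    bit_hypotheses: List[Callable[[object], int]] = []
    for b in range(n_bits):
        bit_data = [(x, (y >> b) & 1) for x, y in data]
        bit_hypotheses.append(binary_learner(bit_data))

    def predict(x: object) -> int:
        # Reassemble the predicted label from the predicted bits.
        label = 0
        for b, h in enumerate(bit_hypotheses):
            label |= (h(x) & 1) << b
        return min(label, k - 1)  # clamp labels outside {0, ..., k-1}

    return predict
```

Because only $\lceil \log_2 k \rceil$ binary learners are invoked, the number of classes enters such a scheme logarithmically; the paper's actual reduction and its analysis via the online $\Psi$-dimension are more involved than this toy decomposition.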
Related papers
- Private PAC Learning May be Harder than Online Learning [14.180842831990999]
We show that any concept class of Littlestone dimension $d$ can be privately PAC learned using $\mathrm{poly}(d)$ samples.
This raises the natural question of whether there might be a generic conversion from online learners to private PAC learners.
arXiv Detail & Related papers (2024-02-16T22:44:52Z) - On Differentially Private Online Predictions [74.01773626153098]
We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies (suitable variants of) group privacy, composition, and post-processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z) - Differentially-Private Bayes Consistency [70.92545332158217]
We construct a Bayes consistent learning rule that satisfies differential privacy (DP)
We prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal sample complexity.
arXiv Detail & Related papers (2022-12-08T11:57:30Z) - Differentially Private Multivariate Time Series Forecasting of
Aggregated Human Mobility With Deep Learning: Input or Gradient Perturbation? [14.66445694852729]
This paper investigates the problem of forecasting multivariate aggregated human mobility while preserving the privacy of the individuals concerned.
Differential privacy, a state-of-the-art formal notion, has been used as the privacy guarantee in two different and independent steps when training deep learning models.
The results show that differentially private deep learning models trained under gradient or input perturbation achieve nearly the same performance as their non-private counterparts.
arXiv Detail & Related papers (2022-05-01T10:11:04Z) - A Characterization of Multiclass Learnability [18.38631912121182]
We characterize multiclass PAC learnability through the DS dimension, a dimension defined by Daniely and Shalev-Shwartz (2014).
In the list learning setting, instead of predicting a single outcome for a given unseen input, the goal is to provide a short menu of predictions.
Our second main result concerns the Natarajan dimension, which has been a central candidate for characterizing multiclass learnability.
arXiv Detail & Related papers (2022-03-03T07:41:54Z) - Sample-efficient proper PAC learning with approximate differential
privacy [51.09425023771246]
We prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde{O}(d^6)$, ignoring privacy and accuracy parameters.
This result answers a question of Bun et al. (FOCS 2020) by improving upon their upper bound of $2^{O(d)}$ on the sample complexity.
arXiv Detail & Related papers (2020-12-07T18:17:46Z) - Tighter Generalization Bounds for Iterative Differentially Private
Learning Algorithms [95.73230376153872]
This paper studies the relationship between generalization and privacy preservation in iterative learning algorithms in two sequential steps.
We prove that $(\varepsilon, \delta)$-differential privacy implies an on-average generalization bound for multi-database learning algorithms.
We then investigate how the iterative nature shared by most learning algorithms influences privacy preservation and, in turn, generalization.
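For reference, the $(\varepsilon, \delta)$-differential privacy guarantee assumed here and in the other works above is the standard one: a randomized algorithm $M$ is $(\varepsilon, \delta)$-differentially private if, for all pairs of datasets $D, D'$ differing in a single record and all events $S$,
$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \,\Pr[M(D') \in S] + \delta.$$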
arXiv Detail & Related papers (2020-07-18T09:12:03Z) - A Computational Separation between Private Learning and Online Learning [19.001036556917818]
A concept class is privately learnable if and only if it is online learnable.
This equivalence incurs significant losses in both sample and computational efficiency.
We show that, assuming the existence of one-way functions, such an efficient conversion is impossible even for pure-private learners with polynomial sample complexity.
arXiv Detail & Related papers (2020-07-11T02:41:54Z) - Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z) - Efficient, Noise-Tolerant, and Private Learning via Boosting [15.62988331732388]
We show how to construct noise-tolerant and private PAC learners for large-margin halfspaces.
This first bound illustrates a general methodology for obtaining PAC learners from privacy.
The second bound uses standard techniques to match the best known sample complexity for differentially private learning of large-margin halfspaces.
arXiv Detail & Related papers (2020-02-04T03:16:37Z)