Differentially Private Naive Bayes Classifier using Smooth Sensitivity
- URL: http://arxiv.org/abs/2003.13955v2
- Date: Mon, 19 Jul 2021 17:31:22 GMT
- Title: Differentially Private Naive Bayes Classifier using Smooth Sensitivity
- Authors: Farzad Zafarani and Chris Clifton
- Abstract summary: We have provided a differentially private Naive Bayes classifier that adds noise proportional to the Smooth Sensitivity of its parameters.
Our experimental results on real-world datasets show that the accuracy of our method improves significantly while still preserving $\varepsilon$-differential privacy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing collection of users' data, protecting individual privacy
has gained more interest. Differential Privacy is a strong concept of
protecting individuals. Naive Bayes is one of the most popular machine learning
algorithms, used as a baseline for many tasks. In this work, we have provided a
differentially private Naive Bayes classifier that adds noise proportional to
the Smooth Sensitivity of its parameters. We have compared our result to
Vaidya, Shafiq, Basu, and Hong in which they have scaled the noise to the
global sensitivity of the parameters. Our experimental results on real-world
datasets show that the accuracy of our method improves significantly while
still preserving $\varepsilon$-differential privacy.
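The abstract names the baseline it improves on: Vaidya et al. scale noise to the global sensitivity of the Naive Bayes parameters, and each count in a categorical Naive Bayes model has global sensitivity 1. Below is a minimal sketch of that global-sensitivity baseline; the function names, the even budget split, and the clipping of noisy counts are illustrative assumptions, and the paper's own method (calibrating noise to the smooth sensitivity of the parameters) is not implemented here.

```python
import numpy as np

def dp_naive_bayes_fit(X, y, n_classes, n_values, epsilon, rng=None):
    """Categorical Naive Bayes under epsilon-DP via the Laplace mechanism.

    Each group of counts (class counts, plus one count table per feature)
    has L1 global sensitivity 1: adding or removing one record changes
    exactly one cell per group by 1. Splitting epsilon evenly across the
    1 + n_features groups gives epsilon-DP overall by sequential
    composition. (Illustrative baseline, not the paper's
    smooth-sensitivity mechanism.)"""
    rng = rng if rng is not None else np.random.default_rng()
    n_features = X.shape[1]
    eps_group = epsilon / (1 + n_features)

    # Noisy class counts -> log priors.
    class_counts = np.array([(y == c).sum() for c in range(n_classes)], float)
    class_counts += rng.laplace(scale=1.0 / eps_group, size=n_classes)
    class_counts = np.clip(class_counts, 1e-9, None)  # keep probabilities valid
    log_prior = np.log(class_counts / class_counts.sum())

    # Noisy per-feature count tables -> log likelihoods.
    log_lik = []
    for j in range(n_features):
        counts = np.zeros((n_classes, n_values[j]))
        for c in range(n_classes):
            for v in range(n_values[j]):
                counts[c, v] = np.sum((y == c) & (X[:, j] == v))
        counts += rng.laplace(scale=1.0 / eps_group, size=counts.shape)
        counts = np.clip(counts, 1e-9, None)
        log_lik.append(np.log(counts / counts.sum(axis=1, keepdims=True)))
    return log_prior, log_lik

def dp_naive_bayes_predict(x, log_prior, log_lik):
    """Predict the class of one record x of categorical feature values."""
    scores = log_prior.copy()
    for j, ll in enumerate(log_lik):
        scores += ll[:, x[j]]
    return int(np.argmax(scores))
```

The change the paper proposes amounts to replacing the fixed Laplace scale above with one proportional to a smooth upper bound on the local sensitivity of each parameter; on real data this bound is typically far below the global one, which is consistent with the accuracy gains the abstract reports.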
Related papers
- Differentially Private Online Bayesian Estimation With Adaptive Truncation [1.14219428942199]
We propose a novel online and adaptive truncation method for differentially private Bayesian online estimation of a static parameter regarding a population.
We aim to design predictive queries with small sensitivity, hence small privacy-preserving noise, enabling more accurate estimation while maintaining the same level of privacy.
arXiv Detail & Related papers (2023-01-19T17:53:53Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it has limitations when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Private Boosted Decision Trees via Smooth Re-Weighting [2.099922236065961]
Differential Privacy is the appropriate mathematical framework for formal guarantees of privacy.
We propose and test a practical algorithm for boosting decision trees that guarantees differential privacy.
arXiv Detail & Related papers (2022-01-29T20:08:52Z)
- Not all noise is accounted equally: How differentially private learning benefits from large sampling rates [0.0]
In differentially private SGD, the gradients computed at each training iteration are subject to two different types of noise.
In this study, we show that these two types of noise are equivalent in their effect on the utility of private neural networks.
We propose a training paradigm that shifts the proportions of noise towards less inherent and more additive noise (a minimal sketch of the two noise sources follows this entry).
arXiv Detail & Related papers (2021-10-12T18:11:31Z)
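In DP-SGD, the two noise types the entry above refers to are the sampling noise inherent to drawing a random lot of examples and the Gaussian noise added after gradient clipping. The sketch below isolates both in one step, following the standard DP-SGD recipe rather than the cited paper's exact training paradigm; `grad_fn` and all parameter names are assumptions for illustration.

```python
import numpy as np

def dp_sgd_step(params, grad_fn, data, lot_size, clip_norm, sigma, lr, rng):
    """One DP-SGD step, isolating the two noise types the entry contrasts."""
    n = len(data)

    # Noise type 1 (inherent): random subsampling of the lot.
    mask = rng.random(n) < (lot_size / n)  # Poisson-style sampling
    lot = [data[i] for i in np.nonzero(mask)[0]]

    # Clip each per-example gradient to L2 norm <= clip_norm.
    total = np.zeros_like(params)
    for example in lot:
        g = grad_fn(params, example)
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        total += g

    # Noise type 2 (additive): Gaussian noise scaled to the clip norm.
    total += rng.normal(scale=sigma * clip_norm, size=params.shape)
    return params - lr * total / max(len(lot), 1)
```

A larger sampling rate (lot_size closer to n) reduces the inherent noise per step while leaving the additive noise fixed, which is the trade-off the entry's training paradigm exploits.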
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely accepted and widely applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Privacy Amplification Via Bernoulli Sampling [24.23990103106668]
We analyze privacy amplification properties of a new operation, sampling from the posterior, that is used in Bayesian inference.
We provide an algorithm to compute the amplification factor in this setting, and establish upper and lower bounds on this factor (a classical subsampling bound is sketched after this entry).
arXiv Detail & Related papers (2021-05-21T22:34:32Z)
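The entry above analyzes amplification for posterior sampling specifically; as background, the classical amplification-by-subsampling bound for a generic mechanism run on a Bernoulli(q) subsample is easy to compute. This is the standard textbook bound, not the cited paper's amplification factor.

```python
import math

def subsampled_epsilon(eps, q):
    """Classical bound: an eps-DP mechanism applied to a Bernoulli(q)
    subsample of the data satisfies ln(1 + q*(e^eps - 1))-DP."""
    return math.log1p(q * math.expm1(eps))

print(subsampled_epsilon(1.0, 0.1))  # ~0.159: a 10% sample amplifies eps=1
```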
- Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees (a toy shuffle-model sketch follows this entry).
arXiv Detail & Related papers (2020-12-23T17:07:26Z)
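To make the shuffle model in the entry above concrete, here is a toy sketch: each user applies binary randomized response locally, and the shuffler only destroys the order of the reports. The randomizer and parameter names are generic illustrations; the cited paper's contribution is the analysis of how much shuffling amplifies the per-report guarantee, which this sketch does not reproduce.

```python
import numpy as np

def randomized_response(bit, eps, rng):
    """eps-LDP binary randomizer: truthful with probability e^eps/(e^eps+1)."""
    p_truth = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def shuffle_model_round(bits, eps, rng):
    """Each user randomizes locally; the shuffler applies a uniformly
    random permutation, which is what amplifies the central guarantee."""
    reports = [randomized_response(b, eps, rng) for b in bits]
    rng.shuffle(reports)  # the shuffler discards who sent which report
    return reports

rng = np.random.default_rng(0)
print(shuffle_model_round([0, 1, 1, 0, 1], eps=1.0, rng=rng))
```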
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Privacy-Preserving Boosting in the Local Setting [17.375582978294105]
In machine learning, boosting is one of the most popular methods designed to combine multiple base learners into a superior one.
In the big data era, the data held by individuals and entities, such as personal images, browsing history, and census information, is more likely to contain sensitive information.
Local Differential Privacy is proposed as an effective privacy protection approach, offering a strong guarantee to data owners (a toy local-DP report is sketched after this entry).
arXiv Detail & Related papers (2020-02-06T04:48:51Z)
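As a minimal illustration of the local model invoked in the entry above, here is a toy numeric local-DP report: each data owner clamps and perturbs their own value before it leaves their device, so the guarantee holds against the aggregator itself. This generic Laplace report is an assumption for illustration, not the boosting mechanism of the cited paper.

```python
import numpy as np

def local_laplace_report(value, lower, upper, eps, rng):
    """eps-LDP report of one numeric value: clamp to [lower, upper], then
    add Laplace noise scaled to the full range, the worst-case difference
    between any two clamped values."""
    clamped = min(max(value, lower), upper)
    return clamped + rng.laplace(scale=(upper - lower) / eps)

# The aggregator only ever sees noisy reports; averaging many of them
# still recovers the population mean because the noise is zero-mean.
rng = np.random.default_rng(0)
values = rng.random(10000)  # true values in [0, 1]
reports = [local_laplace_report(v, 0.0, 1.0, 1.0, rng) for v in values]
print(np.mean(reports))  # close to the true mean of ~0.5
```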
This list is automatically generated from the titles and abstracts of the papers in this site.