Differentially Private Online Bayesian Estimation With Adaptive
Truncation
- URL: http://arxiv.org/abs/2301.08202v1
- Date: Thu, 19 Jan 2023 17:53:53 GMT
- Title: Differentially Private Online Bayesian Estimation With Adaptive
Truncation
- Authors: Sinan Yıldırım
- Abstract summary: We propose a novel online and adaptive truncation method for differentially private Bayesian online estimation of a static parameter regarding a population.
We aim to design predictive queries with small sensitivity, hence small privacy-preserving noise, enabling more accurate estimation while maintaining the same level of privacy.
- Score: 1.14219428942199
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel online and adaptive truncation method for differentially
private Bayesian online estimation of a static parameter regarding a
population. We assume that sensitive information from individuals is collected
sequentially and the inferential aim is to estimate, on the fly, a static
parameter regarding the population to which those individuals belong. We
propose sequential Monte Carlo to perform online Bayesian estimation. When
individuals provide sensitive information in response to a query, it is
necessary to perturb it with privacy-preserving noise to ensure the privacy of
those individuals. The amount of perturbation is proportional to the
sensitivity of the query, which is usually determined by the range of the
queried information. The truncation technique we propose adapts to the
previously collected observations to adjust the query range for the next
individual. The idea is that, based on previous observations, we can carefully
arrange the interval into which the next individual's information is to be
truncated before being perturbed with privacy-preserving noise. In this way, we
aim to design predictive queries with small sensitivity, hence small
privacy-preserving noise, enabling more accurate estimation while maintaining
the same level of privacy. To decide on the location and the width of the
interval, we use an exploration-exploitation approach à la Thompson sampling
with an objective function based on the Fisher information of the generated
observation. We show the merits of our methodology with numerical examples.
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454] (2024-11-04)
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvement in forgetting error compared to the state-of-the-art.
- Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314] (2024-07-25)
The shuffle model of Differential Privacy (DP) is an enhanced privacy protocol which introduces an intermediate trusted shuffler between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the locally randomized data.
This work focuses on deriving the central privacy bound for a more practical setting where personalized local privacy is required by each user.
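A minimal sketch of the shuffle-model pipeline with personalized local budgets, assuming binary user data and randomized response at each user; the debiasing step below uses the mean keep-probability as a simplification, and none of this reproduces the paper's privacy bound.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
true_bits = rng.integers(0, 2, n)          # each user's private bit
eps_local = rng.uniform(0.5, 2.0, n)       # personalized local budgets (assumed)

# Local step: eps_i-DP randomized response at each user.
p_keep = np.exp(eps_local) / (np.exp(eps_local) + 1.0)
flip = rng.random(n) > p_keep
reports = np.where(flip, 1 - true_bits, true_bits)

# Shuffle step: a trusted shuffler permutes reports, severing user-report
# links; anonymity amplifies the central DP guarantee beyond each eps_i.
shuffled = rng.permutation(reports)

# Analyst step: debias. E[report_i] = (1 - p_i) + (2 p_i - 1) * bit_i, so
# using the mean keep-probability gives an approximately unbiased estimate
# when budgets are independent of the data (a simplifying assumption).
p_bar = p_keep.mean()
est = (shuffled.mean() - (1 - p_bar)) / (2 * p_bar - 1)
print("true mean:", true_bits.mean(), "estimate:", est)
```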
- Certification for Differentially Private Prediction in Gradient-Based Training [36.686002369773014] (2024-06-19)
We use convex relaxation and bound propagation to compute a provable upper bound on the local and smooth sensitivity of a prediction.
This bound allows us to reduce the magnitude of noise added or improve privacy accounting in the private prediction setting.
- RASE: Efficient Privacy-preserving Data Aggregation against Disclosure Attacks for IoTs [2.1765174838950494] (2024-05-31)
We study a new paradigm for collecting and protecting the data produced by an ever-increasing number of sensor devices.
Most previous studies on co-design of data aggregation and privacy preservation assume that a trusted fusion center adheres to privacy regimes.
We propose a novel paradigm (called RASE) that can be generalized into a 3-step sequential procedure: noise addition, followed by random permutation, and then parameter estimation (see the sketch below).
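A toy sketch of that 3-step structure, assuming bounded sensor readings and Laplace noise; RASE's actual estimator and threat model are more involved than this.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy IoT setting (not from the paper): bounded sensor readings.
n, lo, hi, eps = 5_000, 0.0, 50.0, 1.0
readings = rng.uniform(lo, hi, n)

# Step 1: noise addition at each device, Laplace scaled to the value range.
noisy = readings + rng.laplace(scale=(hi - lo) / eps, size=n)

# Step 2: random permutation, so reports cannot be linked back to devices.
permuted = rng.permutation(noisy)

# Step 3: parameter estimation from the anonymized, noisy reports.
# Laplace noise is zero-mean, so the sample mean remains unbiased.
print("true mean:", readings.mean(), "estimate:", permuted.mean())
```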
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805] (2023-10-31)
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
- A Unified Approach to Differentially Private Bayes Point Estimation [7.599399338954307] (2022-11-18)
The concept of differential privacy (DP) has been proposed, which enforces confidentiality by introducing randomization into the estimates.
Standard algorithms for differentially private estimation are based on adding an appropriate amount of noise to the output of a traditional point estimation method.
We propose a new Unified Bayes Private Point (UBaPP) approach to Bayes point estimation of the unknown parameters of a data generating mechanism under a DP constraint.
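A sketch of the standard output-perturbation baseline that this summary describes (clip, estimate, add Laplace noise calibrated to sensitivity); this is the conventional approach UBaPP departs from, not UBaPP itself.

```python
import numpy as np

rng = np.random.default_rng(3)

def dp_mean(data, lo, hi, eps, rng):
    """Standard output perturbation: clip, average, add Laplace noise.

    Clipping to [lo, hi] bounds the L1 sensitivity of the mean by
    (hi - lo) / n, which calibrates the Laplace scale for eps-DP.
    """
    n = len(data)
    clipped = np.clip(data, lo, hi)
    sensitivity = (hi - lo) / n
    return clipped.mean() + rng.laplace(scale=sensitivity / eps)

data = rng.normal(10.0, 3.0, 1_000)
print("non-private mean:", data.mean())
print("eps=1 private mean:", dp_mean(data, 0.0, 20.0, 1.0, rng))
```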
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658] (2021-10-04)
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
- Non-parametric Differentially Private Confidence Intervals for the Median [3.205141100055992] (2021-06-18)
This paper proposes and evaluates several strategies to compute valid differentially private confidence intervals for the median.
We also illustrate that addressing both sources of uncertainty (the error from sampling and the error from protecting the output) should be preferred over simpler approaches that incorporate the uncertainty in a sequential fashion.
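The paper's interval constructions are not reproduced here, but one standard building block they rest on, a DP median via the exponential mechanism, can be sketched as follows (the domain bounds [lo, hi] are an assumption of the sketch).

```python
import numpy as np

rng = np.random.default_rng(4)

def dp_median(data, lo, hi, eps, rng):
    """DP median via the exponential mechanism over inter-point intervals.

    The utility of any point in the i-th interval is -|i - n/2| (how many
    ranks it sits away from the median); this utility has sensitivity 1,
    so weighting by exp(eps * u / 2) gives eps-DP.
    """
    x = np.sort(np.clip(data, lo, hi))
    edges = np.concatenate(([lo], x, [hi]))          # n + 1 intervals
    n = len(x)
    i = np.arange(n + 1)
    utility = -np.abs(i - n / 2)
    lengths = np.diff(edges)
    # Weight ∝ interval length * exp(eps * utility / 2), in log space.
    logw = np.log(np.maximum(lengths, 1e-12)) + eps * utility / 2.0
    w = np.exp(logw - logw.max())
    k = rng.choice(n + 1, p=w / w.sum())
    return rng.uniform(edges[k], edges[k + 1])       # uniform within interval

data = rng.normal(5.0, 2.0, 500)
print("true median:", np.median(data))
print("DP median:", dp_median(data, 0.0, 10.0, 1.0, rng))
```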
- Private Prediction Sets [72.75711776601973] (2021-02-11)
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322] (2020-10-23)
In decentralized learning, the local exchange of estimates allows the inference of private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
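A toy illustration of why a nullspace condition helps, using the simplest such condition (perturbations that sum to zero across agents and therefore cancel in the network average); the paper's graph-homomorphic construction is more general than this.

```python
import numpy as np

rng = np.random.default_rng(5)

k, d = 20, 4                                # agents, model dimension
estimates = rng.normal(0.0, 1.0, (k, d))    # each agent's local estimate

# Independent perturbations: the privacy noise survives in the average.
indep = rng.normal(0.0, 1.0, (k, d))

# Nullspace-style perturbations (one simple instance): project the noise
# onto the nullspace of the averaging operator, i.e. force it to sum to
# zero across agents. Each agent still shares a noisy estimate, but the
# noise cancels exactly in the network average.
nullspace = indep - indep.mean(axis=0, keepdims=True)

avg_clean = estimates.mean(axis=0)
err_indep = np.linalg.norm((estimates + indep).mean(axis=0) - avg_clean)
err_null = np.linalg.norm((estimates + nullspace).mean(axis=0) - avg_clean)
print("avg error, independent noise:", err_indep)   # O(1/sqrt(k))
print("avg error, nullspace noise:", err_null)      # ~0, exact cancellation
```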
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435] (2020-07-04)
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are trained on sensitive or private examples, such as medical or financial records, they may still divulge individuals' private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
This list is automatically generated from the titles and abstracts of the papers on this site.