Differentially Private Algorithms for Statistical Verification of
Cyber-Physical Systems
- URL: http://arxiv.org/abs/2004.00275v2
- Date: Mon, 27 Jun 2022 23:01:35 GMT
- Title: Differentially Private Algorithms for Statistical Verification of
Cyber-Physical Systems
- Authors: Yu Wang, Hussein Sibai, Mark Yen, Sayan Mitra, Geir E. Dullerud
- Abstract summary: We show that revealing the number of samples drawn can violate privacy.
We propose a new notion of differential privacy which we call expected differential privacy.
- Score: 5.987774571079633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical model checking is a class of sequential algorithms that can
verify specifications of interest on an ensemble of cyber-physical systems
(e.g., whether 99% of cars from a batch meet a requirement on their energy
efficiency). These algorithms infer the probability that given specifications
are satisfied by the systems with provable statistical guarantees by drawing
sufficient numbers of independent and identically distributed samples. During
the process of statistical model checking, the values of the samples (e.g., a
user's car energy efficiency) may be inferred by intruders, causing privacy
concerns in consumer-level applications (e.g., automobiles and medical
devices). This paper addresses the privacy of statistical model checking
algorithms from the point of view of differential privacy. These algorithms are
sequential, drawing samples until a condition on their values is met. We show
that revealing the number of samples drawn can violate privacy. We also
show that the standard exponential mechanism that randomizes the output of an
algorithm to achieve differential privacy fails to do so in the context of
sequential algorithms. Instead, we relax the conservative requirement in
differential privacy that the sensitivity of the algorithm's output must be
bounded for any perturbation of any data set. We propose a new notion of
differential privacy which we call expected differential privacy. Then, we
propose a novel expected sensitivity analysis for sequential algorithms and a
corresponding exponential mechanism that randomizes the termination time to
achieve expected differential privacy. We apply the proposed
mechanism to statistical model checking algorithms to preserve the privacy of
the samples they draw. The utility of the proposed algorithm is demonstrated in
a case study.
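To make the privacy issue concrete, here is a minimal sketch of a sequential statistical model checker based on Wald's sequential probability ratio test, together with a simple randomization of the released termination time. The SPRT stopping rule, the two-sided geometric noise, and the names `sprt_verify` and `randomize_termination` are illustrative assumptions, not the paper's construction; the paper's mechanism is an exponential mechanism calibrated to an expected (rather than worst-case) sensitivity bound.

```python
import math
import random


def sprt_verify(sample, p0=0.99, p1=0.95, alpha=0.01, beta=0.01, max_n=100_000):
    """Sequentially test H0: p >= p0 against H1: p <= p1 for a Bernoulli
    property (e.g., "this car meets the energy-efficiency requirement").
    Returns (decision, n); n, the number of samples drawn, depends on the
    observed values, which is why publishing it can leak information."""
    accept_h1 = math.log((1 - beta) / alpha)   # upper stopping threshold
    accept_h0 = math.log(beta / (1 - alpha))   # lower stopping threshold
    llr, n = 0.0, 0
    while accept_h0 < llr < accept_h1 and n < max_n:
        n += 1
        if sample():  # the drawn system satisfied the specification
            llr += math.log(p1 / p0)
        else:         # the drawn system violated the specification
            llr += math.log((1 - p1) / (1 - p0))
    # Undecided runs at max_n fall back to H0 here, purely for simplicity.
    return ("requirement not met" if llr >= accept_h1 else "requirement met"), n


def randomize_termination(n, epsilon=0.5):
    """Perturb the reported termination time with two-sided geometric
    (discrete-Laplace-style) noise before releasing it. Illustrative only:
    the paper instead randomizes termination via an exponential mechanism
    driven by an *expected* sensitivity analysis."""
    p = 1.0 - math.exp(-epsilon)
    noise = 0
    while random.random() > p:
        noise += 1
    if random.random() < 0.5:
        noise = -noise
    return max(n + noise, 1)


if __name__ == "__main__":
    # Hypothetical batch in which 99.5% of systems satisfy the specification.
    decision, n = sprt_verify(lambda: random.random() < 0.995)
    print(decision, "after", n, "samples; released count:", randomize_termination(n))
```

The point of the sketch is that n is a deterministic function of the drawn values, so releasing it exactly can reveal information about individual samples; randomizing the termination time is what the paper uses to restore (expected) differential privacy.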
Related papers
- CURATE: Scaling-up Differentially Private Causal Graph Discovery [8.471466670802817]
Differential Privacy (DP) has been adopted to ensure user privacy in Causal Graph Discovery (CGD).
We present CURATE, a DP-CGD framework with adaptive privacy budgeting.
We show that CURATE achieves higher utility than existing DP-CGD algorithms with less privacy leakage.
arXiv Detail & Related papers (2024-09-27T18:00:38Z)
- Conditional Density Estimations from Privacy-Protected Data [0.0]
We propose simulation-based inference methods from privacy-protected datasets.
We illustrate our methods on discrete time-series data under an infectious disease model and with ordinary linear regression models.
arXiv Detail & Related papers (2023-10-19T14:34:17Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are tight, but they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiable) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Privacy-preserving Non-negative Matrix Factorization with Outliers [2.84279467589473]
We focus on developing a non-negative matrix factorization algorithm in the privacy-preserving framework.
We propose a novel privacy-preserving algorithm for non-negative matrix factorization capable of operating on private data.
We show our proposed framework's performance on six real datasets.
arXiv Detail & Related papers (2022-11-02T19:42:18Z)
- Privacy Induces Robustness: Information-Computation Gaps and Sparse Mean Estimation [8.9598796481325]
We investigate the consequences of this observation for both algorithms and computational complexity across different statistical problems.
We establish an information-computation gap for private sparse mean estimation.
We also give evidence for privacy-induced information-computation gaps for several other statistics and learning problems.
arXiv Detail & Related papers (2022-11-01T20:03:41Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while protecting privacy (a generic DP-SGD update is sketched after this list).
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Efficient Mean Estimation with Pure Differential Privacy via a Sum-of-Squares Exponential Mechanism [16.996435043565594]
We give the first polynomial-time algorithm to estimate the mean of a $d$-variate probability distribution with bounded covariance from $\tilde{O}(d)$ independent samples subject to pure differential privacy.
Our main technique is a new approach to use the powerful Sum of Squares method (SoS) to design differentially private algorithms.
arXiv Detail & Related papers (2021-11-25T09:31:15Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in the setting of data processing.
arXiv Detail & Related papers (2021-07-09T07:19:23Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
- Privacy Preserving Recalibration under Domain Shift [119.21243107946555]
We introduce a framework that abstracts out the properties of recalibration problems under differential privacy constraints.
We also design a novel recalibration algorithm, accuracy temperature scaling, that outperforms prior work on private datasets.
arXiv Detail & Related papers (2020-08-21T18:43:37Z)
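For the "Differentially Private Stochastic Gradient Descent with Low-Noise" entry above, the following is a minimal sketch of the generic DP-SGD update (per-example gradient clipping followed by Gaussian noise), assuming a plain least-squares loss. The function name `dp_sgd_step` and all hyperparameters are illustrative and not taken from that paper, which studies the low-noise regime of such algorithms.

```python
import numpy as np


def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One generic DP-SGD update for least-squares regression on a minibatch:
    compute per-example gradients, clip each to L2 norm `clip`, sum, add
    Gaussian noise with std `noise_mult * clip`, and average over the batch."""
    rng = rng or np.random.default_rng()
    residual = X @ w - y                         # shape (batch,)
    per_example_grads = residual[:, None] * X    # shape (batch, dim)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)
    noise = rng.normal(0.0, noise_mult * clip, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_grad


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 5))
    true_w = np.arange(1.0, 6.0)
    y = X @ true_w + 0.1 * rng.normal(size=256)
    w = np.zeros(5)
    for _ in range(500):
        w = dp_sgd_step(w, X, y, rng=rng)
    print("learned weights:", np.round(w, 2))
```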