Differentially Private Computation of Basic Reproduction Numbers in Networked Epidemic Models
- URL: http://arxiv.org/abs/2309.17284v1
- Date: Fri, 29 Sep 2023 14:38:02 GMT
- Title: Differentially Private Computation of Basic Reproduction Numbers in Networked Epidemic Models
- Authors: Bo Chen, Baike She, Calvin Hawkins, Alex Benvenuti, Brandon Fallin, Philip E. Paré, Matthew Hale
- Abstract summary: We develop a framework to compute and release the reproduction number of a networked epidemic model in a differentially private way.
We show that, under real-world conditions, we can compute $R_0$ in a differentially private way while incurring errors as low as $7.6\%$ on average.
- Score: 3.1966459264817875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The basic reproduction number of a networked epidemic model, denoted $R_0$, can be computed from a network's topology to quantify epidemic spread. However, disclosure of $R_0$ risks revealing sensitive information about the underlying network, such as an individual's relationships within a social network. Therefore, we propose a framework to compute and release $R_0$ in a differentially private way. First, we provide a new result that shows how $R_0$ can be used to bound the level of penetration of an epidemic within a single community as a motivation for the need for privacy, which may also be of independent interest. We next develop a privacy mechanism to formally safeguard the edge weights in the underlying network when computing $R_0$. Then we formalize tradeoffs between the level of privacy and the accuracy of values of the privatized $R_0$. To show the utility of the private $R_0$ in practice, we use it to bound this level of penetration under privacy, and concentration bounds on these analyses show they remain accurate with privacy implemented. We apply our results to real travel data gathered during the spread of COVID-19, and we show that, under real-world conditions, we can compute $R_0$ in a differentially private way while incurring errors as low as $7.6\%$ on average.
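As a rough illustration of the pipeline (a minimal sketch, assuming a networked SIS-type model where $R_0 = (\beta/\gamma)\,\lambda_{\max}(W)$ for a symmetric weighted contact matrix $W$, with edge weights privatized by the Laplace mechanism; the paper's exact mechanism, sensitivity analysis, and noise calibration may differ):

```python
import numpy as np

def private_r0(W, beta, gamma, epsilon, edge_sensitivity):
    """Illustrative sketch, not the paper's exact mechanism: add
    Laplace noise to the symmetric edge-weight matrix W, then release
    R0 = (beta / gamma) * spectral_radius(W_noisy)."""
    rng = np.random.default_rng()
    # Laplace mechanism on each upper-triangular edge weight; the scale
    # edge_sensitivity / epsilon assumes neighboring networks differ in
    # a single edge weight by at most edge_sensitivity.
    noise = np.triu(rng.laplace(scale=edge_sensitivity / epsilon, size=W.shape), k=1)
    W_noisy = np.clip(W + noise + noise.T, 0.0, None)
    # The clipping above and the eigenvalue computation below only
    # post-process the noisy matrix, so they cost no extra budget.
    return (beta / gamma) * np.max(np.abs(np.linalg.eigvalsh(W_noisy)))
```

Because differential privacy is closed under post-processing, any downstream use of the released value, such as the penetration bounds described in the abstract, inherits the same guarantee.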
Related papers
- PrIsing: Privacy-Preserving Peer Effect Estimation via Ising Model [3.4308998211298314]
We present a novel $(\varepsilon,\delta)$-differentially private algorithm specifically designed to protect the privacy of individual agents' outcomes.
Our algorithm allows for precise estimation of the natural parameter using a single network through an objective perturbation technique.
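A minimal sketch of the objective-perturbation idea mentioned above (the loss, noise scale, and parameter bound are illustrative placeholders, not PrIsing's actual construction):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def perturbed_estimate(neg_log_pseudolikelihood, epsilon, bound=5.0):
    """Objective perturbation, illustratively: add a random linear term
    b * theta to the loss and minimize, so the minimizer itself is
    private under suitable regularity conditions (not checked here)."""
    b = np.random.laplace(scale=1.0 / epsilon)  # illustrative noise scale
    res = minimize_scalar(
        lambda theta: neg_log_pseudolikelihood(theta) + b * theta,
        bounds=(-bound, bound),
        method="bounded",
    )
    return res.x
```

Unlike output perturbation, the noise here enters the objective before optimization rather than being added to a non-private estimate afterwards.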
arXiv Detail & Related papers (2024-01-29T21:56:39Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
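As a rough sketch of link local differential privacy (Blink's actual mechanism splits the budget between links and degrees and denoises server-side via Bayesian estimation; this shows only the basic randomized-response step, with illustrative names):

```python
import numpy as np

def randomize_links(adj_bits, epsilon):
    """Warner's randomized response on one node's 0/1 adjacency vector:
    keep each bit with probability e^eps / (1 + e^eps), flip otherwise,
    giving epsilon-LDP for each of that node's links."""
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    flip = np.random.random(adj_bits.shape) >= keep_prob
    return np.where(flip, 1 - adj_bits, adj_bits)
```

Since the flip probability is public, the server can debias aggregate statistics computed from the randomized links.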
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
- To share or not to share: What risks would laypeople accept to give sensitive data to differentially-private NLP systems? [14.586789605230672]
We argue that determining the $\varepsilon$ value should not be solely in the hands of researchers or system developers.
We conduct a behavioral experiment (311 lay participants) to study the behavior of people in uncertain decision-making situations.
arXiv Detail & Related papers (2023-07-13T12:06:48Z)
- Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile [23.05994842923702]
We study a privacy metric that quantifies the extent to which a model trained on a dataset using a Differential Privacy mechanism is "covered" by each of the distributions resulting from training on neighboring datasets.
We show that the privacy profile can be used to probe an observed transition to indistinguishability that takes place in the neighboring distributions as $\epsilon$ decreases.
arXiv Detail & Related papers (2023-06-27T20:39:07Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
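For reference, the standard tradeoff-function definition underlying $f$-DP (the paper's tight guarantees for specific discrete mechanisms are not reproduced here):

```latex
% T compares output distributions P, Q on neighboring datasets;
% phi ranges over rejection rules with 0 <= phi <= 1.
T(P, Q)(\alpha) = \inf_{\phi} \left\{ 1 - \mathbb{E}_{Q}[\phi] \,:\, \mathbb{E}_{P}[\phi] \le \alpha \right\}
```

A mechanism $M$ is $f$-DP if $T(M(D), M(D')) \ge f$ pointwise for all neighboring datasets $D$ and $D'$.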
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze it under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
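A back-of-the-envelope version of the $O(\log M)$ threshold (a heuristic assuming pure $\epsilon$-DP with all $M$ candidate values pairwise neighboring, not the paper's exact argument):

```latex
% Any reconstruction attack's success probability is bounded by
\Pr[\hat{X} = X] \le \frac{e^{\epsilon}}{M - 1 + e^{\epsilon}}
```

This stays close to the blind-guessing rate $1/M$ until $e^{\epsilon}$ becomes comparable to $M$, i.e. until $\epsilon \approx \log M$.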
arXiv Detail & Related papers (2022-10-24T23:50:12Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it can fall short when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
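The ingredients in question are those of DP-SGD; a minimal sketch of one update step (hyperparameter names are illustrative):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    sum, add Gaussian noise calibrated to the clip norm, then average."""
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
        scale=noise_multiplier * clip_norm, size=params.shape
    )
    return params - lr * noisy_sum / len(per_example_grads)
```

It is precisely these two knobs, the clipping threshold and the noise scale, whose effect on robustness the paper studies.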
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Learning from Mixtures of Private and Public Populations [25.365515662502784]
We study a new model of supervised learning under privacy constraints.
The goal is to design a learning algorithm that satisfies differential privacy only with respect to private examples.
arXiv Detail & Related papers (2020-08-01T20:11:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.