Privately Learning Markov Random Fields
- URL: http://arxiv.org/abs/2002.09463v2
- Date: Fri, 14 Aug 2020 14:24:58 GMT
- Title: Privately Learning Markov Random Fields
- Authors: Huanyu Zhang, Gautam Kamath, Janardhan Kulkarni, Zhiwei Steven Wu
- Abstract summary: We consider the problem of learning Markov Random Fields (including the Ising model) under the constraint of differential privacy.
We provide algorithms and lower bounds for both problems under a variety of privacy constraints.
- Score: 44.95321417724914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of learning Markov Random Fields (including the
prototypical example, the Ising model) under the constraint of differential
privacy. Our learning goals include both structure learning, where we try to
estimate the underlying graph structure of the model, as well as the harder
goal of parameter learning, in which we additionally estimate the parameter on
each edge. We provide algorithms and lower bounds for both problems under a
variety of privacy constraints -- namely pure, concentrated, and approximate
differential privacy. While non-privately, both learning goals enjoy roughly
the same complexity, we show that this is not the case under differential
privacy. In particular, only structure learning under approximate differential
privacy maintains the non-private logarithmic dependence on the dimensionality
of the data, while a change in either the learning goal or the privacy notion
would necessitate a polynomial dependence. As a result, we show that the
privacy constraint imposes a strong separation between these two learning
problems in the high-dimensional data regime.
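For intuition, the following minimal sketch (an assumption-laden illustration, not the paper's algorithm) shows the flavor of the structure-learning goal: each node of an Ising model is regressed on the remaining nodes with ℓ1-regularized logistic regression, the learned weights are perturbed with Gaussian noise, and edges are kept by thresholding. The noise scale `sigma`, the `threshold`, and the use of simple output perturbation are hypothetical choices made for this summary.

```python
# Hypothetical sketch: per-node l1 logistic regression for Ising structure
# learning, with Gaussian noise added to the fitted weights before
# thresholding. Noise scale and threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def private_ising_structure(X, sigma=0.2, threshold=0.5, seed=None):
    """X: n x p matrix of +/-1 spins; returns a symmetric boolean adjacency estimate."""
    rng = np.random.default_rng(seed)
    _, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for i in range(p):
        y = (X[:, i] > 0).astype(int)                # regress node i on the others
        Z = np.delete(X, i, axis=1)
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(Z, y)
        w = clf.coef_.ravel() + rng.normal(0.0, sigma, size=p - 1)  # hypothetical output perturbation
        neighbours = np.delete(np.arange(p), i)[np.abs(w) > threshold]
        adj[i, neighbours] = True
    return np.logical_or(adj, adj.T)                 # symmetrize the neighbourhood estimates

# Toy usage: independent random spins, so there is no real structure to recover.
X = np.where(np.random.default_rng(0).random((500, 5)) > 0.5, 1, -1)
print(private_ising_structure(X, seed=1))
```

Larger noise forces either a higher threshold or more samples before weak edges can be detected, which is the kind of cost the abstract contrasts across the different privacy notions.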
Related papers
- Federated Transfer Learning with Differential Privacy [21.50525027559563]
We formulate the notion of federated differential privacy, which offers privacy guarantees for each data set without assuming a trusted central server.
We show that federated differential privacy is an intermediate privacy model between the well-established local and central models of differential privacy.
arXiv Detail & Related papers (2024-03-17T21:04:48Z) - Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z) - On Differential Privacy and Adaptive Data Analysis with Bounded Space [76.10334958368618]
We study the space complexity of the two related fields of differential privacy and adaptive data analysis.
We show that there exists a problem P that requires exponentially more space to be solved efficiently with differential privacy than without it.
The line of work on adaptive data analysis focuses on understanding the number of samples needed for answering a sequence of adaptive queries.
arXiv Detail & Related papers (2023-02-11T14:45:31Z) - Differentially private partitioned variational inference [28.96767727430277]
Learning a privacy-preserving model from sensitive data which are distributed across multiple devices is an increasingly important problem.
We present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution.
arXiv Detail & Related papers (2022-09-23T13:58:40Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - Differentially Private Multivariate Time Series Forecasting of
Aggregated Human Mobility With Deep Learning: Input or Gradient Perturbation? [14.66445694852729]
This paper investigates the problem of forecasting multivariate aggregated human mobility while preserving the privacy of the individuals concerned.
Differential privacy, a state-of-the-art formal notion, has been used as the privacy guarantee in two different and independent steps when training deep learning models.
As shown in the results, differentially private deep learning models trained under gradient or input perturbation achieve nearly the same performance as non-private deep learning models.
arXiv Detail & Related papers (2022-05-01T10:11:04Z) - Learning to be adversarially robust and differentially private [42.7930886063265]
We study the difficulties in learning that arise from robust and differentially private optimization.
The data-dimensionality-dependent term introduced by private optimization compounds the difficulty of learning a robust model.
The size of adversarial generalization and the clipping norm in differential privacy both increase the curvature of the loss landscape, implying poorer performance.
arXiv Detail & Related papers (2022-01-06T22:33:06Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model (a minimal sketch of these ingredients appears after this list).
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual
Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z) - Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
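As referenced in the entry on robustness threats above, here is a minimal sketch of the two ingredients named there, per-example gradient clipping and Gaussian noise addition, applied to a toy least-squares model; the clip norm, noise multiplier, learning rate, and model are assumptions made for illustration, not any specific paper's setup.

```python
# Minimal sketch of gradient clipping plus noise addition (DP-SGD style)
# on a toy least-squares problem. All hyperparameters are assumptions.
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_multiplier=1.0, rng=None):
    """One noisy gradient step for least-squares on the batch (X, y)."""
    rng = np.random.default_rng(rng)
    per_example_grads = 2 * (X @ w - y)[:, None] * X              # n x d per-example gradients
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)   # clip each example to norm <= clip
    noise = rng.normal(0.0, noise_multiplier * clip, size=w.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(X)           # add Gaussian noise to the sum
    return w - lr * noisy_mean

# Example usage on synthetic data.
rng = np.random.default_rng(0)
X, w_true = rng.normal(size=(128, 3)), np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=128)
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
print(w)
```

Input perturbation, the alternative discussed in the mobility-forecasting entry, would instead add noise to X and y once before training rather than to the gradients at every step.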
This list is automatically generated from the titles and abstracts of the papers on this site.