A Graph Symmetrisation Bound on Channel Information Leakage under
Blowfish Privacy
- URL: http://arxiv.org/abs/2007.05975v3
- Date: Wed, 13 Oct 2021 12:42:54 GMT
- Title: A Graph Symmetrisation Bound on Channel Information Leakage under
Blowfish Privacy
- Authors: Tobias Edwards, Benjamin I. P. Rubinstein, Zuhe Zhang, Sanming Zhou
- Abstract summary: Blowfish privacy is a recent generalisation of differential privacy that enables improved utility while maintaining privacy policies with semantic guarantees.
This paper relates Blowfish privacy to an important measure of privacy loss of information channels from the communications theory community: min-entropy leakage.
- Score: 12.72658988801038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blowfish privacy is a recent generalisation of differential privacy that
enables improved utility while maintaining privacy policies with semantic
guarantees, a factor that has driven the popularity of differential privacy in
computer science. This paper relates Blowfish privacy to an important measure
of privacy loss of information channels from the communications theory
community: min-entropy leakage. Symmetry in an input data neighbouring relation
is central to known connections between differential privacy and min-entropy
leakage. But while differential privacy exhibits strong symmetry, Blowfish
neighbouring relations correspond to arbitrary simple graphs owing to the
framework's flexible privacy policies. To bound the min-entropy leakage of
Blowfish-private mechanisms we organise our analysis over symmetrical
partitions corresponding to orbits of graph automorphism groups. A construction
meeting our bound with asymptotic equality demonstrates tightness.
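To make the abstract's two key objects concrete, the sketch below (illustrative only; the toy policy graph, channel, and parameters are our own choices, not the paper's) brute-forces the vertex orbits of a small policy graph's automorphism group, the symmetrical partitions the analysis is organised over, and evaluates the min-entropy leakage of a $k$-ary randomised-response channel, which under a uniform prior reduces to $\log_2 \sum_y \max_x C[x][y]$.

```python
# Illustrative sketch only -- not code from the paper. We pick a toy Blowfish
# policy graph (a 4-cycle on secrets 0..3), brute-force its automorphism
# group, extract the vertex orbits, and compute the min-entropy leakage of a
# k-ary randomised-response channel under a uniform prior.
from itertools import permutations
from math import exp, log2

V = [0, 1, 2, 3]                                   # secrets (graph vertices)
E = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}  # neighbouring pairs

def is_automorphism(sigma):
    """sigma is a tuple mapping vertex u to sigma[u]; check it preserves edges."""
    return all((frozenset((sigma[u], sigma[v])) in E) == (frozenset((u, v)) in E)
               for u in V for v in V if u < v)

autos = [s for s in permutations(V) if is_automorphism(s)]

# Vertex orbits: u and w share an orbit iff some automorphism maps u to w.
orbits, seen = [], set()
for u in V:
    if u not in seen:
        orbit = {s[u] for s in autos}
        orbits.append(sorted(orbit))
        seen |= orbit
print("automorphism group size:", len(autos))  # 8: dihedral group of the 4-cycle
print("vertex orbits:", orbits)                # [[0, 1, 2, 3]]: vertex-transitive

# Min-entropy leakage under a uniform prior: L(C) = log2(sum_y max_x C[x][y]).
eps, k = 1.0, len(V)
p_stay = exp(eps) / (exp(eps) + k - 1)         # report the true secret
p_move = 1.0 / (exp(eps) + k - 1)              # report any other value
C = [[p_stay if y == x else p_move for y in range(k)] for x in range(k)]
print("min-entropy leakage (bits):",
      log2(sum(max(C[x][y] for x in range(k)) for y in range(k))))
```

The 4-cycle is vertex-transitive, so all secrets fall into a single orbit; a less symmetric policy graph would split into several orbits, which is exactly the case the paper's partition-based analysis handles.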
Related papers
- Confounding Privacy and Inverse Composition [32.85314813605347]
In differential privacy, sensitive information is contained in the dataset while in Pufferfish privacy, sensitive information determines data distribution.
We introduce a novel privacy notion of $(\epsilon, \delta)$-confounding privacy that generalizes both differential privacy and Pufferfish privacy.
arXiv Detail & Related papers (2024-08-21T21:45:13Z) - Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding and protection against privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
arXiv Detail & Related papers (2024-07-26T07:40:54Z) - Metric geometry of the privacy-utility tradeoff [7.5764890276775665]
We propose a framework for characterizing the optimal privacy-accuracy tradeoff by the metric geometry of the underlying space.
We illustrate the applicability of our privacy-accuracy tradeoff framework via a diverse set of examples of metric spaces.
arXiv Detail & Related papers (2024-05-01T05:31:53Z) - Differentially Private Decentralized Learning with Random Walks [15.862152253607496]
We characterize the privacy guarantees of decentralized learning with random walk algorithms, where a model is updated by traveling from one node to another along the edges of a communication graph.
Our results reveal that random walk algorithms tend to yield better privacy guarantees than gossip algorithms for nodes that are close to each other; a minimal sketch of this random-walk pattern appears after this list.
arXiv Detail & Related papers (2024-02-12T08:16:58Z) - Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z) - Optimal Private Discrete Distribution Estimation with One-bit Communication [63.413106413939836]
We consider a private discrete distribution estimation problem under a one-bit communication constraint.
We characterize the first-order term of the worst-case privacy-utility trade-off under this constraint.
These results demonstrate the optimal dependence of the privacy-utility trade-off under the one-bit communication constraint.
arXiv Detail & Related papers (2023-10-17T05:21:19Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with
$f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modelled as heterogeneous graphs and handled by heterogeneous graph neural networks (HGNNs).
We propose HeteDP, a novel privacy-preserving method for heterogeneous graph neural networks based on a differential privacy mechanism.
arXiv Detail & Related papers (2022-10-02T14:41:02Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
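As referenced in the random-walk entry above, here is a minimal, self-contained sketch of the pattern that paper studies: a model vector travels the edges of a communication graph, and each visited node applies a clipped, Gaussian-noised local gradient step before forwarding the model to a random neighbour. The graph, local losses, and all constants below are invented for illustration; this is not the paper's algorithm and carries none of its guarantees.

```python
# Illustrative sketch of decentralized learning via a random walk -- not the
# paper's algorithm. A model vector hops along graph edges; each visited node
# takes a clipped, Gaussian-noised gradient step on its local quadratic loss.
import random
import numpy as np

rng = np.random.default_rng(0)
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # communication graph
targets = {v: rng.normal(size=2) for v in graph}       # each node's local data

theta, node = np.zeros(2), 0
lr, clip, sigma = 0.1, 1.0, 0.5                        # invented constants
for _ in range(200):
    g = theta - targets[node]                          # grad of ||theta - target||^2 / 2
    g /= max(1.0, float(np.linalg.norm(g)) / clip)     # clip to bound sensitivity
    theta = theta - lr * (g + sigma * rng.normal(size=2))  # Gaussian-mechanism noise
    node = random.choice(graph[node])                  # walk to a uniform neighbour
print("final model:", theta)
```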