PrIsing: Privacy-Preserving Peer Effect Estimation via Ising Model
- URL: http://arxiv.org/abs/2401.16596v1
- Date: Mon, 29 Jan 2024 21:56:39 GMT
- Title: PrIsing: Privacy-Preserving Peer Effect Estimation via Ising Model
- Authors: Abhinav Chakraborty, Anirban Chatterjee and Abhinandan Dalal
- Abstract summary: We present a novel $(\varepsilon,\delta)$-differentially private algorithm specifically designed to protect the privacy of individual agents' outcomes.
Our algorithm allows for precise estimation of the natural parameter using a single network through an objective perturbation technique.
- Score: 3.4308998211298314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Ising model, originally developed as a spin-glass model for ferromagnetic
elements, has gained popularity as a network-based model for capturing
dependencies in agents' outputs. Its increasing adoption in healthcare and the
social sciences has raised privacy concerns regarding the confidentiality of
agents' responses. In this paper, we present a novel
$(\varepsilon,\delta)$-differentially private algorithm specifically designed
to protect the privacy of individual agents' outcomes. Our algorithm allows for
precise estimation of the natural parameter using a single network through an
objective perturbation technique. Furthermore, we establish regret bounds for
this algorithm and assess its performance on synthetic datasets and two
real-world networks: one involving HIV status in a social network and the other
concerning the political leaning of online blogs.
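The abstract's central tool is objective perturbation applied to a pseudo-likelihood of the Ising natural parameter. A minimal sketch of that idea follows, assuming a single-parameter model with a known adjacency matrix; the function name, the pseudo-likelihood form, the Gaussian noise calibration, and the [0, 1] search interval are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def private_ising_estimate(A, spins, eps, delta, rng=None):
    """Objective-perturbation sketch for the Ising natural parameter theta.

    A     : (n, n) symmetric adjacency matrix of the observed network
    spins : (n,) vector of agent outcomes in {-1, +1}
    The noise scale below is a generic Gaussian-mechanism placeholder;
    the paper's (eps, delta)-DP calibration is more involved.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = A @ spins  # local fields m_i = sum_j A_ij * spins_j

    # Linear perturbation term b * theta added to the objective.
    b = rng.normal(0.0, np.sqrt(2.0 * np.log(1.25 / delta)) / eps)

    def perturbed_nll(theta):
        # Negative log pseudo-likelihood, sum_i log(1 + exp(-2 theta s_i m_i)),
        # computed stably via logaddexp, plus the privacy perturbation.
        return np.sum(np.logaddexp(0.0, -2.0 * theta * spins * m)) + b * theta

    return minimize_scalar(perturbed_nll, bounds=(0.0, 1.0), method="bounded").x
```

A call like `private_ising_estimate(A, spins, eps=1.0, delta=1e-5)` returns a noisy point estimate; restricting theta to [0, 1] is an arbitrary choice made only to keep the sketch one-dimensional and bounded.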
Related papers
- TernaryVote: Differentially Private, Communication Efficient, and
Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
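A toy sketch of the compress-then-vote pattern described above; the stochastic rounding rule and function names are assumptions, and TernaryVote's actual DP noising and Byzantine filtering are omitted.

```python
import numpy as np

def ternary_compress(grad, rng):
    """Stochastically round each coordinate of a gradient to {-1, 0, +1}."""
    scale = np.max(np.abs(grad)) + 1e-12
    keep = rng.random(grad.shape) < np.abs(grad) / scale  # send-probability
    return np.sign(grad) * keep

def majority_vote(votes):
    """Coordinate-wise majority over the workers' ternary gradients."""
    return np.sign(np.sum(votes, axis=0))

rng = np.random.default_rng(0)
grads = [rng.normal(size=5) for _ in range(7)]  # 7 workers' local gradients
step_direction = majority_vote([ternary_compress(g, rng) for g in grads])
```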
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- Differentially Private Decentralized Learning with Random Walks [15.862152253607496]
We characterize the privacy guarantees of decentralized learning with random walk algorithms, where a model is updated by traveling from one node to another along the edges of a communication graph.
Our results reveal that random walk algorithms tend to yield better privacy guarantees than gossip algorithms for nodes that are close to each other.
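A bare-bones sketch of the random-walk training pattern the summary describes, with the model hopping along edges and updated locally at each visited node; the graph representation, the gradient oracle, and the absence of any DP noise are simplifying assumptions.

```python
import numpy as np

def random_walk_training(neighbors, local_grad, model, steps, lr, rng):
    """Update `model` by walking node-to-node along the communication graph.

    neighbors  : dict mapping node -> list of adjacent nodes
    local_grad : callable (node, model) -> gradient from that node's data
    """
    node = rng.choice(list(neighbors))
    for _ in range(steps):
        model = model - lr * local_grad(node, model)  # update at current node
        node = rng.choice(neighbors[node])            # hop to a random neighbor
    return model
```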
arXiv Detail & Related papers (2024-02-12T08:16:58Z)
- Chained-DP: Can We Recycle Privacy Budget? [18.19895364709435]
We propose a novel Chained-DP framework enabling users to carry out data aggregation sequentially to recycle the privacy budget.
We show the mathematical nature of the sequential game, solve its Nash Equilibrium, and design an incentive mechanism with provable economic properties.
Our numerical simulation validates the effectiveness of Chained-DP, showing that it can significantly save privacy budget and lower estimation error compared to the traditional LDP mechanism.
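As a loose illustration of sequential aggregation along a chain of users, the sketch below passes a running Laplace-noised sum from user to user; the per-user noise scales are placeholders, and the paper's game-theoretic budget-recycling logic is entirely omitted.

```python
import numpy as np

def chained_aggregate(values, noise_scales, rng):
    """Pass a running noisy sum along a chain of users.

    Each user adds their private value plus Laplace noise to the total
    they received, so only a perturbed running sum ever leaves a device.
    """
    total = 0.0
    for v, scale in zip(values, noise_scales):
        total += v + rng.laplace(0.0, scale)
    return total  # noisy estimate of sum(values)
```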
arXiv Detail & Related papers (2023-09-12T08:07:59Z)
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-Preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
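A minimal sketch of parameter distortion with a per-parameter noise scale, which is the degree of freedom the summary highlights; the Gaussian noise and the choice of scales are assumptions.

```python
import numpy as np

def distort_update(params, per_param_scale, rng):
    """Distort each model parameter with its own noise scale before upload.

    `per_param_scale` has the same shape as `params`, so the
    privacy-utility trade-off can differ per parameter, per client,
    and per communication round.
    """
    return params + rng.normal(0.0, 1.0, size=params.shape) * per_param_scale
```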
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning [26.783944764936994]
We study the privacy risks that are associated with training a neural network's weights with self-supervised learning algorithms.
We design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights.
We show that the proposed protection algorithm can effectively reduce the attack accuracy to roughly 50%, equivalent to random guessing.
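The protection described above is additive noise on the fine-tuned weights; given the mechanism's name, a sketch with logistic noise is natural, though the noise-scale calibration here is an assumption.

```python
import numpy as np

def protect_weights(weights, scale, rng):
    """Add logistic noise to fine-tuned weights after training.

    `scale` trades off privacy (larger noise) against accuracy; the
    paper's calibration of this scale is omitted here.
    """
    return weights + rng.logistic(loc=0.0, scale=scale, size=weights.shape)
```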
arXiv Detail & Related papers (2022-05-25T01:33:52Z)
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
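One plausible reading of "privatising activations" is to clip and noise a layer's outputs in the forward pass; the sketch below does exactly that, with the clipping norm and noise scale as assumed hyperparameters rather than NeuralDP's actual construction.

```python
import numpy as np

def privatise_activations(acts, clip_norm, noise_scale, rng):
    """Clip each activation vector's L2 norm, then add Gaussian noise."""
    norms = np.linalg.norm(acts, axis=1, keepdims=True)
    clipped = acts * np.minimum(1.0, clip_norm / (norms + 1e-12))
    return clipped + rng.normal(0.0, noise_scale, size=acts.shape)
```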
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
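The two ingredients named above, per-example gradient clipping and noise addition, make up the standard DP-SGD step, sketched here with generic hyperparameter names.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_mult, lr, params, rng):
    """One DP-SGD step: clip each example's gradient, sum, noise, average."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    noisy_mean = (clipped.sum(axis=0)
                  + rng.normal(0.0, noise_mult * clip_norm,
                               size=params.shape)) / len(per_example_grads)
    return params - lr * noisy_mean
```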
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- A Distributed Privacy-Preserving Learning Dynamics in General Social Networks [22.195555664935814]
We study a distributed privacy-preserving learning problem in general social networks.
We propose a four-staged distributed social learning algorithm.
We provide answers to two fundamental questions about the performance of our algorithm.
arXiv Detail & Related papers (2020-11-15T04:00:45Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
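As summarized, the mechanism perturbs the loss value itself rather than the gradients; a minimal sketch with Gaussian noise and generic names, omitting the paper's Rényi-DP accounting:

```python
import numpy as np

def noisy_loss(loss_value, noise_scale, rng):
    """Perturb the training loss before it drives a parameter update.

    Noising the loss value (instead of per-example gradients) is the
    RDP-GAN idea summarized above; this noise scale is a placeholder.
    """
    return loss_value + rng.normal(0.0, noise_scale)
```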
arXiv Detail & Related papers (2020-07-04T09:51:02Z)