Group Decision-Making among Privacy-Aware Agents
- URL: http://arxiv.org/abs/2402.08156v4
- Date: Thu, 11 Apr 2024 14:59:39 GMT
- Title: Group Decision-Making among Privacy-Aware Agents
- Authors: Marios Papachristou, M. Amin Rahimian
- Abstract summary: Preserving individual privacy and enabling efficient social learning are both important desiderata but seem fundamentally at odds with each other.
We reconcile these desiderata by controlling information leakage using rigorous statistical guarantees based on differential privacy (DP).
Our results flesh out the nature of the trade-offs in both cases between the quality of the group decision outcomes, learning accuracy, communication cost, and the level of privacy protections that the agents are afforded.
- Score: 2.4401219403555814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How can individuals exchange information to learn from each other despite their privacy needs and security concerns? For example, consider individuals deliberating a contentious topic while being concerned about divulging their private experiences. Preserving individual privacy and enabling efficient social learning are both important desiderata but seem fundamentally at odds with each other and very hard to reconcile. We reconcile them by controlling information leakage using rigorous statistical guarantees based on differential privacy (DP). Our agents use log-linear rules to update their beliefs after communicating with their neighbors. Adding DP randomization noise to beliefs provides communicating agents with plausible deniability with regard to their private information and their network neighborhoods. We consider two learning environments: one for distributed maximum-likelihood estimation given a finite number of private signals, and another for online learning from an infinite, intermittent signal stream. Noisy information aggregation in the finite case leads to interesting trade-offs between rejecting low-quality states and ensuring that all high-quality states are accepted in the algorithm output. Our results flesh out the nature of the trade-offs in both cases between the quality of the group decision outcomes, learning accuracy, communication cost, and the level of privacy protections that the agents are afforded.
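To make the mechanism concrete, below is a minimal sketch of one communication round under the log-linear rule the abstract describes: an agent perturbs the log-belief it broadcasts with Laplace noise (pure $\epsilon$-DP per broadcast, under an assumed sensitivity bound), pools its neighbors' noisy messages geometrically, and tilts by the log-likelihood of its own private signal, i.e., roughly $\mu_{i,t+1}(\theta) \propto \ell_i(s_{i,t} \mid \theta) \prod_j \mu_{j,t}(\theta)^{w_{ij}}$. Function names, weights, and the sensitivity bound are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_log_belief(log_belief, epsilon, sensitivity):
    """Laplace mechanism on the broadcast log-belief.

    `sensitivity` is an assumed bound on how much a single private signal
    can shift the log-belief; scale = sensitivity / epsilon yields pure
    epsilon-DP for each broadcast.
    """
    noise = rng.laplace(scale=sensitivity / epsilon, size=log_belief.shape)
    return log_belief + noise

def log_linear_update(self_log_belief, neighbor_messages, weights, log_likelihood):
    """One round of geometric (log-linear) pooling plus a Bayesian tilt."""
    # Convex combination in log space = weighted geometric mean of beliefs.
    pooled = weights[0] * self_log_belief
    for w, msg in zip(weights[1:], neighbor_messages):
        pooled = pooled + w * msg
    updated = pooled + log_likelihood              # incorporate the new signal
    return updated - np.logaddexp.reduce(updated)  # renormalize in log space

# Toy usage: two states, a uniform prior, and one neighbor.
prior = np.log(np.array([0.5, 0.5]))
message = privatize_log_belief(prior, epsilon=1.0, sensitivity=1.0)
posterior = log_linear_update(prior, [message], weights=[0.5, 0.5],
                              log_likelihood=np.log(np.array([0.7, 0.3])))
```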
Related papers
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, respectively, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - Federated Transfer Learning with Differential Privacy [21.50525027559563]
We formulate the notion of federated differential privacy, which offers privacy guarantees for each data set without assuming a trusted central server.
We show that federated differential privacy is an intermediate privacy model between the well-established local and central models of differential privacy.
arXiv Detail & Related papers (2024-03-17T21:04:48Z) - DPMAC: Differentially Private Communication for Cooperative Multi-Agent Reinforcement Learning [21.961558461211165]
Communication lays the foundation for cooperation in human society and in multi-agent reinforcement learning (MARL).
We propose the differentially private multi-agent communication (DPMAC) algorithm, which protects the sensitive information of individual agents by equipping each agent with a local message sender carrying a rigorous $(\epsilon, \delta)$-differential privacy guarantee; a generic Gaussian-mechanism sketch illustrating such guarantees appears after this list.
We prove the existence of a Nash equilibrium in cooperative MARL with privacy-preserving communication, which suggests that this problem is game-theoretically learnable.
arXiv Detail & Related papers (2023-08-19T04:26:23Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling performance-privacy trade-off while preserving a higher privacy level than state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z) - Differentially Private Federated Combinatorial Bandits with Constraints [8.390356883529172]
This work investigates a group of agents working concurrently to solve similar bandit problems while maintaining quality constraints.
We show that our algorithm provides an improvement in terms of regret while upholding quality threshold and meaningful privacy guarantees.
arXiv Detail & Related papers (2022-06-27T11:14:28Z) - "Am I Private and If So, how Many?" -- Using Risk Communication Formats for Making Differential Privacy Understandable [0.0]
We adapt risk communication formats in conjunction with a model for the privacy risks of Differential Privacy.
We evaluate these novel privacy communication formats in a crowdsourced study.
arXiv Detail & Related papers (2022-04-08T13:30:07Z) - Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint differential privacy (JDP) and local differential privacy (LDP) by leveraging the shuffle model of privacy; a toy shuffle-model sketch appears after this list.
arXiv Detail & Related papers (2021-12-11T15:23:28Z) - Privacy-Preserving Communication-Efficient Federated Multi-Armed Bandits [17.039484057126337]
Communication bottleneck and data privacy are two critical concerns in federated multi-armed bandit (MAB) problems.
We design a privacy-preserving, communication-efficient algorithm for such problems and study the interactions among privacy, communication, and learning performance in terms of regret.
arXiv Detail & Related papers (2021-11-02T12:56:12Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
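The DPMAC entry above refers to equipping each agent with a message sender carrying an $(\epsilon, \delta)$-differential-privacy guarantee. For reference, below is a minimal sketch of the standard Gaussian mechanism that underlies such guarantees; it is a generic illustration, not DPMAC's learned stochastic sender, and the sensitivity bound is an assumption.

```python
import numpy as np

def gaussian_mechanism(message, epsilon, delta, sensitivity, rng=None):
    """Release `message` with an (epsilon, delta)-DP guarantee.

    Classic Gaussian mechanism (valid for 0 < epsilon < 1):
    sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon,
    where `sensitivity` bounds the L2 change from one agent's data.
    """
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return np.asarray(message, dtype=float) + rng.normal(scale=sigma, size=np.shape(message))

# Privatize a message vector before sending it to other agents.
noisy = gaussian_mechanism([0.2, -1.0, 0.5, 0.0], epsilon=0.5, delta=1e-5, sensitivity=1.0)
```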
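The shuffle-model entry above claims a trade-off between JDP and LDP via shuffling. Below is a toy sketch of the shuffle model itself, using randomized response as the local randomizer: each user randomizes locally, a shuffler permutes the reports to amplify privacy, and the analyst debiases the aggregate. This is an illustrative pipeline, not the paper's bandit algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_response(bit, eps_local):
    """Local randomizer: report the true bit w.p. e^eps / (1 + e^eps)."""
    p_keep = np.exp(eps_local) / (1.0 + np.exp(eps_local))
    return bit if rng.random() < p_keep else 1 - bit

def shuffle_and_estimate(bits, eps_local):
    """Shuffle-model pipeline: randomize locally, permute, then debias the mean."""
    reports = np.array([randomized_response(b, eps_local) for b in bits])
    rng.shuffle(reports)  # the shuffler removes the user-to-report linkage
    p = np.exp(eps_local) / (1.0 + np.exp(eps_local))
    # E[report] = (1 - p) + (2p - 1) * bit, so invert that affine map.
    return (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)

estimate = shuffle_and_estimate(bits=[1, 0, 1, 1, 0, 1], eps_local=1.0)
```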
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.