Differential Privacy in Cooperative Multiagent Planning
- URL: http://arxiv.org/abs/2301.08811v1
- Date: Fri, 20 Jan 2023 21:36:57 GMT
- Title: Differential Privacy in Cooperative Multiagent Planning
- Authors: Bo Chen and Calvin Hawkins and Mustafa O. Karabag and Cyrus Neary and
Matthew Hale and Ufuk Topcu
- Abstract summary: We study sequential decision-making problems formulated as cooperative Markov games with reach-avoid objectives.
We apply a differential privacy mechanism to privatize agents' communicated symbolic state trajectories.
We synthesize policies that are robust to privacy by reducing the value of the total correlation.
- Score: 27.194032494266086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy-aware multiagent systems must protect agents' sensitive data while
simultaneously ensuring that agents accomplish their shared objectives. Towards
this goal, we propose a framework to privatize inter-agent communications in
cooperative multiagent decision-making problems. We study sequential
decision-making problems formulated as cooperative Markov games with
reach-avoid objectives. We apply a differential privacy mechanism to privatize
agents' communicated symbolic state trajectories, and then we analyze tradeoffs
between the strength of privacy and the team's performance. For a given level
of privacy, this tradeoff is shown to depend critically upon the total
correlation among agents' state-action processes. We synthesize policies that
are robust to privacy by reducing the value of the total correlation. Numerical
experiments demonstrate that the team's performance under these policies
decreases by only 3 percent when comparing private versus non-private
implementations of communication. By contrast, the team's performance decreases
by roughly 86 percent when using baseline policies that ignore total
correlation and only optimize team performance.
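The privatization step described in the abstract can be illustrated with a minimal sketch: k-ary randomized response applied independently to each symbol of a communicated state trajectory. This is a generic $\epsilon$-local-DP mechanism over a finite state alphabet, not necessarily the exact mechanism the paper uses; the alphabet and trajectory below are hypothetical.

```python
import math
import random


def privatize_trajectory(trajectory, alphabet, epsilon, rng=random):
    """Apply k-ary randomized response independently to each symbol.

    Each symbol is kept with probability e^eps / (e^eps + k - 1) and
    otherwise replaced by a uniformly random *different* symbol, which
    gives epsilon-local differential privacy per reported symbol.
    """
    k = len(alphabet)
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    out = []
    for s in trajectory:
        if rng.random() < p_keep:
            out.append(s)
        else:
            out.append(rng.choice([a for a in alphabet if a != s]))
    return out


# Hypothetical 4-state symbolic abstraction and a short trajectory.
alphabet = ["s0", "s1", "s2", "s3"]
traj = ["s0", "s1", "s1", "s3"]
noisy = privatize_trajectory(traj, alphabet, epsilon=2.0)
```

Teammates then plan against the noisy trajectory, which is where the paper's privacy-performance trade-off (and its dependence on total correlation) enters.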
Related papers
- MAGPIE: A benchmark for Multi-AGent contextual PrIvacy Evaluation [61.92403071137653]
Existing privacy benchmarks only focus on simplistic, single-turn interactions where private information can be trivially omitted without affecting task outcomes.
We introduce MAGPIE, a novel benchmark designed to evaluate privacy understanding and preservation in multi-agent collaborative, non-adversarial scenarios.
Our evaluation reveals that state-of-the-art agents, including GPT-5 and Gemini 2.5-Pro, exhibit significant privacy leakage.
arXiv Detail & Related papers (2025-10-16T23:12:12Z)
- The Sum Leaks More Than Its Parts: Compositional Privacy Risks and Mitigations in Multi-Agent Collaboration [72.33801123508145]
Large language models (LLMs) are integral to multi-agent systems.
Privacy risks emerge that extend beyond memorization, direct inference, or single-turn evaluations.
In particular, seemingly innocuous responses, when composed across interactions, can cumulatively enable adversaries to recover sensitive information.
arXiv Detail & Related papers (2025-09-16T16:57:25Z)
- Multi-Hop Privacy Propagation for Differentially Private Federated Learning in Social Networks [1.5653612447564105]
Federated learning enables collaborative model training across decentralized clients without sharing local data.
A client's privacy loss depends not only on its privacy protection strategy but also on the privacy decisions of others.
We propose a socially-aware privacy-preserving FL mechanism that systematically quantifies indirect privacy leakage.
arXiv Detail & Related papers (2025-08-11T06:53:32Z)
- MAGPIE: A dataset for Multi-AGent contextual PrIvacy Evaluation [54.410825977390274]
Existing benchmarks to evaluate contextual privacy in LLM-agents primarily assess single-turn, low-complexity tasks.
We first present a benchmark, MAGPIE, comprising 158 real-life high-stakes scenarios across 15 domains.
We then evaluate the current state-of-the-art LLMs on their understanding of contextually private data and their ability to collaborate without violating user privacy.
arXiv Detail & Related papers (2025-06-25T18:04:25Z)
- Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM).
CEPAM achieves communication efficiency and privacy protection simultaneously.
We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-off between user privacy and the accuracy of CEPAM.
arXiv Detail & Related papers (2025-01-21T11:16:05Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Group Decision-Making among Privacy-Aware Agents [2.4401219403555814]
Preserving individual privacy and enabling efficient social learning are both important desiderata but seem fundamentally at odds with each other.
We do so by controlling information leakage using rigorous statistical guarantees that are based on differential privacy (DP).
Our results flesh out the nature of the trade-offs in both cases between the quality of the group decision outcomes, learning accuracy, communication cost, and the level of privacy protections that the agents are afforded.
arXiv Detail & Related papers (2024-02-13T01:38:01Z)
- Privacy-Engineered Value Decomposition Networks for Cooperative Multi-Agent Reinforcement Learning [19.504842607744457]
In cooperative multi-agent reinforcement learning, a team of agents must jointly optimize the team's long-term rewards to learn a designated task.
Privacy-Engineered Value Decomposition Networks (PE-VDN) model multi-agent coordination while safeguarding the confidentiality of the agents' environment interaction data.
We implement PE-VDN in the StarCraft Multi-Agent Challenge (SMAC) and show that it achieves 80% of Vanilla VDN's win rate.
arXiv Detail & Related papers (2023-09-13T02:50:57Z)
- DPMAC: Differentially Private Communication for Cooperative Multi-Agent Reinforcement Learning [21.961558461211165]
Communication lays the foundation for cooperation in human society and in multi-agent reinforcement learning (MARL).
We propose the differentially private multi-agent communication (DPMAC) algorithm, which protects the sensitive information of individual agents by equipping each agent with a local message sender with a rigorous $(\epsilon, \delta)$-differential privacy guarantee.
We prove the existence of a Nash equilibrium in cooperative MARL with privacy-preserving communication, which suggests that this problem is game-theoretically learnable.
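The per-agent private message sender idea can be sketched with the standard Gaussian mechanism, which releases a bounded-sensitivity message vector under an $(\epsilon, \delta)$-DP guarantee. This is a generic illustration, not DPMAC's actual learned stochastic sender; the message dimension and the unit sensitivity bound are assumptions.

```python
import math

import numpy as np


def gaussian_mechanism(message, sensitivity, epsilon, delta, rng=None):
    """Release message + N(0, sigma^2 I), calibrated for (eps, delta)-DP.

    Uses the classical calibration sigma = sensitivity *
    sqrt(2 ln(1.25/delta)) / epsilon, valid for epsilon in (0, 1),
    assuming the message's L2 norm is clipped to `sensitivity`.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return message + rng.normal(0.0, sigma, size=np.shape(message))


# Hypothetical 8-dimensional message with L2 norm clipped to 1.
msg = np.ones(8) / math.sqrt(8)
noisy_msg = gaussian_mechanism(msg, sensitivity=1.0, epsilon=0.5, delta=1e-5)
```

In a MARL loop, each agent would clip and noise its outgoing message this way before broadcasting, so teammates only ever observe the privatized vector.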
arXiv Detail & Related papers (2023-08-19T04:26:23Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
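As a simpler companion to the $f$-DP analysis of discrete-valued mechanisms, the exact pure $\epsilon$-LDP level of any finite mechanism can be read off its channel matrix (this computes the single worst-case log-likelihood ratio, not the full $f$-DP trade-off curve; the binary randomized-response channel below is a hypothetical example).

```python
import math


def ldp_epsilon(channel):
    """Exact local-DP level of a discrete mechanism.

    `channel[x][y]` is P(output = y | input = x).  The mechanism is
    eps-LDP for eps = max over outputs y and input pairs (x, x') of
    log(P(y|x) / P(y|x')); returns math.inf if some ratio is unbounded.
    """
    eps = 0.0
    n_out = len(channel[0])
    for y in range(n_out):
        col = [row[y] for row in channel]
        hi, lo = max(col), min(col)
        if hi > 0 and lo == 0:
            return math.inf  # some output reveals the input with certainty
        if hi > 0:
            eps = max(eps, math.log(hi / lo))
    return eps


# Binary randomized response with keep-probability 0.75:
# its exact LDP level is log(0.75 / 0.25) = log 3.
rr = [[0.75, 0.25], [0.25, 0.75]]
```

Sweeping the keep-probability of such a channel traces out the privacy-accuracy trade-off that the $f$-DP framework characterizes tightly.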
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off, while preserving a higher privacy level than the state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z)
- Cooperative Actor-Critic via TD Error Aggregation [12.211031907519827]
We introduce a decentralized actor-critic algorithm with TD error aggregation that does not violate agents' privacy.
We provide a convergence analysis under diminishing step size to verify that the agents maximize the team-average objective function.
arXiv Detail & Related papers (2022-07-25T21:10:39Z)
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
- Cooperative Multi-Agent Actor-Critic for Privacy-Preserving Load Scheduling in a Residential Microgrid [71.17179010567123]
We propose a privacy-preserving multi-agent actor-critic framework where the decentralized actors are trained with distributed critics.
The proposed framework can preserve the privacy of the households while simultaneously learning the multi-agent credit assignment mechanism implicitly.
arXiv Detail & Related papers (2021-10-06T14:05:26Z)
- On Privacy and Confidentiality of Communications in Organizational Graphs [3.5270468102327004]
This work shows how confidentiality is distinct from privacy in an enterprise context.
It aims to formulate an approach to preserving confidentiality while leveraging principles from differential privacy.
arXiv Detail & Related papers (2021-05-27T19:45:56Z)
- Partial Disclosure of Private Dependencies in Privacy Preserving Planning [0.0]
In collaborative privacy preserving planning (CPPP), a group of agents jointly creates a plan to achieve a set of goals.
Previous work in CPPP does not limit the disclosure of such dependencies.
We explicitly limit the amount of disclosed dependencies, allowing agents to publish only a part of their private dependencies.
arXiv Detail & Related papers (2021-02-14T16:10:08Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Differentially Private Multi-Agent Planning for Logistic-like Problems [70.3758644421664]
This paper proposes a novel strong privacy-preserving planning approach for logistic-like problems.
Two challenges are addressed: 1) simultaneously achieving strong privacy, completeness and efficiency, and 2) addressing communication constraints.
To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning.
arXiv Detail & Related papers (2020-08-16T03:43:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.