Partial Disclosure of Private Dependencies in Privacy Preserving
Planning
- URL: http://arxiv.org/abs/2102.07185v1
- Date: Sun, 14 Feb 2021 16:10:08 GMT
- Title: Partial Disclosure of Private Dependencies in Privacy Preserving
Planning
- Authors: Rotem Lev Lehman (1), Guy Shani (1), Roni Stern (1 and 2) ((1)
Software and Information Systems Engineering, Ben Gurion University of the
Negev, Be'er Sheva, Israel, (2) Palo Alto Research Center, Palo Alto, CA,
USA)
- Abstract summary: In collaborative privacy preserving planning (CPPP), a group of agents jointly creates a plan to achieve a set of goals.
Previous work in CPPP does not limit the disclosure of such dependencies.
We explicitly limit the amount of disclosed dependencies, allowing agents to publish only a part of their private dependencies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In collaborative privacy preserving planning (CPPP), a group of agents
jointly creates a plan to achieve a set of goals while preserving each other's
privacy. During planning, agents often reveal the private dependencies between
their public actions to other agents, that is, which public action facilitates
the preconditions of another public action. Previous work in CPPP does not
limit the disclosure of such dependencies. In this paper, we explicitly limit
the amount of disclosed dependencies, allowing agents to publish only a part of
their private dependencies. We investigate different strategies for deciding
which dependencies to publish, and how they affect the ability to find
solutions. We evaluate the ability of two solvers -- distributed forward search
and centralized planning based on a single-agent projection -- to produce plans
under this constraint. Experiments over standard CPPP domains show that the
proposed dependency-sharing strategies enable generating plans while sharing
only a small fraction of all private dependencies.
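The abstract describes strategies that decide which private dependencies an agent publishes. As a rough illustrative sketch only (the function name, the (producer, consumer) pair encoding, and the random strategy shown here are assumptions, not the paper's actual strategies), partial disclosure of a fixed fraction of dependencies might look like:

```python
import random

def publish_dependencies(private_deps, fraction, seed=0):
    """Randomly select a subset of an agent's private dependencies to publish.

    private_deps: (producer, consumer) pairs of public actions, where the
    producer's private effects support the consumer's preconditions.
    fraction: share of dependencies the agent is willing to disclose.
    """
    if not private_deps:
        return []
    # Disclose at least one dependency so the other agents can make progress.
    k = max(1, round(len(private_deps) * fraction))
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.sample(private_deps, k)

# Hypothetical logistics-style dependencies; only one of three is revealed.
deps = [("load-pkg1", "drive-truck1"),
        ("drive-truck1", "unload-pkg1"),
        ("unload-pkg1", "deliver-pkg1")]
published = publish_dependencies(deps, fraction=1 / 3)
```

The paper compares several such selection strategies; a random subset is merely the simplest baseline one could imagine under this interface.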
Related papers
- MAGPIE: A dataset for Multi-AGent contextual PrIvacy Evaluation
Existing benchmarks to evaluate contextual privacy in LLM agents primarily assess single-turn, low-complexity tasks. We first present a benchmark, MAGPIE, comprising 158 real-life high-stakes scenarios across 15 domains. We then evaluate the current state-of-the-art LLMs on their understanding of contextually private data and their ability to collaborate without violating user privacy.
arXiv Detail & Related papers (2025-06-25T18:04:25Z)
- DPolicy: Managing Privacy Risks Across Multiple Releases with Differential Privacy
We present DPolicy, a system designed to manage cumulative privacy risks across multiple data releases using Differential Privacy (DP). Unlike traditional approaches that treat each release in isolation or rely on a single (global) DP guarantee, our system employs a flexible framework that considers multiple DP guarantees simultaneously. DPolicy introduces a high-level policy language to formalize privacy guarantees, making traditionally implicit assumptions on scopes and contexts explicit.
arXiv Detail & Related papers (2025-05-10T19:49:51Z)
- On the Differential Privacy and Interactivity of Privacy Sandbox Reports
The Privacy Sandbox initiative from Google includes APIs for enabling privacy-preserving advertising functionalities.
We provide an abstract model for analyzing the privacy of these APIs and show that they satisfy a formal DP guarantee.
arXiv Detail & Related papers (2024-12-22T08:22:57Z)
- Differentially Private Reinforcement Learning with Self-Play
We study the problem of multi-agent reinforcement learning (multi-agent RL) with differential privacy (DP) constraints.
We first extend the definitions of Joint DP (JDP) and Local DP (LDP) to two-player zero-sum episodic Markov Games.
We design a provably efficient algorithm based on optimistic Nash value and privatization of Bernstein-type bonuses.
arXiv Detail & Related papers (2024-04-11T08:42:51Z)
- Provable Privacy with Non-Private Pre-Processing
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- Optimal Private Discrete Distribution Estimation with One-bit Communication
We consider a private discrete distribution estimation problem with one-bit communication constraint.
We characterize the first-orders of the worst-case trade-off under the one-bit communication constraint.
These results demonstrate the optimal dependence of the privacy-utility trade-off under the one-bit communication constraint.
arXiv Detail & Related papers (2023-10-17T05:21:19Z)
- On Differentially Private Online Predictions
We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies (suitable variants) of group privacy, composition, and post processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z)
- Differential Privacy in Cooperative Multiagent Planning
We study sequential decision-making problems formulated as cooperative Markov games with reach-avoid objectives.
We apply a differential privacy mechanism to privatize agents' communicated symbolic state trajectories.
We synthesize policies that are robust to privacy by reducing the value of the total correlation.
arXiv Detail & Related papers (2023-01-20T21:36:57Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Differentially Private Reinforcement Learning with Linear Function Approximation
We study regret minimization in finite-horizon Markov decision processes (MDPs) under the constraints of differential privacy (DP).
Our results are achieved via a general procedure for learning in linear mixture MDPs under changing regularizers.
arXiv Detail & Related papers (2022-01-18T15:25:24Z)
- Privately Publishable Per-instance Privacy
We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP).
We analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
arXiv Detail & Related papers (2021-11-03T15:17:29Z)
- Private Reinforcement Learning with PAC and Regret Guarantees
We design privacy preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Differentially Private Multi-Agent Planning for Logistic-like Problems
This paper proposes a novel strong privacy-preserving planning approach for logistic-like problems.
Two challenges are addressed: 1) simultaneously achieving strong privacy, completeness and efficiency, and 2) addressing communication constraints.
To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning.
arXiv Detail & Related papers (2020-08-16T03:43:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.