Actual Knowledge Gain as Privacy Loss in Local Privacy Accounting
- URL: http://arxiv.org/abs/2307.08159v3
- Date: Wed, 30 Apr 2025 06:34:01 GMT
- Title: Actual Knowledge Gain as Privacy Loss in Local Privacy Accounting
- Authors: Mingen Pan
- Abstract summary: This paper establishes the equivalence between Local Differential Privacy (LDP) and a global limit on learning any knowledge specific to a queried object. An output from an LDP query is not necessarily required to provide an amount of knowledge equal to the upper bound of the learning limit. The least upper bound on the actual knowledge gain is derived and referred to as the realized privacy loss.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper establishes the equivalence between Local Differential Privacy (LDP) and a global limit on learning any knowledge specific to a queried object. However, an output from an LDP query is not necessarily required to provide an amount of knowledge exactly equal to the upper bound of the learning limit, so the LDP guarantee can overestimate the knowledge an analyst gains from some outputs. To address this issue, the least upper bound on the actual knowledge gain is derived and referred to as the realized privacy loss. This measure is also shown to serve as an upper bound for the actual g-leakage in quantitative information flow. The gap between the LDP guarantee and the realized privacy loss motivates the exploration of more efficient privacy accounting for fully adaptive composition, where an adversary adaptively selects queries based on prior results. The Bayesian Privacy Filter is introduced to continuously accept queries until the realized privacy loss of the composed queries equals the LDP guarantee of the composition, enabling full utilization of an object's privacy budget. The realized privacy loss also functions as a privacy odometer for the composed queries, allowing the remaining privacy budget to accurately represent the capacity to accept new queries. Additionally, a branch-and-bound method is devised to compute the realized privacy loss when querying against continuous values. Experimental results indicate that the Bayesian Privacy Filter outperforms basic composition by a factor of one to four when composing linear and logistic regressions.
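To make the accounting concrete: for an object with secret value x and released outputs y_1, ..., y_n, the realized privacy loss is the least upper bound on the log-likelihood ratio over any pair of candidate values, max over (x, x') of log [ prod_i P(y_i | x) / prod_i P(y_i | x') ], and the filter keeps accepting queries while this quantity plus the worst-case guarantee of the next query stays within the overall budget. Below is a minimal illustrative sketch of that loop, not the paper's implementation: it assumes a discrete secret domain with nonzero likelihoods and binary randomized-response queries, and all names (realized_loss, BayesianPrivacyFilter, try_query) are hypothetical.

```python
import math
import random

def realized_loss(likelihoods):
    """Realized privacy loss of the outputs released so far.

    likelihoods[i][x] = P(y_i | x): the probability of the i-th observed
    output under each candidate secret value x (assumed nonzero).
    """
    values = likelihoods[0].keys()
    joint = {x: math.prod(l[x] for l in likelihoods) for x in values}
    return math.log(max(joint.values()) / min(joint.values()))

class BayesianPrivacyFilter:
    """Accept queries until the realized loss of the composition could
    reach the overall LDP budget epsilon (hypothetical sketch)."""

    def __init__(self, epsilon):
        self.epsilon = epsilon   # LDP guarantee of the whole composition
        self.history = []        # one likelihood dict per released output

    def spent(self):
        # Doubles as the privacy odometer for the composed queries.
        return realized_loss(self.history) if self.history else 0.0

    def try_query(self, query_epsilon, run_query):
        # Worst case: the next output realizes its full LDP guarantee.
        if self.spent() + query_epsilon > self.epsilon:
            return None          # reject; the budget could be exceeded
        y, likelihood = run_query()   # likelihood = {x: P(y | x)}
        self.history.append(likelihood)
        return y

def randomized_response(truth, eps):
    """Binary randomized response with per-query LDP guarantee eps."""
    p = math.exp(eps) / (1.0 + math.exp(eps))  # P(report the truth)
    y = truth if random.random() < p else 1 - truth
    return y, {x: (p if y == x else 1.0 - p) for x in (0, 1)}

filt = BayesianPrivacyFilter(epsilon=1.0)
while (out := filt.try_query(0.5, lambda: randomized_response(1, 0.5))) is not None:
    print(f"released {out}, odometer = {filt.spent():.2f}")
```

Under basic composition, a budget of 1.0 admits exactly two queries of guarantee 0.5; here, opposing reports partially cancel in the likelihood ratio, so the filter can keep accepting queries until the odometer leaves too little room for another worst-case output, which is the gap the paper's experiments quantify.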
Related papers
- PrivaCI-Bench: Evaluating Privacy with Contextual Integrity and Legal Compliance [44.287734754038254]
We present PrivaCI-Bench, a contextual privacy evaluation benchmark for generative large language models (LLMs).
We evaluate the latest LLMs, including the recent reasoner models QwQ-32B and Deepseek R1.
Our experimental results suggest that though LLMs can effectively capture key CI parameters inside a given context, they still require further advancements for privacy compliance.
arXiv Detail & Related papers (2025-02-24T10:49:34Z)
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capture and ensure the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- Differential Confounding Privacy and Inverse Composition [32.85314813605347]
We introduce Differential Confounding Privacy (DCP), a framework that generalizes Differential Privacy (DP).
We show that DCP mechanisms retain privacy guarantees under composition, but they lack the graceful compositional properties of DP.
We propose an Inverse Composition (IC) framework, where a leader-follower model optimally designs a privacy strategy to achieve target guarantees without relying on worst-case privacy proofs.
arXiv Detail & Related papers (2024-08-21T21:45:13Z)
- Private Optimal Inventory Policy Learning for Feature-based Newsvendor with Unknown Demand [13.594765018457904]
This paper introduces a novel approach to estimate a privacy-preserving optimal inventory policy within the f-differential privacy framework.
We develop a clipped noisy gradient descent algorithm based on convolution smoothing for optimal inventory estimation.
Our numerical experiments demonstrate that the proposed new method can achieve desirable privacy protection with a marginal increase in cost.
arXiv Detail & Related papers (2024-04-23T19:15:43Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between JDP and LDP by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.