SLDP: Semi-Local Differential Privacy for Density-Adaptive Analytics
- URL: http://arxiv.org/abs/2602.18910v1
- Date: Sat, 21 Feb 2026 17:26:04 GMT
- Title: SLDP: Semi-Local Differential Privacy for Density-Adaptive Analytics
- Authors: Alexey Kroshnin, Alexandra Suvorikova,
- Abstract summary: We propose a novel framework, Semi-Local Differential Privacy (SLDP), that assigns a privacy region to each user based on local density. We present an interactive $(\varepsilon, \delta)$-SLDP protocol, orchestrated by an honest-but-curious server over a public channel, to estimate these regions privately.
- Score: 45.88028371034407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Density-adaptive domain discretization is essential for high-utility privacy-preserving analytics but remains challenging under Local Differential Privacy (LDP) due to the privacy-budget costs associated with iterative refinement. We propose a novel framework, Semi-Local Differential Privacy (SLDP), that assigns a privacy region to each user based on local density and defines adjacency by the potential movement of a point within its privacy region. We present an interactive $(\varepsilon, \delta)$-SLDP protocol, orchestrated by an honest-but-curious server over a public channel, to estimate these regions privately. Crucially, our framework decouples the privacy cost from the number of refinement iterations, allowing for high-resolution grids without additional privacy budget cost. We experimentally demonstrate the framework's effectiveness on estimation tasks across synthetic and real-world datasets.
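The abstract does not spell out the mechanism, but the region-based adjacency suggests noise calibrated to each user's privacy region rather than to the whole domain. A minimal illustrative sketch under that reading (the function name `region_laplace` and the choice of Laplace noise are assumptions, not the paper's protocol):

```python
import numpy as np

def region_laplace(x, region_diam, eps, rng=None):
    """Perturb a 1-D point with Laplace noise scaled to its privacy region.

    Under SLDP-style adjacency (a point may move anywhere within its own
    privacy region), the sensitivity of the identity query is the region
    diameter, so Laplace(region_diam / eps) noise suffices for eps-DP
    with respect to that adjacency relation.
    """
    rng = rng or np.random.default_rng()
    return x + rng.laplace(scale=region_diam / eps)
```

Users in dense areas would then receive small regions (less noise, finer effective resolution), while users in sparse areas receive larger regions and correspondingly more noise.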
Related papers
- A General Framework for Per-record Differential Privacy [10.959311645622632]
Per-record Differential Privacy (PrDP) addresses heterogeneous per-record privacy needs by defining the privacy budget as a function of each record. Existing solutions either handle specific privacy functions or adopt relaxed PrDP definitions. We propose a general and practical framework that enables any standard DP mechanism to support PrDP.
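To make the per-record-budget idea concrete, here is a deliberately naive baseline for a sum query: calibrate a single Laplace draw to the smallest per-record budget, so every record receives at least its requested protection. This is a worst-case sketch of my own, not the paper's framework (which is presumably far less conservative); the name `per_record_sum` is hypothetical.

```python
import numpy as np

def per_record_sum(values, epsilons, clip=1.0, rng=None):
    """Release a sum where record i carries its own budget epsilons[i].

    Naive baseline: clip every record to [-clip, clip] and add one Laplace
    noise draw calibrated to the weakest (smallest) budget, so each record
    gets at least its requested epsilon-level protection.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, -clip, clip)
    scale = 2.0 * clip / min(epsilons)  # sensitivity 2*clip per record
    return float(clipped.sum() + rng.laplace(scale=scale))
```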
arXiv Detail & Related papers (2025-11-24T11:44:10Z) - Differentially Private 2D Human Pose Estimation [6.982542225631412]
We present the first comprehensive framework for differentially private 2D human pose estimation (2D-HPE). To effectively balance the privacy-performance trade-off, we adopt Projected DP-SGD, which projects the noisy gradients to a low-dimensional subspace. Next, we incorporate Feature Differential Privacy (FDP) to selectively privatize only sensitive features while retaining public visual cues.
arXiv Detail & Related papers (2025-04-14T12:50:37Z) - Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z) - Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capturing and ensuring the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - Differential Confounding Privacy and Inverse Composition [32.85314813605347]
We introduce differential confounding privacy (DCP), a specialized form of the Pufferfish privacy framework. We show that while DCP mechanisms retain privacy guarantees under composition, they lack the graceful compositional properties of DP. We propose an Inverse Composition (IC) framework, where a leader-follower model optimally designs a privacy strategy to achieve target guarantees.
arXiv Detail & Related papers (2024-08-21T21:45:13Z) - A Framework for Managing Multifaceted Privacy Leakage While Optimizing Utility in Continuous LBS Interactions [0.0]
We present several novel contributions aimed at advancing the understanding and management of privacy leakage in LBS.
Our contributions provide a more comprehensive framework for analyzing privacy concerns across different facets of location-based interactions.
arXiv Detail & Related papers (2024-04-20T15:20:01Z) - Bridging Privacy and Robustness for Trustworthy Machine Learning [6.318638597489423]
Machine learning systems require inherent robustness against data perturbations and adversarial manipulations. This paper systematically investigates the intricate theoretical relationships between Local Differential Privacy (LDP) and Maximum Bayesian Privacy (MBP). We bridge these privacy concepts with algorithmic robustness, particularly within the Probably Approximately Correct (PAC) learning framework.
arXiv Detail & Related papers (2024-03-25T10:06:45Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Optimal and Differentially Private Data Acquisition: Central and Local Mechanisms [9.599356978682108]
We consider a platform's problem of collecting data from privacy sensitive users to estimate an underlying parameter of interest.
We consider two popular differential privacy settings for providing privacy guarantees for the users: central and local.
We pose the mechanism design problem as the optimal selection of an estimator and payments that will elicit truthful reporting of users' privacy sensitivities.
arXiv Detail & Related papers (2022-01-10T00:27:43Z) - Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint DP (JDP) and local DP (LDP) by leveraging the shuffle model of privacy while preserving local privacy.
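The shuffle model mentioned here can be illustrated with a toy sketch: each user applies local randomization, then a trusted shuffler permutes the reports before the server sees them, which amplifies the local guarantee into a stronger central one. The function names and the use of binary randomized response are my own illustrative assumptions, not this paper's bandit construction.

```python
import math
import random

def randomized_response(bit, eps, rng):
    """Classic eps-LDP randomized response on a single bit."""
    p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def shuffle_then_aggregate(bits, eps, rng=None):
    """Each user randomizes locally; the shuffler then uniformly permutes
    the reports, so the server loses the user-to-report linkage.  This
    permutation step is what drives privacy amplification by shuffling."""
    rng = rng or random.Random()
    reports = [randomized_response(b, eps, rng) for b in bits]
    rng.shuffle(reports)  # the shuffler
    return reports
```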
arXiv Detail & Related papers (2021-12-11T15:23:28Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.