Noise Reduction for Pufferfish Privacy: A Practical Noise Calibration Method
- URL: http://arxiv.org/abs/2601.06385v1
- Date: Sat, 10 Jan 2026 02:01:45 GMT
- Title: Noise Reduction for Pufferfish Privacy: A Practical Noise Calibration Method
- Authors: Wenjin Yang, Ni Ding, Zijian Zhang, Jing Sun, Zhen Li, Yan Wu, Jiahang Sun, Haotian Lin, Yong Liu, Jincheng An, Liehuang Zhu
- Abstract summary: This paper builds on the existing $1$-Wasserstein (Kantorovich) mechanism by relaxing the overly strict condition that leads to excessive noise. We prove that a strict noise reduction by our approach always exists compared to the $1$-Wasserstein mechanism for all privacy budgets $\varepsilon$ and prior beliefs. Experimental results on three real-world datasets demonstrate $47\%$ to $87\%$ improvement in data utility.
- Score: 39.60741423066451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a relaxed noise calibration method to enhance data utility while attaining pufferfish privacy. This work builds on the existing $1$-Wasserstein (Kantorovich) mechanism by relaxing the overly strict condition that leads to excessive noise, and proposes a practical mechanism design algorithm as a general solution. We prove that a strict noise reduction by our approach always exists compared to the $1$-Wasserstein mechanism for all privacy budgets $\varepsilon$ and prior beliefs, and that the noise-reduction gains (which also represent improvements in data utility) increase significantly in low-privacy-budget settings, which are common in real-world deployments. We also analyze the variation and optimality of the noise reduction under different prior distributions. Moreover, all the properties of the noise reduction still hold for the worst-case $1$-Wasserstein mechanism we introduce, in which the additive noise is largest. We further show that the worst-case $1$-Wasserstein mechanism is equivalent to the $\ell_1$-sensitivity method. Experimental results on three real-world datasets demonstrate $47\%$ to $87\%$ improvement in data utility.
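As a rough illustration of the baseline the paper improves on, the sketch below calibrates Laplace noise to the largest $1$-Wasserstein distance among a set of prior beliefs, divided by the budget $\varepsilon$. The discrete one-dimensional support, the helper names, and the max-over-pairs calibration are illustrative assumptions, not the paper's relaxed algorithm:

```python
import numpy as np

def w1_discrete(p, q, support):
    """1-Wasserstein (Kantorovich) distance between two distributions
    on a common ordered 1-D support, via the CDF-difference formula."""
    cdf_gap = np.abs(np.cumsum(p) - np.cumsum(q))
    return float(np.sum(cdf_gap[:-1] * np.diff(support)))

def wasserstein_mechanism(query_value, priors, support, epsilon, rng):
    """Add Laplace noise whose scale is the largest pairwise W1 distance
    among the prior beliefs, divided by the privacy budget epsilon."""
    w_max = max(
        w1_discrete(p, q, support)
        for i, p in enumerate(priors) for q in priors[i + 1:]
    )
    return query_value + rng.laplace(0.0, w_max / epsilon)

rng = np.random.default_rng(0)
support = np.array([0.0, 1.0, 2.0, 3.0])
priors = [np.array([0.7, 0.1, 0.1, 0.1]), np.array([0.1, 0.1, 0.1, 0.7])]
noisy = wasserstein_mechanism(2.0, priors, support, epsilon=1.0, rng=rng)
```

The paper's contribution, per the abstract, is reducing this noise scale further while keeping the pufferfish guarantee.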
Related papers
- LoRA and Privacy: When Random Projections Help (and When They Don't) [55.65932772290123]
We introduce the (Wishart) projection mechanism, a randomized map of the form $S \mapsto M f(S)$ with $M \sim W_d(\tfrac{1}{r} I_d, r)$, and study its differential privacy properties. For vector-valued queries $f$, we prove non-asymptotic DP guarantees without any additive noise, showing that Wishart randomness alone can suffice. For matrix-valued queries, however, we establish a sharp negative result: in the noise-free setting, the mechanism is not DP, and we demonstrate its vulnerability.
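A minimal sketch of sampling such a Wishart projection, assuming the standard construction $M = GG^\top$ from $r$ Gaussian columns with covariance $\tfrac{1}{r} I_d$ (so that $\mathbb{E}[M] = I_d$ and the output is unbiased in expectation); the function name is illustrative:

```python
import numpy as np

def wishart_projection(f_of_S, r, rng):
    """Draw M ~ W_d((1/r) I_d, r) as G @ G.T, where G has r Gaussian
    columns of covariance (1/r) I_d, then release M @ f(S).
    Since E[M] = I_d, the projected output is unbiased in expectation."""
    d = f_of_S.shape[0]
    G = rng.normal(0.0, np.sqrt(1.0 / r), size=(d, r))
    M = G @ G.T
    return M @ f_of_S

rng = np.random.default_rng(1)
v = np.ones(4)
out = wishart_projection(v, r=1000, rng=rng)  # close to v for large r
```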
arXiv Detail & Related papers (2026-01-29T13:43:37Z) - Sequential Subspace Noise Injection Prevents Accuracy Collapse in Certified Unlearning [28.628342735283752]
Certified unlearning based on differential privacy offers strong guarantees but remains largely impractical.
We propose sequential noise scheduling, which distributes the noise budget across subspaces of the parameter space.
We extend the analysis of noisy fine-tuning to the subspace setting, proving that the same $(\varepsilon, \delta)$ privacy budget is retained.
arXiv Detail & Related papers (2026-01-08T17:23:13Z) - DPSR: Differentially Private Sparse Reconstruction via Multi-Stage Denoising for Recommender Systems [5.237172334460829]
Differential privacy has emerged as the gold standard for protecting user data in recommender systems.
Existing privacy-preserving mechanisms face a fundamental challenge as privacy budgets tighten.
We introduce DPSR, a novel three-stage denoising framework.
arXiv Detail & Related papers (2025-12-22T00:43:29Z) - Sign Operator for Coping with Heavy-Tailed Noise in Non-Convex Optimization: High Probability Bounds Under $(L_0, L_1)$-Smoothness [74.18546828528298]
We show that SignSGD with Majority Voting can robustly work on the whole range of complexity under $(L_0, L_1)$-smoothness with heavy-tailed noise.
arXiv Detail & Related papers (2025-02-11T19:54:11Z) - Attack-Aware Noise Calibration for Differential Privacy [11.222654178949234]
Differential privacy (DP) is a widely used approach for mitigating privacy risks when training machine learning models on sensitive data.
The scale of the added noise is critical, as it determines the trade-off between privacy and utility.
We show that first calibrating the noise scale to a privacy budget $\varepsilon$, and then translating $\varepsilon$ to attack risk, leads to overly conservative risk assessments.
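The calibration direction this abstract contrasts can be illustrated with a small sketch. Assuming pure $\varepsilon$-DP and a balanced membership-inference adversary, a standard bound caps attack accuracy at $e^{\varepsilon}/(1+e^{\varepsilon})$; inverting it calibrates $\varepsilon$ directly to a target attack risk. This is an illustration of the general idea, not the paper's exact method:

```python
import math

def attack_accuracy_bound(epsilon):
    """Upper bound on a balanced membership-inference attack's accuracy
    under pure epsilon-DP: e^eps / (1 + e^eps). At eps = 0 this is 0.5,
    i.e. no better than random guessing."""
    return math.exp(epsilon) / (1.0 + math.exp(epsilon))

def epsilon_for_target_risk(target_accuracy):
    """Invert the bound: the largest epsilon whose worst-case balanced
    attack accuracy stays at or below the target."""
    return math.log(target_accuracy / (1.0 - target_accuracy))

eps = epsilon_for_target_risk(0.60)  # tolerate at most 60% attack accuracy
```

Calibrating noise to this `eps` directly, rather than picking a conservative $\varepsilon$ first, is the spirit of attack-aware calibration.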
arXiv Detail & Related papers (2024-07-02T11:49:59Z) - Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise [54.34628844260993]
Differentially private computation often begins with a bound on some $d$-dimensional statistic's sensitivity.
For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space.
This paper solves both problems for the simple statistics of sum, count, and vote.
arXiv Detail & Related papers (2023-09-27T17:09:36Z) - Dynamic Global Sensitivity for Differentially Private Contextual Bandits [37.413361483987835]
We propose a differentially private linear contextual bandit algorithm, via a tree-based mechanism to add Laplace or Gaussian noise to model parameters.
Our key insight is that as the model converges during online update, the global sensitivity of its parameters shrinks over time.
We provide a rigorous theoretical analysis over the amount of noise added via dynamic global sensitivity and the corresponding upper regret bound of our proposed algorithm.
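The key insight above can be sketched as follows; the $1/t$ sensitivity schedule is a hypothetical stand-in for the paper's dynamic global sensitivity, and the function names are illustrative:

```python
import numpy as np

def privatize_params(theta, sensitivity, epsilon, rng):
    """Release model parameters with Laplace noise scaled to the current
    global sensitivity; as the model converges and sensitivity shrinks,
    the injected noise shrinks with it."""
    return theta + rng.laplace(0.0, sensitivity / epsilon, size=theta.shape)

rng = np.random.default_rng(2)
theta = np.zeros(3)
# Hypothetical shrinking-sensitivity schedule over online rounds t.
for t in range(1, 6):
    sens_t = 1.0 / t  # stand-in for the dynamic global sensitivity
    noisy_theta = privatize_params(theta, sens_t, epsilon=1.0, rng=rng)
```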
arXiv Detail & Related papers (2022-08-30T22:09:08Z) - Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints [53.01656650117495]
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance, corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
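The endpoint-first construction described above can be sketched with a backward Brownian bridge: draw the high-variance endpoint $B_T$ first, then sample earlier, lower-variance points conditioned on it, using $B_t \mid B_s = b \sim \mathcal{N}\!\left(\tfrac{t}{s} b,\ \tfrac{t(s-t)}{s}\right)$ for $t < s$. This is an illustrative sketch of the noise path only, not the full mechanism:

```python
import numpy as np

def brownian_noise_path(T, times, rng):
    """Sample a Brownian noise path endpoint-first: B_T ~ N(0, T) is the
    high-variance noise released initially; earlier points are then
    filled in backward via the Brownian bridge conditional."""
    b = rng.normal(0.0, np.sqrt(T))  # endpoint B_T
    path = {T: b}
    s = T
    for t in sorted(times, reverse=True):
        # Backward bridge: B_t | B_s = b  ~  N((t/s) * b, t * (s - t) / s)
        b = rng.normal((t / s) * b, np.sqrt(t * (s - t) / s))
        path[t] = b
        s = t
    return path

rng = np.random.default_rng(3)
noise = brownian_noise_path(T=4.0, times=[1.0, 2.0], rng=rng)
```

Stopping the walk-back early (e.g. via an AboveThreshold-style test) keeps only as much noise as accuracy constraints require.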
arXiv Detail & Related papers (2022-06-15T01:43:37Z) - Differential privacy for symmetric log-concave mechanisms [0.0]
Adding random noise to database query results is an important tool for achieving privacy.
We provide a necessary and sufficient condition for $(\epsilon, \delta)$-differential privacy for all symmetric and log-concave noise densities.
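The abstract does not state the condition itself, but the quantity it governs can be illustrated numerically: for an additive mechanism with noise density $g$ and sensitivity $s$, the smallest valid $\delta$ at a given $\epsilon$ is $\int \max(g(x) - e^{\epsilon} g(x+s), 0)\,dx$. A sketch under that standard characterization (not the paper's closed-form condition):

```python
import numpy as np

def delta_for_additive_noise(density, epsilon, sensitivity=1.0):
    """Numerically evaluate the privacy profile of an additive-noise
    mechanism: the mechanism is (eps, delta)-DP iff
    delta >= integral of max(g(x) - e^eps * g(x + s), 0) dx."""
    grid = np.linspace(-20.0, 20.0, 400001)
    dx = grid[1] - grid[0]
    g = density(grid)
    g_shift = density(grid + sensitivity)
    return float(np.sum(np.maximum(g - np.exp(epsilon) * g_shift, 0.0)) * dx)

gauss = lambda x: np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
laplace = lambda x: 0.5 * np.exp(-np.abs(x))  # scale 1 = 1/epsilon

delta_gauss = delta_for_additive_noise(gauss, epsilon=1.0)
delta_laplace = delta_for_additive_noise(laplace, epsilon=1.0)  # ~0: pure DP
```

Both densities are symmetric and log-concave; the Laplace case recovers pure $\epsilon$-DP ($\delta \approx 0$), while the unit Gaussian requires a strictly positive $\delta$.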
arXiv Detail & Related papers (2022-02-23T10:20:29Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
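The shuffled model this abstract analyzes can be sketched end to end: each user applies $\varepsilon_0$-LDP randomized response locally, and the shuffler then discards the order of the reports, so the released multiset enjoys a much smaller central $\varepsilon$ than $\varepsilon_0$. The sketch shows only the pipeline, not the amplification bound:

```python
import math
import random

def randomized_response(bit, eps0, rng):
    """eps0-LDP randomized response on one bit: report truthfully with
    probability e^eps0 / (1 + e^eps0), otherwise flip."""
    p_truth = math.exp(eps0) / (1.0 + math.exp(eps0))
    return bit if rng.random() < p_truth else 1 - bit

def shuffled_reports(bits, eps0, rng):
    """Local randomization followed by a uniform shuffle; the shuffler
    hides which user produced which report."""
    reports = [randomized_response(b, eps0, rng) for b in bits]
    rng.shuffle(reports)
    return reports

rng = random.Random(0)
reports = shuffled_reports([0, 1, 1, 0, 1], eps0=2.0, rng=rng)
```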
arXiv Detail & Related papers (2020-12-23T17:07:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.