Unleash the Power of Ellipsis: Accuracy-enhanced Sparse Vector Technique with Exponential Noise
- URL: http://arxiv.org/abs/2407.20068v1
- Date: Mon, 29 Jul 2024 14:54:28 GMT
- Title: Unleash the Power of Ellipsis: Accuracy-enhanced Sparse Vector Technique with Exponential Noise
- Authors: Yuhan Liu, Sheng Wang, Yixuan Liu, Feifei Li, Hong Chen
- Abstract summary: The Sparse Vector Technique (SVT) is one of the most fundamental tools in differential privacy (DP).
Unlike the typical private query releases that directly publicize the noisy query results, SVT is less informative.
We provide a new privacy analysis for SVT by considering its less informative nature.
- Score: 32.79378183735303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Sparse Vector Technique (SVT) is one of the most fundamental tools in differential privacy (DP). It works as a backbone for adaptive data analysis by answering a sequence of queries on a given dataset, and gleaning useful information in a privacy-preserving manner. Unlike the typical private query releases that directly publicize the noisy query results, SVT is less informative -- it keeps the noisy query results to itself and only reveals a binary bit for each query, indicating whether the query result surpasses a predefined threshold. To provide a rigorous DP guarantee for SVT, prior works in the literature adopt a conservative privacy analysis by assuming the direct disclosure of noisy query results as in typical private query releases. This approach, however, hinders SVT from achieving higher query accuracy due to an overestimation of the privacy risks, which further leads to an excessive noise injection using the Laplacian or Gaussian noise for perturbation. Motivated by this, we provide a new privacy analysis for SVT by considering its less informative nature. Our analysis results not only broaden the range of applicable noise types for perturbation in SVT, but also identify the exponential noise as optimal among all evaluated noises (which, however, is usually deemed non-applicable in prior works). The main challenge in applying exponential noise to SVT is mitigating the sub-optimal performance due to the bias introduced by noise distributions. To address this, we develop a utility-oriented optimal threshold correction method and an appending strategy, which enhances the performance of SVT by increasing the precision and recall, respectively. The effectiveness of our proposed methods is substantiated both theoretically and empirically, demonstrating significant improvements up to $50\%$ across evaluated metrics.
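For intuition, the mechanism the abstract describes is the AboveThreshold form of SVT. Below is a minimal sketch of the classic Laplace-based variant, assuming sensitivity-1 queries; the function name, budget split, and noise scales follow the textbook construction and are not the paper's exponential-noise mechanism (which additionally corrects the threshold for the noise bias and appends extra answers).

```python
import numpy as np

def above_threshold(queries, dataset, threshold, epsilon, sensitivity=1.0):
    """Classic Laplace-based AboveThreshold (SVT): answer a stream of
    queries with one binary bit each, halting at the first 'above' so the
    total privacy cost stays within epsilon."""
    rng = np.random.default_rng()
    # Textbook budget split: half the budget perturbs the threshold once,
    # the other half perturbs every query answer.
    noisy_threshold = threshold + rng.laplace(scale=2.0 * sensitivity / epsilon)
    bits = []
    for q in queries:
        noisy_answer = q(dataset) + rng.laplace(scale=4.0 * sensitivity / epsilon)
        if noisy_answer >= noisy_threshold:
            bits.append(True)   # only this bit is released, never the noisy value
            break               # stop after the first 'above' answer
        bits.append(False)
    return bits
```

Note that only the stream of booleans is ever released, never the noisy query values; the paper's new privacy analysis exploits exactly this less informative output to justify lighter noise.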
Related papers
- SoftPatch: Unsupervised Anomaly Detection with Noisy Data [67.38948127630644]
This paper considers label-level noise in image sensory anomaly detection for the first time.
We propose a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level.
Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in coreset.
arXiv Detail & Related papers (2024-03-21T08:49:34Z)
- On the Privacy of Selection Mechanisms with Gaussian Noise [44.577599546904736]
We revisit the analysis of Report Noisy Max and Above Threshold with Gaussian noise.
We find that it is possible to provide pure ex-ante DP bounds for Report Noisy Max and pure ex-post DP bounds for Above Threshold (see the Report Noisy Max sketch after this list).
arXiv Detail & Related papers (2024-02-09T02:11:25Z)
- Adaptive Differential Privacy in Federated Learning: A Priority-Based Approach [0.0]
Federated learning (FL) develops global models without direct access to local datasets.
DP offers a framework that provides a privacy guarantee by adding calibrated amounts of noise to parameters.
We propose adaptive noise addition in FL, which sets the magnitude of the injected noise based on features' relative importance.
arXiv Detail & Related papers (2024-01-04T03:01:15Z)
- A Corrected Expected Improvement Acquisition Function Under Noisy Observations [22.63212972670109]
Sequential maximization of expected improvement (EI) is one of the most widely used policies in Bayesian optimization.
The uncertainty associated with the incumbent solution is often neglected in many analytic EI-type methods.
We propose a modification of EI that corrects its closed-form expression by incorporating the covariance information provided by the Gaussian Process (GP) model.
arXiv Detail & Related papers (2023-10-08T13:50:39Z)
- Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation, which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z)
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the assumption that the optimal noise matches the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- Optimum Noise Mechanism for Differentially Private Queries in Discrete Finite Sets [3.5379819043314176]
We introduce a novel framework for designing an optimal noise Probability Mass Function (PMF) tailored to discrete and finite query sets.
Our framework seeks to optimize the noise distribution under an arbitrary $(\epsilon, \delta)$ constraint, thereby enhancing the accuracy and utility of the response.
Numerical experiments highlight the superior performance of our proposed optimal mechanisms compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-11-23T05:24:34Z)
- Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD [73.55632827932101]
We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance.
arXiv Detail & Related papers (2021-10-26T15:02:27Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
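As referenced in the Report Noisy Max entry above, the following is a minimal sketch of Report Noisy Max with Gaussian noise, under assumed names (`report_noisy_max_gaussian` is illustrative, not code from the cited paper): the mechanism adds i.i.d. Gaussian noise to each candidate score and releases only the winning index, with `sigma` calibrated from the privacy budget via bounds such as those the paper derives.

```python
import numpy as np

def report_noisy_max_gaussian(scores, sigma, rng=None):
    """Report Noisy Max with Gaussian noise: add i.i.d. N(0, sigma^2)
    noise to each candidate score and release only the argmax index."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = np.asarray(scores, dtype=float) + rng.normal(scale=sigma, size=len(scores))
    return int(np.argmax(noisy))  # only the winning index is released
```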
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.