Partial sensitivity analysis in differential privacy
- URL: http://arxiv.org/abs/2109.10582v1
- Date: Wed, 22 Sep 2021 08:29:16 GMT
- Title: Partial sensitivity analysis in differential privacy
- Authors: Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle,
Friederike Jungmann, Daniel Rueckert, Georgios Kaissis
- Abstract summary: We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
- Score: 58.730520380312676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differential privacy (DP) allows the quantification of privacy loss when the
data of individuals is subjected to algorithmic processing such as machine
learning, as well as the provision of objective privacy guarantees. However,
while techniques such as individual Rényi DP (RDP) allow for granular,
per-person privacy accounting, few works have investigated the impact of each
input feature on the individual's privacy loss. Here we extend the view of
individual RDP by introducing a new concept we call partial sensitivity, which
leverages symbolic automatic differentiation to determine the influence of each
input feature on the gradient norm of a function. We experimentally evaluate
our approach on queries over private databases, where we obtain a feature-level
contribution of private attributes to the DP guarantee of individuals.
Furthermore, we explore our findings in the context of neural network training
on synthetic data by investigating the partial sensitivity of input pixels on
an image classification task.
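To make the core idea concrete, here is a minimal sketch, not the authors' implementation: sympy's symbolic differentiation stands in for the paper's symbolic AD system, and the toy linear model with squared-error loss (the symbols w1, w2, x1, x2, y) is an assumption made purely for illustration.

```python
# Minimal sketch (assumed toy model, not the paper's code): express the
# gradient norm of a loss symbolically, then differentiate it with
# respect to each input feature to obtain its "partial sensitivity".
import sympy as sp

w1, w2, x1, x2, y = sp.symbols("w1 w2 x1 x2 y")
loss = (w1 * x1 + w2 * x2 - y) ** 2  # toy per-sample squared error

# Gradient norm w.r.t. the parameters: in DP training, its bound
# (the sensitivity) determines the noise scale.
grad = [sp.diff(loss, w) for w in (w1, w2)]
grad_norm = sp.sqrt(sum(g**2 for g in grad))

# Partial sensitivity: one closed-form expression per input feature
# describing its influence on the gradient norm.
for feature in (x1, x2):
    print(feature, sp.simplify(sp.diff(grad_norm, feature)))
```

Evaluating these expressions at a concrete record (via .subs) yields feature-level contributions to an individual's sensitivity, in the spirit of the attribution the abstract describes.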
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy, which allows control over the sensitive regions to which differential privacy is applied.
Our method operates selectively on the data: non-sensitive temporal regions can be left without DP application, or differential privacy can be combined with other privacy techniques within data samples.
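As a rough illustration of the selective idea (the function name, shapes, and noise scale below are assumptions, not the paper's method), noise can be confined to a sensitive-region mask:

```python
# Hedged sketch: Gaussian noise is applied only inside a binary
# sensitive-region mask; non-sensitive pixels pass through unchanged.
import numpy as np

def masked_gaussian(image, mask, sigma, rng):
    noise = rng.normal(0.0, sigma, size=image.shape)
    return image + mask * noise  # perturb only where mask == 1

rng = np.random.default_rng(0)
image = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0  # mark the centre patch as sensitive
print(masked_gaussian(image, mask, sigma=0.5, rng=rng))
```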
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Causal Inference with Differentially Private (Clustered) Outcomes [16.166525280886578]
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their responses.
We suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure.
We show that, depending on an intuitive measure of cluster quality, we can reduce the loss in variance while maintaining our privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject gradient norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion a subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Contextualize differential privacy in image database: a lightweight image differential privacy approach based on principle component analysis inverse [35.06571163816982]
Differential privacy (DP) has been the de facto standard for preserving privacy-sensitive information in databases.
However, the privacy-accuracy trade-off introduced by integrating DP is insufficiently demonstrated in the context of differentially private image databases.
This work aims to contextualize DP in images through an explicit and intuitive demonstration of integrating conceptual differential privacy with images.
arXiv Detail & Related papers (2022-02-16T19:36:49Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in the setting of data processing.
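As a toy example of what such sensitivity analysis can look like for a simple query (the three-record database and the mean query below are assumptions for illustration, not the paper's hybrid AD system):

```python
# Hedged illustration: derive the per-individual sensitivity of a mean
# query by symbolic differentiation with sympy.
import sympy as sp

records = sp.symbols("x1 x2 x3")
query = sum(records) / len(records)  # mean over three private records

# Each individual's sensitivity is the magnitude of the query's
# partial derivative with respect to that individual's record.
for r in records:
    print(r, sp.Abs(sp.diff(query, r)))  # 1/3 for every record
```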
arXiv Detail & Related papers (2021-07-09T07:19:23Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g., the exponential mechanism), or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
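For context, here is a hedged comparison of two textbook bounds (not the unified bounds derived in this paper): basic composition versus the advanced composition theorem of Dwork and Roth for k pure eps-DP mechanisms.

```python
# Basic composition: k eps-DP mechanisms compose to (k*eps)-DP.
# Advanced composition: for any delta' > 0, they compose to
# (eps*sqrt(2k*ln(1/delta')) + k*eps*(e^eps - 1), delta')-DP.
import math

def basic_composition(eps, k):
    return k * eps

def advanced_composition(eps, k, delta_prime):
    return (eps * math.sqrt(2 * k * math.log(1 / delta_prime))
            + k * eps * (math.exp(eps) - 1))

eps, k, delta_prime = 0.1, 100, 1e-5
print(basic_composition(eps, k))                  # 10.0
print(advanced_composition(eps, k, delta_prime))  # ~5.85
```

For small eps and many mechanisms, the advanced bound grows with sqrt(k) rather than k, which is the regime that unified composition analyses like this paper's address.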
arXiv Detail & Related papers (2020-04-15T17:33:10Z)