"I need a better description'': An Investigation Into User Expectations
For Differential Privacy
- URL: http://arxiv.org/abs/2110.06452v1
- Date: Wed, 13 Oct 2021 02:36:37 GMT
- Title: "I need a better description": An Investigation Into User Expectations for Differential Privacy
- Authors: Rachel Cummings, Gabriel Kaptchuk, and Elissa M. Redmiles
- Abstract summary: We explore users' privacy expectations related to differential privacy.
We find that users care about the kinds of information leaks against which differential privacy protects.
We find that the ways in which differential privacy is described in-the-wild haphazardly set users' privacy expectations.
- Score: 31.352325485393074
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite recent widespread deployment of differential privacy, relatively
little is known about what users think of differential privacy. In this work,
we seek to explore users' privacy expectations related to differential privacy.
Specifically, we investigate (1) whether users care about the protections
afforded by differential privacy, and (2) whether they are therefore more
willing to share their data with differentially private systems. Further, we
attempt to understand (3) users' privacy expectations of the differentially
private systems they may encounter in practice and (4) their willingness to
share data in such systems. To answer these questions, we use a series of
rigorously conducted surveys (n=2424).
We find that users care about the kinds of information leaks against which
differential privacy protects and are more willing to share their private
information when the risks of these leaks are less likely to happen.
Additionally, we find that the ways in which differential privacy is described
in-the-wild haphazardly set users' privacy expectations, which can be
misleading depending on the deployment. We synthesize our results into a
framework for understanding a user's willingness to share information with
differentially private systems, which takes into account the interaction
between the user's prior privacy concerns and how differential privacy is
described.
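For reference, the guarantee the surveys probe is the standard textbook definition (not restated in the abstract itself): a randomized mechanism $M$ is $(\epsilon, \delta)$-differentially private if, for all neighboring datasets $D, D'$ differing in one record and every set of outputs $S$,

```latex
\Pr[M(D) \in S] \;\le\; e^{\epsilon}\,\Pr[M(D') \in S] + \delta
```

Smaller $\epsilon$ means an observer can infer less about any individual's record from the system's output, which is the kind of leak-resistance users are asked about.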
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Confounding Privacy and Inverse Composition [32.85314813605347]
In differential privacy, sensitive information is contained in the dataset while in Pufferfish privacy, sensitive information determines data distribution.
We introduce a novel privacy notion of $(\epsilon, \delta)$-confounding privacy that generalizes both differential privacy and Pufferfish privacy.
arXiv Detail & Related papers (2024-08-21T21:45:13Z)
- Models Matter: Setting Accurate Privacy Expectations for Local and Central Differential Privacy [14.40391109414476]
We design and evaluate new explanations of differential privacy for the local and central models.
We find that consequences-focused explanations in the style of privacy nutrition labels are a promising approach for setting accurate privacy expectations.
arXiv Detail & Related papers (2024-08-16T01:21:57Z)
- Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539]
Noise reduction mechanisms produce increasingly accurate answers.
Analysts only pay the privacy cost of the least noisy or most accurate answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
arXiv Detail & Related papers (2023-06-24T00:33:34Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this problem.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Auditing Differentially Private Machine Learning: How Private is Private SGD? [16.812900569416062]
We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis.
We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks.
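The audited mechanism, per-example gradient clipping plus Gaussian noise in the style of DP-SGD, can be sketched as follows (the function name and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One noisy aggregation step: clip, sum, add noise, average."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Clip each example's gradient to L2 norm <= clip_norm, so no single
    # record can shift the sum by more than clip_norm (bounded sensitivity).
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound yields the (eps, delta)
    # guarantee; noise_multiplier trades privacy for accuracy.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

A data-poisoning audit like the paper's inserts crafted examples and measures how distinguishable the resulting models are, giving an empirical lower bound on the privacy loss to compare against the analytical guarantee.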
arXiv Detail & Related papers (2020-06-13T20:00:18Z)
- Learning With Differential Privacy [3.618133010429131]
Differential privacy offers a formal promise of protection against leakage.
It uses a randomized response technique at data-collection time, which provides strong privacy with good utility.
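The classic coin-flip form of randomized response can be sketched in a few lines (names are illustrative); with these parameters each individual report satisfies epsilon-DP with epsilon = ln 3:

```python
import random

def randomized_response(truth: bool, rng: random.Random) -> bool:
    # First coin: heads -> report the truth.
    if rng.random() < 0.5:
        return truth
    # Tails -> report a second, independent coin flip instead.
    return rng.random() < 0.5

def estimate_true_rate(reports) -> float:
    # P(report is yes) = 0.5 * true_rate + 0.25, so invert the bias.
    observed = sum(reports) / len(reports)
    return 2.0 * observed - 0.5
```

Each respondent has plausible deniability (any "yes" may be the second coin), yet the aggregator can still debias the observed rate to recover the population statistic, which is the privacy/utility trade the summary alludes to.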
arXiv Detail & Related papers (2020-06-10T02:04:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.