"Am I Private and If So, how Many?" -- Using Risk Communication Formats
for Making Differential Privacy Understandable
- URL: http://arxiv.org/abs/2204.04061v4
- Date: Thu, 22 Jun 2023 12:23:07 GMT
- Authors: Daniel Franzen (1), Saskia Nuñez von Voigt (2), Peter Sörries (1),
Florian Tschorsch (2), Claudia Müller-Birn (1) ((1) Freie Universität
Berlin, (2) Technische Universität Berlin)
- Abstract summary: We adapt risk communication formats in conjunction with a model for the privacy risks of Differential Privacy.
We evaluate these novel privacy communication formats in a crowdsourced study.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Mobility data is essential for cities and communities to identify areas for
necessary improvement. Data collected by mobility providers already contains
all the information necessary, but privacy of the individuals needs to be
preserved. Differential privacy (DP) defines a mathematical property which
guarantees that certain limits of privacy are preserved while sharing such
data, but its functionality and privacy protection are difficult to explain to
laypeople. In this paper, we adapt risk communication formats in conjunction
with a model for the privacy risks of DP. The result is a set of privacy
notifications that explain the risk to an individual's privacy when using DP, rather than
DP's functionality. We evaluate these novel privacy communication formats in a
crowdsourced study. We find that they perform similarly to the best currently
used DP communications in terms of objective understanding, but do not make our
participants as confident in their understanding. We also observed an
influence, similar to the Dunning-Kruger effect, of statistical numeracy on the
effectiveness of some of our privacy communication formats and of the currently
used DP communication format. These results generate
hypotheses in multiple directions, for example, toward the use of risk
visualization to improve the understandability of our formats or toward
adaptive user interfaces which tailor the risk communication to the
characteristics of the reader.
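The abstract centers on communicating the privacy risk implied by DP rather than its mechanics. As a rough illustration of that framing, a common (hypothetical here; not necessarily the paper's exact risk model) way to translate a privacy budget ε into a worst-case identification probability uses the Bayesian posterior bound for an ε-DP mechanism:

```python
import math

def worst_case_risk(epsilon: float, prior: float = 0.5) -> float:
    """Worst-case posterior probability that an adversary identifies an
    individual's data under an epsilon-DP guarantee, given a prior belief.
    Illustrative bound only; the paper's risk model may differ."""
    e = math.exp(epsilon)
    return (e * prior) / (e * prior + (1 - prior))

# With epsilon = 1 and a 50% prior, the worst-case risk is roughly 73%,
# the kind of concrete percentage a risk-communication format can show.
```

Communicating "your risk of identification is at most 73%" instead of "ε = 1" is precisely the kind of reframing the notifications evaluated in the study attempt.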
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Group Decision-Making among Privacy-Aware Agents [2.4401219403555814]
Preserving individual privacy and enabling efficient social learning are both important desiderata but seem fundamentally at odds with each other.
We do so by controlling information leakage using rigorous statistical guarantees that are based on differential privacy (DP).
Our results flesh out the nature of the trade-offs in both cases between the quality of the group decision outcomes, learning accuracy, communication cost, and the level of privacy protections that the agents are afforded.
arXiv Detail & Related papers (2024-02-13T01:38:01Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- DPMAC: Differentially Private Communication for Cooperative Multi-Agent Reinforcement Learning [21.961558461211165]
Communication lays the foundation for cooperation in human society and in multi-agent reinforcement learning (MARL).
We propose the differentially private multi-agent communication (DPMAC) algorithm, which protects the sensitive information of individual agents by equipping each agent with a local message sender that carries a rigorous $(\epsilon, \delta)$-differential privacy guarantee.
We prove the existence of a Nash equilibrium in cooperative MARL with privacy-preserving communication, which suggests that this problem is game-theoretically learnable.
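DPMAC's actual sender is a learned stochastic encoder; as a simplified stand-in for the underlying idea, a plain Gaussian mechanism (our illustration, with hypothetical function names, using the standard calibration valid for ε < 1) can privatize an agent's message vector under (ε, δ)-DP:

```python
import math
import random

def privatize_message(message, epsilon, delta, sensitivity=1.0):
    """Gaussian mechanism sketch: add noise calibrated to the message's
    L2 sensitivity so the released vector satisfies (epsilon, delta)-DP.
    Illustrative only; DPMAC learns its message distribution instead."""
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    return [m + random.gauss(0.0, sigma) for m in message]
```

Each agent would apply such a mechanism locally before broadcasting, so cooperating agents never see each other's raw observations.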
arXiv Detail & Related papers (2023-08-19T04:26:23Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint DP (JDP) and local DP (LDP) by leveraging the shuffle model of privacy while preserving local privacy.
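The shuffle model the entry refers to sits between LDP and central DP: each user randomizes locally, then a trusted shuffler permutes the reports, which amplifies the local guarantee. A minimal sketch, using randomized response as a stand-in local randomizer (our choice, not necessarily the paper's mechanism):

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """epsilon-LDP randomized response: report the true bit with
    probability e^eps / (e^eps + 1), else flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_truth else 1 - bit

def shuffle_reports(bits, epsilon):
    """Shuffle model sketch: randomize each bit locally, then permute the
    reports so the analyzer cannot link a report back to its user."""
    reports = [randomized_response(b, epsilon) for b in bits]
    random.shuffle(reports)
    return reports
```

Privacy-amplification results show the shuffled output satisfies a much smaller central ε than the per-user local ε, which is the lever behind the JDP/LDP trade-off.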
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- On Privacy and Confidentiality of Communications in Organizational Graphs [3.5270468102327004]
This work shows how confidentiality is distinct from privacy in an enterprise context.
It aims to formulate an approach to preserving confidentiality while leveraging principles from differential privacy.
arXiv Detail & Related papers (2021-05-27T19:45:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.