Privacy Explanations - A Means to End-User Trust
- URL: http://arxiv.org/abs/2210.09706v2
- Date: Thu, 20 Oct 2022 06:35:08 GMT
- Title: Privacy Explanations - A Means to End-User Trust
- Authors: Wasja Brunotte, Alexander Specht, Larissa Chazette, Kurt Schneider
- Abstract summary: We investigated how explainability can help end users understand what data a system collects and why.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
- Score: 64.7066037969487
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Software systems are ubiquitous, and their use is ingrained in our everyday
lives. They enable us to get in touch with people quickly and easily, support
us in gathering information, and help us perform our daily tasks. In return, we
provide these systems with a large amount of personal information, often
unaware that this is jeopardizing our privacy. End users are typically unaware
of what data is collected, for what purpose, who has access to it, and where
and how it is stored. To address this issue, we investigated how explainability
can help. We created privacy explanations that aim to clarify to end users why
and for what purposes specific data is required. We asked end users about
privacy explanations in a survey and found that the majority of respondents
(91.6%) are generally interested in
receiving privacy explanations. Our findings reveal that privacy explanations
can be an important step towards increasing trust in software systems and can
increase the privacy awareness of end users. These findings are a significant
step toward developing privacy-aware systems that incorporate usable privacy
features and assist users in protecting their privacy.
Related papers
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users (a formula sketch follows below).
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
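As background for the entry above (this is the standard (ε, δ)-DP guarantee, not a definition taken from the paper itself): user-level DP uses the usual inequality but defines adjacency at the user level, i.e. two datasets are adjacent when they differ in all records contributed by a single user.

```latex
% (eps, delta)-differential privacy with user-level adjacency.
% D ~ D' means D and D' differ in the records of one user;
% record-level DP would instead let them differ in a single record.
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \Pr[M(D') \in S] + \delta
\quad \text{for all user-adjacent } D \sim D' \text{ and all measurable } S.
```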
- PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration [18.11846784025521]
PrivacyRestore is a plug-and-play method to protect the privacy of user inputs during inference.
We create three datasets, covering medical and legal domains, to evaluate the effectiveness of PrivacyRestore.
arXiv Detail & Related papers (2024-06-03T14:57:39Z)
- Understanding How to Inform Blind and Low-Vision Users about Data Privacy through Privacy Question Answering Assistants [23.94659412932831]
Blind and low-vision (BLV) users face heightened security and privacy risks, but their risk mitigation is often insufficient.
Our study sheds light on BLV users' expectations when it comes to usability, accessibility, trust and equity issues regarding digital data privacy.
arXiv Detail & Related papers (2023-10-12T19:51:31Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user (a toy sketch of this idea follows below).
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
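The abstract above does not specify the model or features, so the following is only a toy sketch of the general idea: train a supervised classifier on labeled requests and block those predicted to collect personal data. The URLs, labels, `should_block` helper, and threshold are all invented for illustration and are not from the paper.

```python
# Hypothetical sketch: classify request URLs as privacy-infringing (1) or
# benign (0), then block requests whose predicted risk exceeds a threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real system would use a large labeled corpus.
urls = [
    "https://tracker.example/pixel?uid=123&loc=52.1,4.3",
    "https://ads.example/collect?device_id=abc&contacts=1",
    "https://cdn.example/styles/main.css",
    "https://news.example/articles/today.html",
]
labels = [1, 1, 0, 0]

# Character n-grams work reasonably well on URLs without hand-built features.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(urls, labels)

def should_block(url: str, threshold: float = 0.5) -> bool:
    """Block the request if the predicted infringement probability is high."""
    return float(model.predict_proba([url])[0][1]) >= threshold

print(should_block("https://tracker.example/pixel?uid=999"))
```

In practice such a model would run inside a browser extension or proxy, intercepting requests before they leave the device.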
- PEAK: Explainable Privacy Assistant through Automated Knowledge Extraction [1.0609815608017064]
This paper presents a privacy assistant for generating explanations for privacy decisions.
The generated explanations can be used by users to understand the recommendations of the privacy assistant.
We show how this can be realized by incorporating the generated explanations into a state-of-the-art privacy assistant.
arXiv Detail & Related papers (2023-01-05T14:25:20Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion the subject's privacy loss to their input attributes (background on the per-subject norm follows below).
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
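PLIS itself is defined in the paper; as general background on where a "per-subject norm" enters DP training, here is the standard DP-SGD update step (Abadi et al., 2016), in which each subject's gradient is clipped to norm C before calibrated noise is added.

```latex
% DP-SGD: per-example gradients g_i are clipped to norm C, summed,
% and Gaussian noise calibrated to C is added before averaging over
% the batch of size B.
\tilde{g}_t \;=\; \frac{1}{B}\left(\sum_{i=1}^{B} g_i \cdot
  \min\!\left(1, \frac{C}{\lVert g_i \rVert_2}\right)
  \;+\; \mathcal{N}\!\left(0, \sigma^2 C^2 \mathbf{I}\right)\right)
```

A subject whose gradient norm consistently reaches the clipping bound C contributes at the privacy "boundary", which is presumably the kind of per-subject quantity the paper relates to individual privacy loss.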
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person (a possible formalization is sketched below).
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
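A plausible formalization consistent with the abstract above (not necessarily the paper's exact definition): adjacency is defined per attribute, so each attribute j of a record gets its own privacy parameter.

```latex
% Per-attribute DP sketch: D ~_j D' differ only in attribute j of a
% single record. The per-attribute parameter eps_j can then be smaller
% than the single parameter covering a person's entire record.
\Pr[M(D) \in S] \;\le\; e^{\varepsilon_j} \Pr[M(D') \in S]
\quad \text{for all } D \sim_j D' \text{ and all measurable } S.
```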
- "I need a better description": An Investigation Into User Expectations For Differential Privacy [31.352325485393074]
We explore users' privacy expectations related to differential privacy.
We find that users care about the kinds of information leaks against which differential privacy protects.
We find that the ways in which differential privacy is described in the wild haphazardly set users' privacy expectations.
arXiv Detail & Related papers (2021-10-13T02:36:37Z)
- The Challenges and Impact of Privacy Policy Comprehension [0.0]
This paper experimentally manipulated the privacy-friendliness of an unavoidable and simple privacy policy.
Half of our participants miscomprehended even this transparent privacy policy.
To mitigate such pitfalls, we present design recommendations to improve the quality of informed consent.
arXiv Detail & Related papers (2020-05-18T14:16:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.