A Value-Centered Exploration of Data Privacy and Personalized Privacy
Assistants
- URL: http://arxiv.org/abs/2212.00528v1
- Date: Thu, 1 Dec 2022 14:26:33 GMT
- Title: A Value-Centered Exploration of Data Privacy and Personalized Privacy Assistants
- Authors: Sarah E. Carter
- Abstract summary: I suggest that instead of utilizing informed consent, we could create space for more value-centered user decisions.
I utilize Suzy Killmister's four-dimensional theory of autonomy to operationalize value-centered privacy decisions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the current post-GDPR landscape, privacy notices have become ever more
prevalent on our phones and online. However, these notices are not well suited
to their purpose of helping users make informed decisions. I suggest that
instead of using notices to elicit informed consent, we could repurpose
privacy notices to create the space for more meaningful, value-centered user
decisions. Value-centered privacy decisions, or those that accurately reflect
who we are and what we value, encapsulate the intuitive role of personal values
in data privacy decisions. To explore how notices could be repurposed to
support such decisions, I utilize Suzy Killmister's four-dimensional theory of
autonomy (4DT) to operationalize value-centered privacy decisions. I then
assess the degree that an existing technology, Personalized Privacy Assistants
(PPAs), uses notices in a manner that allows for value-centered
decision-making. Lastly, I explore the implications of the PPA assessment for
designing a new assistant, called a value-centered privacy assistant (VcPA). A
VcPA could ideally utilize notices to assist users in value-centered app
selection and in other data privacy decisions.
Related papers
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP motivated by applications where it is necessary to ensure uniform privacy protection across users.
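The distinction between example-level and user-level DP can be illustrated with a toy aggregation step: each user's *total* contribution is clipped before noise is added, so the guarantee holds per user rather than per record. This is a minimal, hypothetical sketch (function name, Gaussian noise, and parameters are illustrative, not the paper's actual mechanism):

```python
# Sketch of user-level DP aggregation: bound each user's summed
# contribution, then add noise to the clipped total.
import random
from collections import defaultdict

def user_level_sum(records, clip: float, noise_scale: float) -> float:
    """records: iterable of (user_id, value) pairs.
    Clip each user's total contribution to [-clip, clip], sum the
    clipped totals, and add Gaussian noise (simplified for illustration)."""
    per_user = defaultdict(float)
    for uid, value in records:
        per_user[uid] += value
    total = sum(max(-clip, min(clip, s)) for s in per_user.values())
    return total + random.gauss(0.0, noise_scale)
```

For example, a user contributing ten records still moves the clipped sum by at most `clip`, which is what makes the protection uniform across users.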
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- The Privacy-Value-App Relationship and the Value-Centered Privacy Assistant [3.885316081594592]
We aim to better understand the relationship between our values, our privacy preferences, and our app choices.
We explore the effectiveness of a smartphone value-centered privacy assistant (VcPA) at promoting value-centered app selection.
arXiv Detail & Related papers (2023-08-10T17:04:12Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
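The three EVR steps can be sketched as a small pipeline. This is an illustrative toy, assuming a Laplace mechanism whose privacy parameter is estimated as sensitivity divided by noise scale; all function names are hypothetical and this is not the paper's actual accounting procedure:

```python
# Toy estimate-verify-release (EVR) flow for a Laplace mechanism.
import random

def estimate_epsilon(noise_scale: float, sensitivity: float = 1.0) -> float:
    """Estimate: for the Laplace mechanism, eps = sensitivity / scale."""
    return sensitivity / noise_scale

def verify(estimated_eps: float, target_eps: float) -> bool:
    """Verify: check the estimated guarantee against the privacy budget."""
    return estimated_eps <= target_eps

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) as a difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def release(query_value: float, noise_scale: float, target_eps: float):
    """Release the noised answer only if the estimate verifies."""
    eps_hat = estimate_epsilon(noise_scale)
    if not verify(eps_hat, target_eps):
        return None  # refuse to release: guarantee not met
    return query_value + laplace_noise(noise_scale)
```

With a budget of eps = 1.0, a noise scale of 0.5 fails verification (estimated eps = 2.0) and nothing is released, while a scale of 2.0 passes and a noised value comes back.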
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric, the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this problem.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- A Self-aware Personal Assistant for Making Personalized Privacy Decisions [3.988307519677766]
This paper proposes a personal assistant that uses deep learning to classify content based on its privacy label.
By factoring in the user's own understanding of privacy, such as risk factors or own labels, the personal assistant can personalize its recommendations per user.
arXiv Detail & Related papers (2022-05-13T10:15:04Z)
- Leveraging Privacy Profiles to Empower Users in the Digital Society [7.350403786094707]
Privacy and ethics of citizens are at the core of the concerns raised by our increasingly digital society.
We focus on the privacy dimension and contribute a step in the above direction through an empirical study on an existing dataset collected from the fitness domain.
The results reveal that a compact set of semantic-driven questions helps distinguish users better than a complex domain-dependent one.
arXiv Detail & Related papers (2022-04-01T15:31:50Z)
- Fighting the Fog: Evaluating the Clarity of Privacy Disclosures in the Age of CCPA [29.56312492076473]
Vagueness and ambiguity in privacy policies threaten the ability of consumers to make informed choices about how businesses collect, use, and share personal information.
The California Consumer Privacy Act (CCPA) of 2018 was intended to provide Californian consumers with more control by mandating that businesses clearly disclose their data practices.
Our results suggest that CCPA's mandates for privacy disclosures, as currently implemented, have not yet yielded the level of clarity they were designed to deliver.
arXiv Detail & Related papers (2021-09-28T15:40:57Z)
- Is Downloading this App Consistent with my Values? Conceptualizing a Value-Centered Privacy Assistant [0.0]
I propose that data privacy decisions can be understood as an expression of user values.
I further propose the creation of a value-centered privacy assistant (VcPA).
arXiv Detail & Related papers (2021-06-23T15:08:58Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantee.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
Third, we design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)
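The location policy graph above can be pictured as a simple graph structure: nodes are discrete locations, and an edge between two locations expresses the requirement that a released output must not let an observer distinguish them. This is a hypothetical data-structure sketch (class and method names are illustrative, not PGLP's actual API):

```python
# Minimal sketch of a location policy graph: edges mark pairs of
# locations the release mechanism must keep indistinguishable.
from collections import defaultdict

class LocationPolicyGraph:
    def __init__(self):
        # adjacency map: location -> set of locations it must be
        # indistinguishable from
        self.edges = defaultdict(set)

    def require_indistinguishable(self, loc_a: str, loc_b: str) -> None:
        """Add an undirected policy edge between two locations."""
        self.edges[loc_a].add(loc_b)
        self.edges[loc_b].add(loc_a)

    def is_protected(self, loc_a: str, loc_b: str) -> bool:
        """Does the policy require these two locations to be confusable?"""
        return loc_b in self.edges[loc_a]

# Usage: a user asks that "home" and "office" never be distinguishable.
graph = LocationPolicyGraph()
graph.require_indistinguishable("home", "office")
```

A release mechanism could then consult `is_protected` when deciding how much noise a reported location needs, which is what makes the policy customizable per user.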
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.