A Self-aware Personal Assistant for Making Personalized Privacy
Decisions
- URL: http://arxiv.org/abs/2205.06544v1
- Date: Fri, 13 May 2022 10:15:04 GMT
- Title: A Self-aware Personal Assistant for Making Personalized Privacy
Decisions
- Authors: Gonul Ayci, Murat Sensoy, Arzucan Özgür, Pinar Yolum
- Abstract summary: This paper proposes a personal assistant that uses evidential deep learning to classify content based on its privacy label.
By factoring in the user's own understanding of privacy, such as risk factors or the user's own labels, the personal assistant can personalize its recommendations per user.
- Score: 3.988307519677766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many software systems, such as online social networks, enable users to share
information about themselves. While the action of sharing is simple, it
requires an elaborate thought process on privacy: what to share, with whom to
share, and for what purposes. Thinking about these for each piece of content to
be shared is tedious. Recent approaches to tackle this problem build personal
assistants that can help users by learning what is private over time and
recommending privacy labels such as private or public to individual content
that a user considers sharing. However, privacy is inherently ambiguous and
highly personal. Existing approaches to recommend privacy decisions do not
address these aspects of privacy sufficiently. Ideally, a personal assistant
should be able to adjust its recommendation based on a given user, considering
that user's privacy understanding. Moreover, the personal assistant should be
able to assess when its recommendation would be uncertain and let the user make
the decision on her own. Accordingly, this paper proposes a personal assistant
that uses evidential deep learning to classify content based on its privacy
label. An important characteristic of the personal assistant is that it can
model its uncertainty in its decisions explicitly, determine when it does not
know the answer, and refrain from making a recommendation when its uncertainty
is high, deferring the decision to the user. By factoring in the user's own
understanding of privacy, such as risk factors or the user's own labels, the
personal assistant can personalize its
recommendations per user. We evaluate our proposed personal assistant using a
well-known data set. Our results show that our personal assistant can
accurately identify uncertain cases, personalize its recommendations to its
user's needs, and thus help users preserve their privacy.
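The abstract's core mechanism, evidential classification plus uncertainty-based delegation, can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it assumes binary privacy labels, a toy feature vector per content item, and a hypothetical delegation threshold; the Dirichlet evidence and uncertainty formulation follows standard evidential deep learning, and names such as PrivacyEDL and DELEGATE_THRESHOLD are illustrative.

```python
# Minimal sketch of evidential classification with a "delegate to the user"
# rule, assuming binary privacy labels {private, public}. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 2           # private vs. public
DELEGATE_THRESHOLD = 0.5  # hypothetical cut-off on the uncertainty mass

class PrivacyEDL(nn.Module):
    """Maps a content feature vector to non-negative evidence per label."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_CLASSES),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # softplus keeps the evidence non-negative, as in evidential deep learning
        return F.softplus(self.net(x))

def recommend(model: PrivacyEDL, x: torch.Tensor):
    evidence = model(x)                       # e_k >= 0
    alpha = evidence + 1.0                    # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)
    prob = alpha / strength                   # expected class probabilities
    uncertainty = NUM_CLASSES / strength      # u = K / S, in (0, 1]
    label = prob.argmax(dim=-1)
    # Delegate: make no recommendation when the model "does not know".
    delegate = uncertainty.squeeze(-1) > DELEGATE_THRESHOLD
    return label, prob, uncertainty, delegate

if __name__ == "__main__":
    model = PrivacyEDL(in_dim=16)
    x = torch.randn(3, 16)                    # three pieces of content
    label, prob, u, delegate = recommend(model, x)
    for i in range(3):
        if delegate[i]:
            print(f"item {i}: uncertain (u={u[i].item():.2f}), ask the user")
        else:
            print(f"item {i}: recommend label {label[i].item()} "
                  f"(p={prob[i].max().item():.2f})")
```

Training the evidence head (e.g., with an evidential loss) and incorporating user-specific risk factors are out of scope here; the sketch only shows how explicit uncertainty lets the assistant abstain rather than guess.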
Related papers
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit (a standard formal definition is sketched after this list).
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - User Consented Federated Recommender System Against Personalized
Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
arXiv Detail & Related papers (2023-12-23T09:44:57Z) - Randomized algorithms for precise measurement of differentially-private,
personalized recommendations [6.793345945003182]
We propose an algorithm for personalized recommendations that facilitates both precise and differentially-private measurement.
We conduct offline experiments to quantify how the proposed privacy-preserving algorithm affects key metrics related to user experience, advertiser value, and platform revenue.
arXiv Detail & Related papers (2023-08-07T17:34:58Z) - PEAK: Explainable Privacy Assistant through Automated Knowledge
Extraction [1.0609815608017064]
This paper presents a privacy assistant for generating explanations for privacy decisions.
The generated explanations can be used by users to understand the recommendations of the privacy assistant.
We show how this can be realized by incorporating the generated explanations into a state-of-the-art privacy assistant.
arXiv Detail & Related papers (2023-01-05T14:25:20Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle the problem of end-user trust.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z) - Leveraging Privacy Profiles to Empower Users in the Digital Society [7.350403786094707]
Privacy and ethics of citizens are at the core of the concerns raised by our increasingly digital society.
We focus on the privacy dimension and contribute a step in this direction through an empirical study on an existing dataset collected from the fitness domain.
The results reveal that a compact set of semantic-driven questions helps distinguish users better than a complex domain-dependent one.
arXiv Detail & Related papers (2022-04-01T15:31:50Z) - "I need a better description": An Investigation Into User Expectations
For Differential Privacy [31.352325485393074]
We explore users' privacy expectations related to differential privacy.
We find that users care about the kinds of information leaks against which differential privacy protects.
We find that the ways in which differential privacy is described in the wild haphazardly set users' privacy expectations.
arXiv Detail & Related papers (2021-10-13T02:36:37Z) - Investigating Personalisation-Privacy Paradox Among Young Irish
Consumers: A Case of Smart Speakers [0.0]
This study investigates the personalisation-privacy paradox in the context of smart speakers.
It suggests a difference between the users and non-users of smart speakers in terms of their perception of privacy risks and corresponding privacy-preserving behaviours.
arXiv Detail & Related papers (2021-08-23T05:39:08Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantee.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
We design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)
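Several of the papers above build on (user-level) differential privacy. For reference, here is a minimal statement of the standard definition in the usual notation; this is a generic textbook formulation, not taken from any one of the listed papers.

```latex
% Standard (\varepsilon,\delta)-DP; the user-level variant changes the
% neighboring relation so the privacy unit is an entire user.
A randomized mechanism $M$ is $(\varepsilon,\delta)$-differentially private
if, for all pairs of datasets $D, D'$ differing in one privacy unit and all
measurable sets $S$ of outputs,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta .
\]
Under user-level DP, $D$ and $D'$ may differ in \emph{all} records
contributed by a single user, rather than in a single record.
```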