Leveraging Privacy Profiles to Empower Users in the Digital Society
- URL: http://arxiv.org/abs/2204.00011v1
- Date: Fri, 1 Apr 2022 15:31:50 GMT
- Title: Leveraging Privacy Profiles to Empower Users in the Digital Society
- Authors: Davide Di Ruscio, Paola Inverardi, Patrizio Migliarini, Phuong T.
Nguyen
- Abstract summary: Privacy and ethics of citizens are at the core of the concerns raised by our increasingly digital society.
We focus on the privacy dimension and contribute a step in the above direction through an empirical study on an existing dataset collected from the fitness domain.
The results reveal that a compact set of semantic-driven questions helps distinguish users better than a complex domain-dependent one.
- Score: 7.350403786094707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy and ethics of citizens are at the core of the concerns raised by our
increasingly digital society. Profiling users is standard practice for software
applications, which creates the need, also enforced by laws, for users to
manage privacy settings properly in order to protect personally identifiable
information and express their personal ethical preferences. AI technologies
that empower users to interact with the
digital world by reflecting their personal ethical preferences can be key
enablers of a trustworthy digital society. We focus on the privacy dimension
and contribute a step in the above direction through an empirical study on an
existing dataset collected from the fitness domain. We find out which set of
questions is appropriate to differentiate users according to their preferences.
The results reveal that a compact set of semantic-driven questions (about
domain-independent privacy preferences) helps distinguish users better than a
complex domain-dependent one. This confirms the study's hypothesis that moral
attitudes are the relevant piece of information to collect. Based on the
outcome, we implement a recommender system to provide users with suitable
recommendations related to privacy choices. We then show that the proposed
recommender system provides relevant settings to users, obtaining high
accuracy.
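The abstract does not spell out how the recommender works; below is a minimal sketch of one plausible approach, assuming users are profiled by their answers to a small set of Likert-scale, domain-independent privacy questions and settings are recommended from the most similar known users. All names and data are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: recommend privacy settings to a new user from
# the settings of the k most similar existing users, where similarity is
# computed over answers to a compact set of semantic privacy questions.

def recommend_settings(new_answers, profiles, k=3):
    """profiles: list of (answer_vector, settings_dict) for known users."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Rank known users by similarity of their question answers.
    nearest = sorted(profiles, key=lambda p: distance(p[0], new_answers))[:k]

    # Majority vote per setting among the k nearest neighbours.
    recommendation = {}
    for setting in nearest[0][1]:
        votes = [p[1][setting] for p in nearest]
        recommendation[setting] = max(set(votes), key=votes.count)
    return recommendation

# Hypothetical data: three users answered four 1-5 Likert questions.
profiles = [
    ([5, 4, 5, 4], {"share_location": False, "public_profile": False}),
    ([1, 2, 1, 2], {"share_location": True,  "public_profile": True}),
    ([4, 5, 4, 5], {"share_location": False, "public_profile": False}),
]
print(recommend_settings([5, 5, 4, 4], profiles))
```

The key design point this sketch mirrors is that the profile vector is small and domain-independent, so the same answers can drive recommendations across different applications.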
Related papers
- A Deep Dive into Fairness, Bias, Threats, and Privacy in Recommender Systems: Insights and Future Research [45.86892639035389]
This study explores fairness, bias, threats, and privacy in recommender systems.
It examines how algorithmic decisions can unintentionally reinforce biases or marginalize specific user and item groups.
The study suggests future research directions to improve recommender systems' robustness, fairness, and privacy.
arXiv Detail & Related papers (2024-09-19T11:00:35Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
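A minimal illustration of the user-level DP idea summarized above, assuming a simple bounded-sum statistic and the standard Laplace mechanism (this is a generic sketch, not the paper's fine-tuning method):

```python
# User-level DP sketch: bound each user's TOTAL contribution (not each
# record), then add Laplace noise calibrated to that per-user bound, so
# the released sum is "almost indistinguishable" whether or not any
# single user's entire data is included.
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def user_level_dp_sum(per_user_values, user_bound=1.0, epsilon=1.0):
    # Clip each user's aggregate contribution to [-user_bound, user_bound].
    clipped = [max(-user_bound, min(user_bound, sum(vals)))
               for vals in per_user_values]
    # Sensitivity to adding/removing one whole user is user_bound,
    # so Laplace noise with scale user_bound / epsilon gives epsilon-DP.
    return sum(clipped) + laplace_noise(user_bound / epsilon)
```

Clipping at the user level (rather than per record) is what distinguishes user-level DP from the more common example-level guarantee.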
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- User Consented Federated Recommender System Against Personalized Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
arXiv Detail & Related papers (2023-12-23T09:44:57Z)
- Tapping into Privacy: A Study of User Preferences and Concerns on Trigger-Action Platforms [0.0]
The Internet of Things (IoT) devices are rapidly increasing in popularity, with more individuals using Internet-connected devices that continuously monitor their activities.
This work explores privacy concerns and expectations of end-users related to Trigger-Action platforms (TAPs) in the context of the Internet of Things (IoT).
TAPs allow users to customize their smart environments by creating rules that trigger actions based on specific events or conditions.
arXiv Detail & Related papers (2023-08-11T14:25:01Z)
- Randomized algorithms for precise measurement of differentially-private, personalized recommendations [6.793345945003182]
We propose an algorithm for personalized recommendations that facilitates both precise and differentially-private measurement.
We conduct offline experiments to quantify how the proposed privacy-preserving algorithm affects key metrics related to user experience, advertiser value, and platform revenue.
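The paper's specific randomized algorithm is not described here; as a hedged sketch of the general idea behind differentially-private measurement, the classic randomized-response primitive lets each client perturb its own report before aggregation, and the aggregator de-biases the noisy counts:

```python
# Randomized response sketch (a classic DP primitive, not the paper's
# algorithm): each client reports its true bit with probability
# e^eps / (e^eps + 1) and the flipped bit otherwise.
import math
import random

def randomize(bit, epsilon):
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_true else 1 - bit

def estimate_rate(reports, epsilon):
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    # Invert E[observed] = (2 * p_true - 1) * rate + (1 - p_true).
    return (observed - (1 - p_true)) / (2 * p_true - 1)

# Hypothetical demo: 20% true rate, measured privately over 1000 clients.
random.seed(7)
truth = [1] * 200 + [0] * 800
reports = [randomize(b, 2.0) for b in truth]
print(round(estimate_rate(reports, 2.0), 2))
```

The trade-off the paper's offline experiments quantify (utility vs. privacy) shows up here as estimation error growing when epsilon shrinks.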
arXiv Detail & Related papers (2023-08-07T17:34:58Z)
- Systematic Review on Privacy Categorization [1.5377372227901214]
This work aims to present a systematic review of the literature on privacy categorization.
Privacy categorization involves the possibility to classify users according to specific prerequisites.
arXiv Detail & Related papers (2023-07-07T15:18:26Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this problem.
We created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- How Much User Context Do We Need? Privacy by Design in Mental Health NLP Application [33.3172788815152]
Clinical tasks such as mental health assessment from text must take social constraints into account.
We present the first analysis juxtaposing user history length and differential privacy budgets, and elaborate on how modeling additional user context enables utility preservation.
arXiv Detail & Related papers (2022-09-05T15:41:45Z)
- A Self-aware Personal Assistant for Making Personalized Privacy Decisions [3.988307519677766]
This paper proposes a personal assistant that uses deep learning to classify content based on its privacy label.
By factoring in the user's own understanding of privacy, such as risk factors or own labels, the personal assistant can personalize its recommendations per user.
arXiv Detail & Related papers (2022-05-13T10:15:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.