Investigating Personalisation-Privacy Paradox Among Young Irish
Consumers: A Case of Smart Speakers
- URL: http://arxiv.org/abs/2108.09945v1
- Date: Mon, 23 Aug 2021 05:39:08 GMT
- Title: Investigating Personalisation-Privacy Paradox Among Young Irish
Consumers: A Case of Smart Speakers
- Authors: Caoimhe O'Maonaigh and Deepak Saxena
- Abstract summary: This study investigates the personalisation-privacy paradox in the context of smart speakers.
It suggests a difference between the users and non-users of smart speakers in terms of their perception of privacy risks and corresponding privacy-preserving behaviours.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Personalisation refers to the tailoring of online services to match
consumers' interests. In order to provide a personalised service, companies
gather data on the consumer. Consumers must therefore navigate a trade-off:
they want the benefits of personalised information and services while
simultaneously wishing to protect themselves from privacy risks. However,
despite many individuals claiming that privacy is an essential right to them,
they behave contradictorily in online environments by not engaging in
privacy-preserving behaviours. This paradox is known as the
personalisation-privacy paradox. The
personalisation-privacy paradox has been studied in many different scenarios,
ranging from location-based advertising to online shopping. The objective of
this study is to investigate the personalisation-privacy paradox in the context
of smart speakers. Based on an exploratory study with young Irish consumers,
this study suggests a difference between the users and non-users of smart
speakers in terms of their perception of privacy risks and corresponding
privacy-preserving behaviours. In so doing, it also explains the existence of
the personalisation-privacy paradox and offers insights for further research.
Related papers
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- PrivacyCube: Data Physicalization for Enhancing Privacy Awareness in IoT [1.2564343689544843]
We describe PrivacyCube, a novel data physicalization designed to increase privacy awareness within smart home environments.
PrivacyCube visualizes IoT data consumption by displaying privacy-related notices.
Our results show that PrivacyCube helps home occupants comprehend IoT privacy better with significantly increased privacy awareness.
arXiv Detail & Related papers (2024-06-08T12:20:42Z)
- PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration [18.11846784025521]
PrivacyRestore is a plug-and-play method to protect the privacy of user inputs during inference.
We create three datasets, covering medical and legal domains, to evaluate the effectiveness of PrivacyRestore.
arXiv Detail & Related papers (2024-06-03T14:57:39Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We examine how explainability might help to address the problem of end-user trust.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- A Self-aware Personal Assistant for Making Personalized Privacy Decisions [3.988307519677766]
This paper proposes a personal assistant that uses deep learning to classify content based on its privacy label.
By factoring in the user's own understanding of privacy, such as risk factors or their own labels, the personal assistant can personalize its recommendations per user.
arXiv Detail & Related papers (2022-05-13T10:15:04Z)
- The Evolving Path of "the Right to Be Left Alone" - When Privacy Meets Technology [0.0]
This paper proposes a novel vision of the privacy ecosystem, introducing privacy dimensions, the related users' expectations, the privacy violations, and the changing factors.
We believe that promising approaches to tackle the privacy challenges move in two directions: (i) identification of effective privacy metrics; and (ii) adoption of formal tools to design privacy-compliant applications.
arXiv Detail & Related papers (2021-11-24T11:27:55Z)
- User Perception of Privacy with Ubiquitous Devices [5.33024001730262]
This study aims to explore and discover various concerns related to perception of privacy in this era of ubiquitous technologies.
Key themes include attitudes towards privacy in public and private spaces, privacy awareness, consent seeking, dilemmas/confusions related to various technologies, and the impact of attitudes and beliefs on individuals' actions regarding how to protect oneself from invasion of privacy in both public and private spaces.
arXiv Detail & Related papers (2021-07-23T05:01:44Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.