Tapping into Privacy: A Study of User Preferences and Concerns on
Trigger-Action Platforms
- URL: http://arxiv.org/abs/2308.06148v1
- Date: Fri, 11 Aug 2023 14:25:01 GMT
- Title: Tapping into Privacy: A Study of User Preferences and Concerns on
Trigger-Action Platforms
- Authors: Piero Romare, Victor Morel, Farzaneh Karegar, Simone Fischer-H\"ubner
- Abstract summary: Internet of Things (IoT) devices are rapidly increasing in popularity, with more individuals using Internet-connected devices that continuously monitor their activities.
This work explores the privacy concerns and expectations of end-users related to Trigger-Action Platforms (TAPs) in the context of the IoT.
TAPs allow users to customize their smart environments by creating rules that trigger actions based on specific events or conditions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Internet of Things (IoT) devices are rapidly increasing in
popularity, with more individuals using Internet-connected devices that
continuously monitor their activities. This work explores the privacy concerns and expectations
of end-users related to Trigger-Action platforms (TAPs) in the context of the
Internet of Things (IoT). TAPs allow users to customize their smart
environments by creating rules that trigger actions based on specific events or
conditions. As personal data flows between different entities, there is a
potential for privacy concerns. In this study, we aimed to identify the privacy
factors that impact users' concerns and preferences for using IoT TAPs. To
address this objective, we conducted three focus groups with 15
participants and extracted nine themes related to privacy factors using
thematic analysis. Our participants particularly value control over, and
transparency into, the automation, and they are concerned about unexpected data
inferences, risks, and unforeseen consequences of the automation for themselves
and for bystanders. The identified privacy factors can help researchers derive
predefined, selectable profiles of privacy-permission settings for IoT TAPs
that represent the preferences of different types of users, as a basis for
designing usable privacy controls for IoT TAPs.
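The kind of trigger-action rule the study examines can be pictured as data plus a guard. The sketch below is purely illustrative (the `TapRule` class and field names are assumptions, not taken from any real TAP); it makes explicit the data flow between services that participants wanted transparency and control over.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TapRule:
    """An IFTTT-style rule: when `trigger` fires, run `action`.

    `shared_fields` records which payload keys the action is allowed to
    read -- a stand-in for the per-rule privacy permissions the study
    motivates. Everything else in the event payload is withheld.
    """
    name: str
    trigger: str                    # event identifier, e.g. "door.opened"
    action: Callable[[dict], None]  # what to do with the (filtered) payload
    shared_fields: tuple = ()       # payload keys the action may read

    def fire(self, event: str, payload: dict) -> bool:
        """Run the action if `event` matches; return whether it ran."""
        if event != self.trigger:
            return False
        # Data minimization: forward only the fields the user consented to share.
        self.action({k: v for k, v in payload.items() if k in self.shared_fields})
        return True

# Example: a doorbell rule that shares the timestamp but not the camera image.
alerts = []
rule = TapRule("doorbell", "door.opened", alerts.append, shared_fields=("time",))
rule.fire("door.opened", {"time": "18:02", "camera_image": "<bytes>"})
# alerts now holds [{"time": "18:02"}] -- the camera image never reaches the action.
```

A selectable privacy profile, as proposed in the abstract, would then amount to a preset choice of `shared_fields` per rule category.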
Related papers
- Can Humans Oversee Agents to Prevent Privacy Leakage? A Study on Privacy Awareness, Preferences, and Trust in Language Model Agents [1.5020330976600738]
Language model (LM) agents that act on users' behalf for personal tasks can boost productivity, but they are also susceptible to unintended privacy leakage. We present the first study on people's capacity to oversee the privacy implications of LM agents.
arXiv Detail & Related papers (2024-11-02T19:15:42Z)
- Collaborative Inference over Wireless Channels with Feature Differential Privacy
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications. However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed in the process. We propose a novel privacy-preserving collaborative inference mechanism in which each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. State-of-the-art LMs, such as GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are "almost indistinguishable" with or without any particular privacy unit. We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- PriviFy: Designing Tangible Interfaces for Configuring IoT Privacy Preferences [1.4999444543328289]
We introduce PriviFy, a novel and user-friendly tangible interface that simplifies the configuration of smart devices' privacy settings. We envision that the positive feedback and user experiences from our study will inspire product developers and smart device manufacturers to incorporate the useful design elements we have identified.
arXiv Detail & Related papers (2024-06-08T12:35:46Z)
- IDPFilter: Mitigating Interdependent Privacy Issues in Third-Party Apps [0.30693357740321775]
Third-party apps have increased concerns about interdependent privacy (IDP). This paper provides a comprehensive investigation into the previously under-investigated IDP issues of third-party apps. We propose IDPFilter, a platform-agnostic API that enables application providers to minimize collateral information collection.
arXiv Detail & Related papers (2024-05-02T16:02:13Z)
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task. Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information. We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss. We introduce a novel metric, the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion a subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a cross-network social user embedding framework, DP-CroSUE, to learn comprehensive representations of users in a privacy-preserving way. In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations across heterogeneous data types. To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- Leveraging Privacy Profiles to Empower Users in the Digital Society [7.350403786094707]
Privacy and the ethics of citizens are at the core of the concerns raised by our increasingly digital society. We focus on the privacy dimension and contribute a step in this direction through an empirical study on an existing dataset collected from the fitness domain. The results reveal that a compact set of semantic-driven questions helps distinguish users better than a complex, domain-dependent one.
arXiv Detail & Related papers (2022-04-01T15:31:50Z)
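Several of the papers above build on differential privacy. As background only (this is the textbook Laplace mechanism, not the specific mechanisms those papers propose), a minimal sketch of an ε-DP numeric release:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Return `true_value` plus Laplace(0, sensitivity/epsilon) noise.

    Calibrating the noise scale to sensitivity/epsilon yields
    epsilon-differential privacy for a query whose output changes by at
    most `sensitivity` when any one individual's data changes.
    """
    scale = sensitivity / epsilon
    # Inverse-CDF sampling: u is uniform on [-0.5, 0.5); the boundary case
    # u == -0.5 has probability ~0 and is ignored in this sketch.
    u = rng.random() - 0.5
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Example: release a count of 10 with sensitivity 1 under epsilon = 1.
noisy_count = laplace_mechanism(10.0, 1.0, 1.0)
```

Smaller ε means a larger noise scale and stronger privacy; user-level DP (as in the fine-tuning paper above) applies the same idea with the "individual" defined as a whole user rather than a single record.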
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.