PrivacyMotiv: Speculative Persona Journeys for Empathic and Motivating Privacy Reviews in UX Design
- URL: http://arxiv.org/abs/2510.03559v1
- Date: Fri, 03 Oct 2025 23:14:22 GMT
- Title: PrivacyMotiv: Speculative Persona Journeys for Empathic and Motivating Privacy Reviews in UX Design
- Authors: Zeya Chen, Jianing Wen, Ruth Schmidt, Yaxing Yao, Toby Jia-Jun Li, Tianshi Li
- Abstract summary: PrivacyMotiv generates speculative personas with UX user journeys centered on individuals vulnerable to privacy risks. In a study with professional UX practitioners, we show significant improvements in empathy, intrinsic motivation, and perceived usefulness. This work contributes a promising privacy review approach which addresses the motivational barriers in privacy-aware UX.
- Score: 21.44559961372506
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: UX professionals routinely conduct design reviews, yet privacy concerns are often overlooked -- not only due to limited tools, but more critically because of low intrinsic motivation. Limited privacy knowledge, weak empathy for unexpectedly affected users, and low confidence in identifying harms make it difficult to address risks. We present PrivacyMotiv, an LLM-powered system that supports privacy-oriented design diagnosis by generating speculative personas with UX user journeys centered on individuals vulnerable to privacy risks. Drawing on narrative strategies, the system constructs relatable and attention-drawing scenarios that show how ordinary design choices may cause unintended harms, expanding the scope of privacy reflection in UX. In a within-subjects study with professional UX practitioners (N=16), we compared participants' self-proposed methods with PrivacyMotiv across two privacy review tasks. Results show significant improvements in empathy, intrinsic motivation, and perceived usefulness. This work contributes a promising privacy review approach which addresses the motivational barriers in privacy-aware UX.
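The abstract describes the core mechanism but no implementation. As a minimal sketch of the idea (prompting an LLM to produce a speculative persona and a harm-centered journey for a given design), the snippet below uses the OpenAI Python client; the prompt wording, the `gpt-4o` model name, and the `persona_journey` helper are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' implementation): prompt an LLM to
# generate a speculative persona journey for a privacy-oriented design review.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = """You are helping a UX team review a design for privacy harms.
Design under review: {design_description}

1. Invent a specific, relatable persona who is unusually vulnerable to
   privacy risks in this design (e.g., due to their life circumstances).
2. Write a short user journey for this persona, step by step, showing how
   ordinary design choices could cause them unintended privacy harm.
3. End with the single design decision most responsible for the harm."""

def persona_journey(design_description: str, model: str = "gpt-4o") -> str:
    """Return a speculative persona journey for the given design description."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(design_description=design_description)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(persona_journey("A fitness app that shares workout routes publicly by default."))
```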
Related papers
- Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy [50.66105844449181]
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms. We propose $(\varepsilon_i, \delta_i)$-iDP, a privacy contract that uses divergences to provide users with a hard upper bound on their excess vulnerability.
arXiv Detail & Related papers (2026-01-19T10:26:12Z)
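For context, the per-user guarantee this entry builds on can be stated in its standard textbook form (this is the baseline iDP definition, not the paper's refined contract):

```latex
% Standard individual differential privacy (iDP), stated per user.
% A mechanism M satisfies \varepsilon_i-iDP for user i if, for every pair
% of datasets D, D' differing only in user i's record, and every output
% set S:
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon_i} \,\Pr[M(D') \in S].
\]
```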
- PrivacyReasoner: Can LLM Emulate a Human-like Privacy Mind? [13.499949825312797]
This paper introduces PRA, an AI-agent design for simulating how individual users form privacy concerns in response to real-world news. Experiments on real-world Hacker News discussions show that PRA outperforms baseline agents in privacy concern prediction.
arXiv Detail & Related papers (2026-01-14T04:47:06Z)
- Privy: Envisioning and Mitigating Privacy Risks for Consumer-facing AI Product Concepts [15.368491131846461]
AI creates and exacerbates privacy risks, yet practitioners lack effective resources to identify and mitigate these risks. We present Privy, a tool that guides practitioners through structured privacy impact assessments.
arXiv Detail & Related papers (2025-09-27T23:08:24Z)
- Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents [33.26308626066122]
We characterize the notion of contextual privacy for user interactions with LLM-based conversational agents (LCAs). It aims to minimize privacy risks by ensuring that users (the sender) disclose only information that is both relevant and necessary for achieving their intended goals. We propose a locally deployable framework that operates between users and LCAs, identifying and reformulating out-of-context information in user prompts.
arXiv Detail & Related papers (2025-02-22T09:05:39Z)
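The framework above is LLM-based; as a toy stand-in, a local middleware that scrubs out-of-context details before a prompt reaches the agent might look like the sketch below. The patterns and redaction policy are illustrative assumptions, not the paper's method.

```python
# Toy sketch of a locally deployable "contextual privacy" middleware:
# remove details irrelevant to the user's stated goal before the prompt
# is sent to a conversational agent. The paper uses an LLM for this
# detection/reformulation step; the regexes here are only a stand-in.
import re

OUT_OF_CONTEXT_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def reformulate(prompt: str) -> str:
    """Replace likely out-of-context identifiers with neutral placeholders."""
    for label, pattern in OUT_OF_CONTEXT_PATTERNS.items():
        prompt = pattern.sub(f"[{label} withheld]", prompt)
    return prompt

# The agent only ever sees the reformulated prompt:
print(reformulate("Rewrite my bio. My SSN is 123-45-6789, email a@b.com."))
```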
- Privacy Leakage Overshadowed by Views of AI: A Study on Human Oversight of Privacy in Language Model Agent [3.89966779727297]
Language model (LM) agents that act on users' behalf for personal tasks can boost productivity, but are also susceptible to unintended privacy leakage risks. We present the first study on people's capacity to oversee the privacy implications of LM agents.
arXiv Detail & Related papers (2024-11-02T19:15:42Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
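Leak rates like the 25.68%/38.69% figures above can be computed with a simple harness once each trajectory is checked for whether the sensitive item surfaces in the agent's output. A hypothetical sketch; the `Trajectory` shape and the substring check are assumptions, and real evaluations use more robust matching.

```python
# Hypothetical sketch of a leak-rate computation in the style of
# PrivacyLens-type evaluations: a case counts as a leak if the sensitive
# item appears in the agent's final action or message.
from dataclasses import dataclass

@dataclass
class Trajectory:
    sensitive_item: str   # the secret the agent should not reveal
    agent_output: str     # the agent's final action or message

def leak_rate(trajectories: list[Trajectory]) -> float:
    """Fraction of trajectories in which the sensitive item was revealed."""
    leaks = sum(t.sensitive_item.lower() in t.agent_output.lower()
                for t in trajectories)
    return leaks / len(trajectories)

cases = [
    Trajectory("diagnosis of anxiety", "I emailed the team your schedule."),
    Trajectory("salary of $92,000", "Her salary of $92,000 was shared."),
]
print(f"leak rate: {leak_rate(cases):.2%}")  # -> 50.00%
```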
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit. We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
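For reference, user-level DP differs from record-level DP only in the neighboring relation; a standard statement of the guarantee (textbook form, not specific to this paper):

```latex
% User-level (\varepsilon, \delta)-DP: datasets D and D' are neighbors if
% they differ in *all* records contributed by a single user (rather than
% in one record). A mechanism M is user-level (\varepsilon, \delta)-DP if,
% for all such neighbors and all output sets S:
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \,\Pr[M(D') \in S] + \delta.
\]
```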
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle the problem of end-user trust.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technical standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
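OPOM's exact objective is not given in this snippet, but the general recipe for a person-specific universal mask (optimize one perturbation that pushes all of a person's face embeddings away from their identity) can be sketched in PyTorch. The frozen encoder, step size, and cosine loss below are placeholder assumptions, not the OPOM algorithm itself.

```python
# Illustrative sketch (not the OPOM algorithm): learn one universal mask
# for a single person by pushing the embeddings of all of their masked
# photos away from their average identity embedding.
import torch

def person_specific_mask(photos: torch.Tensor,      # (N, 3, H, W), one person
                         embed: torch.nn.Module,    # pretrained face encoder, frozen
                         epsilon: float = 8 / 255,  # max per-pixel perturbation
                         steps: int = 100,
                         lr: float = 0.01) -> torch.Tensor:
    embed.eval()
    with torch.no_grad():
        identity = embed(photos).mean(dim=0, keepdim=True)  # identity prototype
    mask = torch.zeros_like(photos[:1], requires_grad=True)  # one shared mask
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        masked = (photos + mask).clamp(0, 1)
        # Minimize cosine similarity, i.e. push masked embeddings away
        # from the person's identity prototype.
        loss = torch.nn.functional.cosine_similarity(embed(masked), identity).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            mask.clamp_(-epsilon, epsilon)  # keep the cloak (near-)invisible
    return mask.detach()
```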
- Privacy Intelligence: A Survey on Image Privacy in Online Social Networks [20.778221889102866]
Recent image leaks from popular OSN services and the abuse of personal photos have prompted the public to rethink individual privacy needs in OSN image sharing. A more intelligent environment for privacy-friendly OSN image sharing is in demand. We present a definition and a taxonomy of OSN image privacy, and a high-level privacy analysis framework based on the lifecycle of OSN image sharing.
arXiv Detail & Related papers (2020-08-27T15:52:16Z)