Privy: Envisioning and Mitigating Privacy Risks for Consumer-facing AI Product Concepts
- URL: http://arxiv.org/abs/2509.23525v1
- Date: Sat, 27 Sep 2025 23:08:24 GMT
- Title: Privy: Envisioning and Mitigating Privacy Risks for Consumer-facing AI Product Concepts
- Authors: Hao-Ping Lee, Yu-Ju Yang, Matthew Bilik, Isadora Krsek, Thomas Serban von Davier, Kyzyl Monteiro, Jason Lin, Shivani Agarwal, Jodi Forlizzi, Sauvik Das
- Abstract summary: AI creates and exacerbates privacy risks, yet practitioners lack effective resources to identify and mitigate these risks. We present Privy, a tool that guides practitioners through structured privacy impact assessments.
- Score: 15.368491131846461
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: AI creates and exacerbates privacy risks, yet practitioners lack effective resources to identify and mitigate these risks. We present Privy, a tool that guides practitioners through structured privacy impact assessments to: (i) identify relevant risks in novel AI product concepts, and (ii) propose appropriate mitigations. Privy was shaped by a formative study with 11 practitioners, which informed two versions -- one LLM-powered, the other template-based. We evaluated these two versions of Privy through a between-subjects, controlled study with 24 separate practitioners, whose assessments were reviewed by 13 independent privacy experts. Results show that Privy helps practitioners produce privacy assessments that experts deemed high quality: practitioners identified relevant risks and proposed appropriate mitigation strategies. These effects were augmented in the LLM-powered version. Practitioners themselves rated Privy as being useful and usable, and their feedback illustrates how it helps overcome long-standing awareness, motivation, and ability barriers in privacy work.
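The paper itself does not include code, but the workflow it describes is concrete enough to sketch. Below is a minimal, hypothetical illustration of the kind of assessment step Privy's LLM-powered version performs; the prompt wording, class, and function names are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    risks: str        # risks the model identified in the concept
    mitigations: str  # mitigations it proposed for those risks

# Hypothetical prompt structure for one structured assessment pass.
ASSESSMENT_PROMPT = """You are assisting with a privacy impact assessment.
Product concept: {concept}
1. List the privacy risks this concept creates or exacerbates.
2. For each risk, propose an appropriate mitigation."""

def assess_concept(concept: str, llm) -> Assessment:
    """Run one assessment pass; `llm` is any callable that maps a
    prompt string to a completion string."""
    reply = llm(ASSESSMENT_PROMPT.format(concept=concept))
    # Naive parse: assume the model answers the two numbered items in order.
    risks, _, mitigations = reply.partition("2.")
    return Assessment(risks=risks.strip(), mitigations=mitigations.strip())
```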
Related papers
- Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy [50.66105844449181]
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms. We propose an extended iDP privacy contract that uses divergence-based bounds to provide users with a hard upper bound on their excess vulnerability.
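For background on the guarantee at stake here, a standard statement of the per-user bound in individual DP (a textbook form, not necessarily the exact contract this paper proposes):

```latex
% Individual DP: each user i receives a personal budget \varepsilon_i.
% For all datasets D, D' that differ only in user i's record, and all
% measurable output sets S:
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon_i}\,\Pr[\mathcal{M}(D') \in S]
```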
arXiv Detail & Related papers (2026-01-19T10:26:12Z) - PrivacyMotiv: Speculative Persona Journeys for Empathic and Motivating Privacy Reviews in UX Design [21.44559961372506]
PrivacyMotiv generates speculative personas with UX user journeys centered on individuals vulnerable to privacy risks. In a study with professional UX practitioners, we show significant improvements in empathy, intrinsic motivation, and perceived usefulness. This work contributes a promising privacy review approach that addresses the motivational barriers in privacy-aware UX.
arXiv Detail & Related papers (2025-10-03T23:14:22Z) - Differential Privacy in Machine Learning: From Symbolic AI to LLMs [49.1574468325115]
Differential privacy provides a formal framework to mitigate privacy risks. It ensures that the inclusion or exclusion of any single data point does not significantly alter the output of an algorithm.
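To make that guarantee concrete, here is a minimal sketch of the classic Laplace mechanism for a counting query; this is standard DP background, not code from the surveyed paper.

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise, sampled as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """epsilon-DP count. A counting query changes by at most 1 when one
    record is added or removed (sensitivity 1), so Laplace noise with
    scale 1/epsilon masks any individual's presence or absence."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: count users over 30 with epsilon = 0.5.
ages = [21, 34, 45, 29, 62]
print(private_count(ages, lambda a: a > 30, epsilon=0.5))
```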
arXiv Detail & Related papers (2025-06-13T11:30:35Z) - PrivaCI-Bench: Evaluating Privacy with Contextual Integrity and Legal Compliance [44.287734754038254]
We present PrivaCI-Bench, a contextual privacy evaluation benchmark for generative large language models (LLMs). We evaluate the latest LLMs, including the recent reasoner models QwQ-32B and Deepseek R1. Our experimental results suggest that although LLMs can effectively capture key CI parameters inside a given context, they still require further advancements for privacy compliance.
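Contextual integrity (CI) models each information flow with a small, fixed set of parameters; those are the "CI parameters" the benchmark tests for. A minimal sketch of that structure is below; the field names follow the standard CI vocabulary, not necessarily this benchmark's schema.

```python
from dataclasses import dataclass

@dataclass
class InformationFlow:
    """The five contextual-integrity parameters of one information flow."""
    sender: str                  # who transmits the information
    recipient: str               # who receives it
    subject: str                 # whom the information is about
    attribute: str               # the type of information transmitted
    transmission_principle: str  # the constraint governing the flow
```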
arXiv Detail & Related papers (2025-02-24T10:49:34Z) - Real-Time Privacy Risk Measurement with Privacy Tokens for Gradient Leakage [15.700803673467641]
Deep learning models in privacy-sensitive domains have amplified concerns regarding privacy risks. We propose the concept of privacy tokens, which are derived directly from private gradients during training. Privacy tokens offer valuable insights into the extent of private information leakage from training data. We employ Mutual Information (MI) as a robust metric to quantify the relationship between training data and gradients.
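For reference, the MI quantity in question, between training data $X$ and observed gradients $G$, has the standard definition:

```latex
% Mutual information between training data X and gradients G:
% high I(X; G) means the gradients reveal much about the data.
I(X; G) \;=\; \mathbb{E}_{p(x,g)}\!\left[\log \frac{p(x,g)}{p(x)\,p(g)}\right]
        \;=\; H(X) - H(X \mid G)
```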
arXiv Detail & Related papers (2025-02-05T06:20:20Z) - Multi-P$^2$A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models [65.2761254581209]
Based on Multi-P$^2$A, we evaluate the privacy preservation capabilities of 21 open-source and 2 closed-source Large Vision-Language Models (LVLMs). Our results reveal that current LVLMs generally pose a high risk of facilitating privacy breaches.
arXiv Detail & Related papers (2024-12-27T07:33:39Z) - How Private are Language Models in Abstractive Summarization? [36.801842863853715]
In sensitive domains such as medicine and law, protecting sensitive information is critical. This poses challenges for sharing valuable data such as medical reports and legal case summaries. It is still an open question to what extent language models can provide privacy-preserving summaries from non-private source documents.
arXiv Detail & Related papers (2024-12-16T18:08:22Z) - PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
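The abstract does not specify how leakage is scored, but a minimal, hypothetical sketch of the kind of check such a benchmark runs over agent outputs might look like this (the function names and the verbatim-matching heuristic are assumptions):

```python
def leaked(agent_output: str, secrets: list[str]) -> bool:
    """Flag a trajectory as leaking if any seeded secret surfaces verbatim.
    (Real evaluations also need paraphrase-aware judging, e.g. an LLM judge.)"""
    text = agent_output.lower()
    return any(s.lower() in text for s in secrets)

def leakage_rate(outputs: list[str], secrets_per_case: list[list[str]]) -> float:
    """Fraction of cases whose output leaks at least one seeded secret."""
    hits = sum(leaked(o, s) for o, s in zip(outputs, secrets_per_case))
    return hits / len(outputs)
```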
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of this survey literature to provide a concise and usable overview of privacy risks in general-purpose AI systems (GPAIS).
arXiv Detail & Related papers (2024-07-02T07:49:48Z) - Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z) - Technocracy, pseudoscience and performative compliance: the risks of privacy risk assessments. Lessons from NIST's Privacy Risk Assessment Methodology [0.0]
Privacy risk assessments have been touted as an objective, principled way to encourage organizations to implement privacy-by-design.
Existing guidelines and methods remain vague, and there is little empirical evidence on privacy harms.
We highlight the limitations and pitfalls of what is essentially a utilitarian and technocratic approach.
arXiv Detail & Related papers (2023-08-24T01:32:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.