"Sorry for bugging you so much." Exploring Developers' Behavior Towards Privacy-Compliant Implementation
- URL: http://arxiv.org/abs/2504.06697v2
- Date: Thu, 01 May 2025 07:09:25 GMT
- Title: "Sorry for bugging you so much." Exploring Developers' Behavior Towards Privacy-Compliant Implementation
- Authors: Stefan Albert Horstmann, Sandy Hong, David Klein, Raphael Serafini, Martin Degeling, Martin Johns, Veelasha Moonsamy, Alena Naiakshina
- Abstract summary: We conducted a study with 30 professional software developers on privacy-sensitive programming tasks. None of the 3 tasks was solved in a privacy-compliant way by all 30 participants. Participants reported severe issues addressing common privacy requirements such as purpose limitation, user consent, and data minimization.
- Score: 7.736621118547534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While protecting user data is essential, software developers often fail to fulfill privacy requirements. However, the reasons why they struggle with privacy-compliant implementation remain unclear. Is it due to a lack of knowledge, or is it because of insufficient support? To provide foundational insights in this field, we conducted a qualitative 5-hour programming study with 30 professional software developers implementing 3 privacy-sensitive programming tasks that were designed with GDPR compliance in mind. To explore if and how developers implement privacy requirements, participants were divided into 3 groups: control, privacy prompted, and privacy expert-supported. After task completion, we conducted follow-up interviews. Alarmingly, almost all participants submitted non-GDPR-compliant solutions (79/90). In particular, none of the 3 tasks was solved in a privacy-compliant manner by all 30 participants, and the non-prompted control group produced the fewest privacy-compliant solution attempts (3 out of 30). Privacy prompting and expert support only slightly improved participants' submissions, with 6/30 and 8/30 privacy-compliant attempts, respectively. In fact, all participants reported severe issues addressing common privacy requirements such as purpose limitation, user consent, and data minimization. Counterintuitively, although most developers exhibited minimal confidence in their solutions, they rarely sought online assistance or contacted the privacy expert; only 4 out of 10 expert-supported participants explicitly asked for compliance confirmation. Instead, participants often relied on existing implementations and focused on implementing functionality and security first.
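To make the requirements named in the abstract concrete, here is a minimal, hypothetical sketch of how purpose limitation, user consent, and data minimization could be enforced in application code. All names (ConsentError, record_consent, process_user_data, the purpose-to-field mapping) are invented for illustration and are not taken from the study's tasks:

```python
from dataclasses import dataclass

class ConsentError(Exception):
    """Raised when processing is attempted without a valid consent basis."""

@dataclass
class ConsentRecord:
    user_id: str
    purposes: frozenset  # purposes the user explicitly agreed to

# Hypothetical in-memory consent store; a real system would persist this
# and record when and how consent was given (GDPR Art. 7 accountability).
_consent_store: dict = {}

def record_consent(user_id: str, purposes: set) -> None:
    _consent_store[user_id] = ConsentRecord(user_id, frozenset(purposes))

def process_user_data(user_id: str, profile: dict, purpose: str) -> dict:
    """Process data only for a purpose the user consented to (purpose
    limitation, Art. 5(1)(b)) and keep only the fields that purpose
    actually needs (data minimization, Art. 5(1)(c))."""
    record = _consent_store.get(user_id)
    if record is None or purpose not in record.purposes:
        raise ConsentError(f"No consent from {user_id} for purpose {purpose!r}")
    # Illustrative mapping of purposes to the minimal field set each needs.
    required_fields = {"shipping": {"name", "address"},
                       "newsletter": {"email"}}[purpose]
    return {k: v for k, v in profile.items() if k in required_fields}

# Usage: consent covers "newsletter" only, so only the email field is kept,
# and any attempt to process for "shipping" would raise ConsentError.
record_consent("u1", {"newsletter"})
print(process_user_data("u1", {"email": "a@b.c", "name": "A", "address": "X"},
                        "newsletter"))  # -> {'email': 'a@b.c'}
```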
Related papers
- MAGPIE: A dataset for Multi-AGent contextual PrIvacy Evaluation [54.410825977390274]
Existing benchmarks to evaluate contextual privacy in LLM agents primarily assess single-turn, low-complexity tasks. We first present MAGPIE, a benchmark comprising 158 real-life high-stakes scenarios across 15 domains. We then evaluate current state-of-the-art LLMs on their understanding of contextually private data and their ability to collaborate without violating user privacy.
arXiv Detail & Related papers (2025-06-25T18:04:25Z) - Privacy Bills of Materials: A Transparent Privacy Information Inventory for Collaborative Privacy Notice Generation in Mobile App Development [23.41168782020005]
We introduce PriBOM, a systematic software engineering approach to better capture and coordinate mobile app privacy information. PriBOM facilitates transparency-centric privacy documentation and specific privacy notice creation, enabling traceability and trackability of privacy practices.
arXiv Detail & Related papers (2025-01-02T08:14:52Z) - Multi-P$^2$A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models [65.2761254581209]
Based on Multi-P$^2$A, we evaluate the privacy preservation capabilities of 21 open-source and 2 closed-source Large Vision-Language Models (LVLMs).
Our results reveal that current LVLMs generally pose a high risk of facilitating privacy breaches.
arXiv Detail & Related papers (2024-12-27T07:33:39Z) - Privacy Leakage Overshadowed by Views of AI: A Study on Human Oversight of Privacy in Language Model Agent [1.5020330976600738]
Language model (LM) agents that act on users' behalf for personal tasks can boost productivity, but are also susceptible to unintended privacy leakage risks.
We present the first study on people's capacity to oversee the privacy implications of LM agents.
arXiv Detail & Related papers (2024-11-02T19:15:42Z) - A Large-Scale Privacy Assessment of Android Third-Party SDKs [17.245330733308375]
Third-party Software Development Kits (SDKs) are widely adopted in Android app development.
This convenience raises substantial concerns about unauthorized access to users' privacy-sensitive information.
Our study offers a targeted analysis of user privacy protection among Android third-party SDKs.
arXiv Detail & Related papers (2024-09-16T15:44:43Z) - PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, respectively, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
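For context, a brief sketch of the standard $(\varepsilon, \delta)$-DP guarantee, where the choice of privacy unit fixes what "neighboring" means; this is general DP background, not this paper's specific formulation:

```latex
% (eps, delta)-differential privacy: for all neighboring datasets D, D'
% and every measurable set S of outputs of the mechanism M,
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
% Under example-level DP, D and D' differ in one training example;
% under user-level DP, they may differ in one user's entire contribution,
% which is a strictly stronger requirement when users contribute many examples.
```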
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - Evaluating Privacy Perceptions, Experience, and Behavior of Software Development Teams [2.818645620433775]
Our survey includes 362 participants from 23 countries, encompassing roles such as product managers, developers, and testers.
Our results show diverse definitions of privacy across SDLC roles, emphasizing the need for a holistic privacy approach throughout the SDLC.
Most participants are more familiar with HIPAA and other regulations, with multi-jurisdictional compliance being their primary concern.
arXiv Detail & Related papers (2024-04-01T17:55:10Z) - Understanding How to Inform Blind and Low-Vision Users about Data Privacy through Privacy Question Answering Assistants [23.94659412932831]
Blind and low-vision (BLV) users face heightened security and privacy risks, but their risk mitigation is often insufficient.
Our study sheds light on BLV users' expectations when it comes to usability, accessibility, trust and equity issues regarding digital data privacy.
arXiv Detail & Related papers (2023-10-12T19:51:31Z) - Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We examined how explainability might help to tackle this trust problem.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
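For readers unfamiliar with the JDP notion referenced above, a brief sketch of the standard definition from the joint-DP literature (background only, not a claim about this paper's exact formulation; in the episodic-RL setting, the user associated with each episode is typically the privacy unit):

```latex
% Joint differential privacy (JDP): a mechanism M mapping n users' data
% to n per-user outputs is eps-JDP if, for every user i, every pair of
% datasets D, D' differing only in user i's data, and every event S over
% the outputs delivered to all users other than i (written M(D)_{-i}),
\Pr\bigl[M(D)_{-i} \in S\bigr] \;\le\; e^{\varepsilon}\,\Pr\bigl[M(D')_{-i} \in S\bigr]
% Intuition: changing one user's data barely changes what everyone else
% observes, while that user's own output (e.g., the actions taken in their
% episode) may still depend strongly on their data.
```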
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.