Unpacking Privacy Labels: A Measurement and Developer Perspective on
Google's Data Safety Section
- URL: http://arxiv.org/abs/2306.08111v1
- Date: Tue, 13 Jun 2023 20:01:08 GMT
- Title: Unpacking Privacy Labels: A Measurement and Developer Perspective on
Google's Data Safety Section
- Authors: Rishabh Khandelwal, Asmit Nayak, Paul Chung, and Kassem Fawaz
- Abstract summary: We present a comprehensive analysis of Google's Data Safety Section (DSS) using both quantitative and qualitative methods.
We find that there are internal inconsistencies within the reported practices.
Next, we conduct a longitudinal study of DSS to explore how the reported practices evolve over time.
- Score: 23.183167991569352
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Google has mandated developers to use Data Safety Sections (DSS) to increase
transparency in data collection and sharing practices. In this paper, we
present a comprehensive analysis of Google's Data Safety Section (DSS) using
both quantitative and qualitative methods. We conduct the first large-scale
measurement study of DSS using apps from the Android Play Store (n=1.1M). We find
that there are internal inconsistencies within the reported practices. We also
find trends of both over- and under-reporting of practices in the DSSs.
Next, we conduct a longitudinal study of DSS to explore how the reported
practices evolve over time, and find that the developers are still adjusting
their practices. To contextualize these findings, we conduct a developer study,
uncovering the process that app developers undergo when working with DSS. We
highlight the challenges faced and strategies employed by developers for DSS
submission, and the factors contributing to changes in the DSS. Our research
contributes valuable insights into the complexities of implementing and
maintaining privacy labels, underlining the need for better resources, tools,
and guidelines to aid developers. This understanding is crucial as the accuracy
and reliability of privacy labels directly impact their effectiveness.
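The internal inconsistencies the paper reports can be illustrated with a small consistency check over a DSS entry. This is a hypothetical sketch; the field names and rules below are illustrative and do not reflect Google's actual DSS schema or the paper's analysis pipeline.

```python
# Hypothetical sketch: flag internally inconsistent Data Safety Sections.
# Field names are illustrative; Google's actual DSS schema differs.

def find_inconsistencies(dss):
    """Return human-readable inconsistency flags for one app's DSS entry."""
    flags = []
    # An app that declares "no data collected" yet still lists collected types.
    if dss.get("no_data_collected") and dss.get("data_collected"):
        flags.append("claims no collection but lists collected data types")
    # Data types marked as shared would normally also appear as collected.
    shared_only = set(dss.get("data_shared", [])) - set(dss.get("data_collected", []))
    if shared_only:
        flags.append(f"shared but never collected: {sorted(shared_only)}")
    return flags

example = {
    "no_data_collected": True,
    "data_collected": ["location"],
    "data_shared": ["contacts"],
}
print(find_inconsistencies(example))
```

Run at scale over a corpus of scraped DSS entries, checks of this shape are one way a measurement study can surface self-contradictory declarations without ground truth about the app's real behavior.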
Related papers
- Challenges in Android Data Disclosure: An Empirical Study [7.011407021531348]
This paper employs an empirical approach to understand developers' experience with Google Play Store's Data Safety Section (DSS) form.
We first survey 41 Android developers to understand how they categorize privacy-related data into DSS categories.
We complement the survey with an analysis of 172 online developer discussions, capturing the perspectives of 642 additional developers.
arXiv Detail & Related papers (2026-01-28T10:33:38Z)
- DRBench: A Realistic Benchmark for Enterprise Deep Research [81.49694432639406]
DRBench is a benchmark for evaluating AI agents on complex, open-ended deep research tasks in enterprise settings.
We release 15 deep research tasks across 10 domains, such as Sales, Cybersecurity, and Compliance.
arXiv Detail & Related papers (2025-09-30T18:47:20Z)
- Visualizing Privacy-Relevant Data Flows in Android Applications [5.367301239087641]
SliceViz is a tool that analyzes an Android app by slicing all privacy-relevant data sources detected in source code on the back-end.
We conducted a user study with 12 participants demonstrating that SliceViz effectively aids developers in identifying privacy-relevant properties in Android apps.
arXiv Detail & Related papers (2025-03-20T18:47:02Z)
- SoK: Usability Studies in Differential Privacy [3.2703125808871247]
Differential Privacy (DP) has emerged as a pivotal approach for safeguarding individual privacy in data analysis.
This paper presents a comprehensive systematization of existing research studies around the usability of DP.
arXiv Detail & Related papers (2024-12-22T02:21:57Z)
- PASTA-4-PHT: A Pipeline for Automated Security and Technical Audits for the Personal Health Train [34.203290179252555]
This work discusses a PHT-aligned security and audit pipeline inspired by DevSecOps principles.
We introduce vulnerabilities into a PHT and apply our pipeline to five real-world PHTs, which have been utilised in real-world studies.
Ultimately, our work contributes to an increased security and overall transparency of data processing activities within the PHT framework.
arXiv Detail & Related papers (2024-12-02T08:43:40Z)
- A Large-Scale Privacy Assessment of Android Third-Party SDKs [17.245330733308375]
Third-party Software Development Kits (SDKs) are widely adopted in Android app development.
This convenience raises substantial concerns about unauthorized access to users' privacy-sensitive information.
Our study offers a targeted analysis of user privacy protection among Android third-party SDKs.
arXiv Detail & Related papers (2024-09-16T15:44:43Z)
- LLM-PBE: Assessing Data Privacy in Large Language Models [111.58198436835036]
Large Language Models (LLMs) have become integral to numerous domains, significantly advancing applications in data management, mining, and analysis.
Despite the critical nature of this issue, no existing literature offers a comprehensive assessment of data privacy risks in LLMs.
Our paper introduces LLM-PBE, a toolkit crafted specifically for the systematic evaluation of data privacy risks in LLMs.
arXiv Detail & Related papers (2024-08-23T01:37:29Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Enhancing Reasoning Capacity of SLM using Cognitive Enhancement [0.0]
Large Language Models (LLMs) have been applied to automate cyber security activities and processes including cyber investigation and digital forensics.
This paper aims to mitigate performance reduction through the integration of cognitive strategies that humans use for problem-solving.
arXiv Detail & Related papers (2024-04-01T14:29:58Z)
- Toward an Android Static Analysis Approach for Data Protection [7.785051236155595]
This paper motivates the need to explain data protection in Android apps.
The data analysis will recognize personal data sources in the source code.
App developers can then address key questions about data manipulation and the data derived from it.
arXiv Detail & Related papers (2024-02-12T18:52:39Z)
- What Can Self-Admitted Technical Debt Tell Us About Security? A Mixed-Methods Study [6.286506087629511]
Self-Admitted Technical Debt (SATD) can be deemed a dreadful source of information on potentially exploitable vulnerabilities and security flaws.
This work investigates the security implications of SATD from a technical and developer-centred perspective.
arXiv Detail & Related papers (2024-01-23T13:48:49Z)
- Robust Recommender System: A Survey and Future Directions [58.87305602959857]
We first present a taxonomy to organize current techniques for withstanding malicious attacks and natural noise.
We then explore state-of-the-art methods in each category, including fraudster detection, adversarial training, and certifiable robust training for defending against malicious attacks.
We discuss robustness across varying recommendation scenarios and its interplay with other properties like accuracy, interpretability, privacy, and fairness.
arXiv Detail & Related papers (2023-09-05T08:58:46Z)
- SoK: Privacy-Preserving Data Synthesis [72.92263073534899]
This paper focuses on privacy-preserving data synthesis (PPDS) by providing a comprehensive overview, analysis, and discussion of the field.
We put forth a master recipe that unifies two prominent strands of research in PPDS: statistical methods and deep learning (DL)-based methods.
arXiv Detail & Related papers (2023-07-05T08:29:31Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Navigating the challenges in creating complex data systems: a development philosophy [0.0]
Perverse incentives and a lack of widespread software engineering skills are among many root causes.
We advocate two key development philosophies, namely that one should incrementally grow -- not biphasically plan and build -- DSSs.
arXiv Detail & Related papers (2022-10-21T14:28:53Z)
- Black-box Dataset Ownership Verification via Backdoor Watermarking [67.69308278379957]
We formulate the protection of released datasets as verifying whether they are adopted for training a (suspicious) third-party model.
We propose to embed external patterns via backdoor watermarking for the ownership verification to protect them.
Specifically, we exploit poison-only backdoor attacks ($e.g.$, BadNets) for dataset watermarking and design a hypothesis-test-guided method for dataset verification.
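The hypothesis-test-guided verification idea can be sketched numerically: if a suspicious model predicts the backdoor's target class on trigger-stamped inputs far more often than chance, the dataset owner rejects the null hypothesis that the model was trained without the watermarked data. This toy sketch is an assumption-laden illustration of that statistical step, not the paper's exact procedure.

```python
import math

# Toy sketch of hypothesis-test-guided dataset ownership verification
# (illustrative only, not the paper's exact method). H0: the suspicious
# model hits the backdoor target class on triggered inputs only at the
# chance rate p0 = 1 / n_classes.

def binom_tail(k, n, p0):
    """One-sided p-value P[X >= k] for X ~ Binomial(n, p0) under H0."""
    return sum(math.comb(n, i) * p0**i * (1 - p0)**(n - i)
               for i in range(k, n + 1))

def verify_ownership(target_hits, n_trials, n_classes=10, alpha=0.01):
    """Claim the dataset was used when the tail probability is below alpha."""
    p_value = binom_tail(target_hits, n_trials, 1.0 / n_classes)
    return p_value < alpha, p_value

# 87 of 100 trigger-stamped inputs land on the target class, against a
# 10% chance rate: strong evidence the watermark was learned.
flagged, p_value = verify_ownership(87, 100)
print(flagged, p_value)
```

A benign model, which never saw the poisoned samples, should hit the target class at roughly the chance rate, so the test fails to reject and no ownership claim is made.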
arXiv Detail & Related papers (2022-08-04T05:32:20Z)
- Data Mining with Big Data in Intrusion Detection Systems: A Systematic Literature Review [68.15472610671748]
Cloud computing has become a powerful and indispensable technology for complex, high performance and scalable computation.
The rapid rate and volume of data creation has begun to pose significant challenges for data management and security.
The design and deployment of intrusion detection systems (IDS) in the big data setting has, therefore, become a topic of importance.
arXiv Detail & Related papers (2020-05-23T20:57:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.