Is It a Trap? A Large-scale Empirical Study And Comprehensive Assessment
of Online Automated Privacy Policy Generators for Mobile Apps
- URL: http://arxiv.org/abs/2305.03271v2
- Date: Sat, 23 Sep 2023 11:05:23 GMT
- Authors: Shidong Pan, Dawen Zhang, Mark Staples, Zhenchang Xing, Jieshan Chen,
Xiwei Xu, and James Hoang
- Abstract summary: Automated Privacy Policy Generators can create privacy policies for mobile apps.
Nearly 20.1% of privacy policies could be generated by existing APPGs.
App developers must carefully select and use the appropriate APPGs to avoid potential pitfalls.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy regulations protect and promote the privacy of individuals by
requiring mobile apps to provide a privacy policy that explains what personal
information is collected and how these apps process this information. However,
developers often do not have sufficient legal knowledge to create such privacy
policies. Online Automated Privacy Policy Generators (APPGs) can create privacy
policies, but their quality and other characteristics can vary. In this paper,
we conduct the first large-scale empirical study and comprehensive assessment
of APPGs for mobile apps. Specifically, we scrutinize 10 APPGs on multiple
dimensions. We further perform the market penetration analysis by collecting
46,472 Android app privacy policies from Google Play, discovering that nearly
20.1% of privacy policies could be generated by existing APPGs. Lastly, we
point out that generated policies in our study do not fully comply with GDPR,
CCPA, or LGPD. In summary, app developers must carefully select and use the
appropriate APPGs to avoid potential pitfalls.
Related papers
- Interactive GDPR-Compliant Privacy Policy Generation for Software Applications [6.189770781546807]
To use software applications, users are sometimes requested to provide their personal information.
As privacy has become a significant concern, many protection regulations exist worldwide.
We propose an approach that generates comprehensive and compliant privacy policies.
arXiv Detail & Related papers (2024-10-04T01:22:16Z) - PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - Toward the Cure of Privacy Policy Reading Phobia: Automated Generation
of Privacy Nutrition Labels From Privacy Policies [19.180437130066323]
We propose the first framework that can automatically generate privacy nutrition labels from privacy policies.
Based on our ground-truth applications from the Google Play Data Safety Report, our framework achieves a 0.75 F1-score in generating first-party data collection practices.
We also analyse the inconsistencies between the ground truth and curated privacy nutrition labels on the market, and our framework can detect 90.1% of under-claim issues.
arXiv Detail & Related papers (2023-06-19T13:33:44Z) - ATLAS: Automatically Detecting Discrepancies Between Privacy Policies
and Privacy Labels [2.457872341625575]
We introduce the Automated Privacy Label Analysis System (ATLAS).
ATLAS identifies possible discrepancies between mobile app privacy policies and their privacy labels.
We find that, on average, apps have 5.32 such potential compliance issues.
arXiv Detail & Related papers (2023-05-24T05:27:22Z) - Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this trust problem.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z) - Analysis of Longitudinal Changes in Privacy Behavior of Android
Applications [79.71330613821037]
In this paper, we examine the trends in how Android apps have changed over time with respect to privacy.
We examine the adoption of HTTPS, whether apps scan the device for other installed apps, the use of permissions for privacy-sensitive data, and the use of unique identifiers.
We find that privacy-related behavior has improved with time as apps continue to receive updates, and that the third-party libraries used by apps are responsible for more of the privacy issues.
arXiv Detail & Related papers (2021-12-28T16:21:31Z) - Tracking in apps' privacy policies [3.8073142980733]
We analysed privacy policies from 26,910 mobile apps in May 2019.
We contacted the developers of 52 apps that did not provide a privacy policy and asked them about their data practices.
Despite being legally required to answer such queries, 12 developers (23%) failed to respond.
arXiv Detail & Related papers (2021-11-15T16:03:59Z) - Automated Detection of GDPR Disclosure Requirements in Privacy Policies
using Deep Active Learning [3.659023646021795]
Most privacy policies are verbose, full of jargon, and vaguely describe companies' data practices and users' rights.
In this paper, we create a privacy policy dataset of 1,080 websites labeled with the 18 requirements.
We develop a Convolutional Neural Network (CNN) based model which can classify the privacy policies with an accuracy of 89.2%.
arXiv Detail & Related papers (2021-11-08T01:28:27Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantee.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
We design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.