Title: A New Hope: Contextual Privacy Policies for Mobile Applications and An
Approach Toward Automated Generation
Authors: Shidong Pan, Zhen Tao, Thong Hoang, Dawen Zhang, Tianshi Li, Zhenchang
Xing, Sherry Xu, Mark Staples, Thierry Rakotoarivelo, David Lo
Abstract summary: The aim of contextual privacy policies (CPPs) is to fragment privacy policies into concise snippets, displaying them only within the corresponding contexts of the application's graphical user interfaces (GUIs).
In this paper, we first formulate CPPs in the mobile application scenario, and then present a novel multimodal framework, named SeePrivacy, specifically designed to automatically generate CPPs for mobile applications.
A human evaluation shows that 77% of the extracted privacy policy segments were perceived as well-aligned with the detected contexts.
Abstract: Privacy policies have emerged as the predominant approach to conveying
privacy notices to mobile application users. In an effort to enhance both
readability and user engagement, the concept of contextual privacy policies
(CPPs) has been proposed by researchers. The aim of CPPs is to fragment privacy
policies into concise snippets, displaying them only within the corresponding
contexts within the application's graphical user interfaces (GUIs). In this
paper, we first formulate CPPs in the mobile application scenario, and then present
a novel multimodal framework, named SeePrivacy, specifically designed to
automatically generate CPPs for mobile applications. This method uniquely
integrates vision-based GUI understanding with privacy policy analysis,
achieving 0.88 precision and 0.90 recall to detect contexts, as well as 0.98
precision and 0.96 recall in extracting corresponding policy segments. A human
evaluation shows that 77% of the extracted privacy policy segments were
perceived as well-aligned with the detected contexts. These findings suggest
that SeePrivacy could serve as a significant tool for bolstering user
interaction with, and understanding of, privacy policies. Furthermore, our
solution has the potential to make privacy notices more accessible and
inclusive, thus appealing to a broader demographic. A demonstration of our work
can be accessed at https://cpp4app.github.io/SeePrivacy/
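The detection and extraction scores reported above (precision, recall, and the related F1 measure) follow the standard definitions over true/false positives and false negatives. As a quick illustrative sketch, with hypothetical counts not taken from the paper:

```python
# Standard precision/recall/F1 from confusion counts.
# The counts below are hypothetical, for illustration only.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Return (precision, recall) given confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: 88 correctly detected contexts, 12 spurious detections,
# and 10 missed contexts give precision 0.88 and recall ~0.90.
p, r = precision_recall(tp=88, fp=12, fn=10)
print(round(p, 2), round(r, 2), round(f1_score(p, r), 2))
```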
Related papers
PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771] PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, respectively, even when prompted with privacy-enhancing instructions. arXiv (2024-08-29)
Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233] Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users. arXiv (2024-06-20)
PolicyGPT: Automated Analysis of Privacy Policies with Large Language
Models [41.969546784168905] In practice, users tend to click the Agree button directly rather than reading privacy policies carefully.
This practice exposes users to risks of privacy leakage and legal issues.
Recently, the advent of Large Language Models (LLMs) such as ChatGPT and GPT-4 has opened new possibilities for text analysis. arXiv (2023-09-19)
SeePrivacy: Automated Contextual Privacy Policy Generation for Mobile
Applications [21.186902172367173] SeePrivacy is designed to automatically generate contextual privacy policies for mobile apps.
Our method synergistically combines mobile GUI understanding and privacy policy document analysis.
96% of the retrieved policy segments can be correctly matched with their contexts. arXiv (2023-07-04)
Toward the Cure of Privacy Policy Reading Phobia: Automated Generation
of Privacy Nutrition Labels From Privacy Policies [19.180437130066323] We propose the first framework that can automatically generate privacy nutrition labels from privacy policies.
Based on our ground truth applications about the Data Safety Report from the Google Play app store, our framework achieves a 0.75 F1-score on generating first-party data collection practices.
We also analyse the inconsistencies between ground truth and curated privacy nutrition labels on the market, and our framework can detect 90.1% of under-claim issues. arXiv (2023-06-19)
Is It a Trap? A Large-scale Empirical Study And Comprehensive Assessment
of Online Automated Privacy Policy Generators for Mobile Apps [15.181098379077344] Automated Privacy Policy Generators can create privacy policies for mobile apps.
Nearly 20.1% of privacy policies could be generated by existing APPGs.
App developers must carefully select and use the appropriate APPGs to avoid potential pitfalls. arXiv (2023-05-05)
SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409] Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels. arXiv (2022-03-29)
Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446] We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint DP (JDP) and local DP (LDP) by leveraging the shuffle model of privacy while preserving local privacy. arXiv (2021-12-11)
Is Downloading this App Consistent with my Values? Conceptualizing a
Value-Centered Privacy Assistant [0.0] I propose that data privacy decisions can be understood as an expression of user values.
I further propose the creation of a value-centered privacy assistant (VcPA). arXiv (2021-06-23)
Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817] We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee. arXiv (2020-09-18)
PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014] We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantee.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
Finally, we design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy. arXiv (2020-05-04)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.