Intent-Aware Permission Architecture: A Model for Rethinking Informed
Consent for Android Apps
- URL: http://arxiv.org/abs/2202.06995v1
- Date: Mon, 14 Feb 2022 19:22:44 GMT
- Title: Intent-Aware Permission Architecture: A Model for Rethinking Informed
Consent for Android Apps
- Authors: Md Rashedur Rahman, Elizabeth Miller, Moinul Hossain and Aisha
Ali-Gombe
- Abstract summary: This paper proposes an unambiguous, informed consent process that provides developers with a standardized method for declaring Intent.
The overarching objective of this model is to ensure end-users are adequately informed before making decisions on their data.
- Score: 3.383670923637874
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As data privacy continues to be a crucial human-rights concern, as
recognized by the UN, regulatory agencies have demanded that developers obtain
user permission before accessing user-sensitive data. Mainly through privacy
policy statements, developers fulfill their legal requirement to keep users
abreast of requests for their data. In addition, platforms such as Android
enforce explicit permission requests through the permission model. Nonetheless,
recent research has shown that service providers rarely make full disclosure
when requesting data in these statements. Neither is the current permission
model designed to provide adequate informed consent. Users often have no clear
understanding of the reason for, or scope of usage of, a data request. This paper
proposes an unambiguous, informed consent process that provides developers with
a standardized method for declaring Intent. Our proposed Intent-aware
permission architecture extends the current Android permission model with a
precise mechanism for full disclosure of purpose and scope limitation. Its
design is based on an ontology study of data-request purposes. The
overarching objective of this model is to ensure end-users are adequately
informed before making decisions about their data. Additionally, this model has
the potential to improve trust between end-users and developers.
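The abstract's core idea, binding each permission request to a declared purpose and a scope limitation so the consent prompt can disclose both, can be illustrated with a minimal sketch. All class, field, and function names below are invented for illustration; the paper defines the actual architecture at the Android-platform level:

```python
from dataclasses import dataclass

# Hypothetical model of an intent-aware permission declaration:
# each requested permission is bound to a declared purpose (drawn
# from a purpose ontology) and a scope limitation.
@dataclass(frozen=True)
class IntentDeclaration:
    permission: str   # e.g. "android.permission.ACCESS_FINE_LOCATION"
    purpose: str      # why the data is needed
    scope: str        # how far the data may travel / how long it is kept

def consent_prompt(declarations, permission):
    """Build an informed-consent prompt for a runtime data request.

    A request with no matching declaration returns None (deny),
    mirroring the model's goal of full disclosure before access.
    """
    for d in declarations:
        if d.permission == permission:
            return (f"This app requests {permission} "
                    f"for the purpose of: {d.purpose}. "
                    f"Scope limitation: {d.scope}.")
    return None  # undeclared intent: deny and surface to the user

declared = [
    IntentDeclaration(
        permission="android.permission.ACCESS_FINE_LOCATION",
        purpose="show nearby stores on a map",
        scope="used on-device only; never shared with third parties",
    ),
]

prompt = consent_prompt(declared, "android.permission.ACCESS_FINE_LOCATION")
print(prompt)
```

Under this sketch, an undeclared request such as `android.permission.READ_CONTACTS` yields no prompt and is denied, which is one plausible reading of how scope limitation could be enforced.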
Related papers
- A Comprehensive Analysis of Evolving Permission Usage in Android Apps: Trends, Threats, and Ecosystem Insights [9.172402449557264]
Despite official Android platform documentation on proper permission usage, there are still many cases of permission abuse. This study provides a comprehensive analysis of the Android permission landscape. By distinguishing between benign and malicious applications, we uncover developers' evolving strategies.
arXiv Detail & Related papers (2025-08-04T02:54:10Z)
- Controlling What You Share: Assessing Language Model Adherence to Privacy Preferences [80.63946798650653]
We explore how users can stay in control of their data by using privacy profiles. We build a framework where a local model uses these instructions to rewrite queries. To support this research, we introduce a multilingual dataset of real user queries to mark private content.
arXiv Detail & Related papers (2025-07-07T18:22:55Z)
- Overcoming the hurdle of legal expertise: A reusable model for smartwatch privacy policies [5.2578340028226425]
Up to now, no conceptual model exists covering privacy statements from different smartwatch manufacturers that is reusable for developers. This paper introduces such a conceptual model for privacy policies of smartwatches and shows its use in a model-driven software engineering approach to create a platform for data visualization.
arXiv Detail & Related papers (2025-05-08T13:09:12Z)
- Personalized Language Model Learning on Text Data Without User Identifiers [79.36212347601223]
We propose to let each mobile device maintain a user-specific distribution to dynamically generate user embeddings.
To prevent the cloud from tracking users via uploaded embeddings, the local distributions of different users should be derived from a linearly dependent space.
Evaluation on both public and industrial datasets reveals a remarkable improvement in accuracy from incorporating anonymous user embeddings.
arXiv Detail & Related papers (2025-01-10T15:46:19Z)
- Do Android App Developers Accurately Report Collection of Privacy-Related Data? [5.863391019411233]
The European Union's General Data Protection Regulation requires vendors to faithfully disclose the data their apps collect.
Many Android apps use third-party code for which the same information is not readily available.
We first propose a multi-layered definition of privacy-related data to correctly report collection in Android apps.
We then create a dataset of privacy-sensitive data classes that may be used as input by an Android app.
arXiv Detail & Related papers (2024-09-06T10:05:45Z)
- Are LLM-based methods good enough for detecting unfair terms of service? [67.49487557224415]
Large language models (LLMs) are good at parsing long text-based documents.
We build a dataset consisting of 12 questions applied individually to a set of privacy policies.
Some open-source models are able to provide a higher accuracy compared to some commercial models.
arXiv Detail & Related papers (2024-08-24T09:26:59Z)
- IDPFilter: Mitigating Interdependent Privacy Issues in Third-Party Apps [0.30693357740321775]
Third-party apps have increased concerns about interdependent privacy (IDP).
This paper provides a comprehensive investigation into the previously underinvestigated IDP issues of third-party apps.
We propose IDPFilter, a platform-agnostic API that enables application providers to minimize collateral information collection.
arXiv Detail & Related papers (2024-05-02T16:02:13Z)
- Lessons in VCR Repair: Compliance of Android App Developers with the California Consumer Privacy Act (CCPA) [4.429726534947266]
The California Consumer Privacy Act (CCPA) provides California residents with a range of enhanced privacy protections and rights.
Our research investigated the extent to which Android app developers comply with the provisions of the CCPA.
We compare the actual network traffic of 109 apps that we believe must comply with the CCPA to the data that apps state they collect in their privacy policies.
arXiv Detail & Related papers (2023-04-03T13:02:49Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
- I Prefer not to Say: Protecting User Consent in Models with Optional Personal Data [20.238432971718524]
We show that the decision not to share data can be considered as information in itself that should be protected to respect users' privacy.
We formalize protection requirements for models which only use the information for which active user consent was obtained.
arXiv Detail & Related papers (2022-10-25T12:16:03Z)
- Explainable Abuse Detection as Intent Classification and Slot Filling [66.80201541759409]
We introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone.
We show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions.
arXiv Detail & Related papers (2022-10-06T03:33:30Z)
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [56.0250919557652]
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
Previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points.
We introduce a new threat model based on minimal but malicious modifications of the shared model architecture.
arXiv Detail & Related papers (2021-10-25T15:52:06Z)
- A Fait Accompli? An Empirical Study into the Absence of Consent to Third-Party Tracking in Android Apps [27.58278290929534]
Third-party tracking allows companies to collect users' behavioural data and track their activity across digital devices.
This can put deep insights into users' private lives into the hands of strangers, and often happens without users' awareness or explicit consent.
This paper investigates whether and to what extent consent is implemented in mobile apps.
arXiv Detail & Related papers (2021-06-17T11:44:49Z)
- Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release [52.504589728136615]
We develop a data poisoning method by which publicly released data can be minimally modified to prevent others from training models on it.
We demonstrate the success of our approach on ImageNet classification and on facial recognition.
arXiv Detail & Related papers (2021-02-16T19:12:34Z)
- Second layer data governance for permissioned blockchains: the privacy management challenge [58.720142291102135]
In pandemic situations, such as the COVID-19 and Ebola outbreaks, sharing health data is crucial to avoid massive infection and decrease the number of deaths.
In this sense, permissioned blockchain technology emerges to empower users with their rights, providing data ownership, transparency, and security through an immutable, unified, and distributed database ruled by smart contracts.
arXiv Detail & Related papers (2020-10-22T13:19:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.