Advancing Android Privacy Assessments with Automation
- URL: http://arxiv.org/abs/2409.06564v1
- Date: Tue, 10 Sep 2024 14:56:51 GMT
- Title: Advancing Android Privacy Assessments with Automation
- Authors: Mugdha Khedkar, Michael Schlichtig, Eric Bodden
- Abstract summary: This paper motivates the need for an automated approach that enhances understanding of data protection in Android apps.
We propose the Assessor View, a tool designed to bridge the knowledge gap between these parties, facilitating more effective privacy assessments of Android applications.
- Score: 5.863391019411233
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Android apps collecting data from users must comply with legal frameworks to ensure data protection. This requirement has become even more important since the implementation of the General Data Protection Regulation (GDPR) by the European Union in 2018. Moreover, with the proposed Cyber Resilience Act on the horizon, stakeholders will soon need to assess software against even more stringent security and privacy standards. Effective privacy assessments require collaboration among groups with diverse expertise to function effectively as a cohesive unit. This paper motivates the need for an automated approach that enhances understanding of data protection in Android apps and improves communication between the various parties involved in privacy assessments. We propose the Assessor View, a tool designed to bridge the knowledge gap between these parties, facilitating more effective privacy assessments of Android applications.
Related papers
- Balancing Innovation and Privacy: Data Security Strategies in Natural Language Processing Applications [3.380276187928269]
This research addresses privacy protection in Natural Language Processing (NLP) by introducing a novel algorithm based on differential privacy.
By adding calibrated random noise through a differential privacy mechanism, the model protects privacy while preserving the accuracy and reliability of data analysis results.
The proposed algorithm's efficacy is demonstrated through performance metrics such as accuracy (0.89), precision (0.85), and recall (0.88).
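The noise-addition step described in this abstract is commonly realized with the Laplace mechanism. A minimal sketch follows; it is not the paper's actual algorithm, and `laplace_noise`/`dp_mean` are illustrative names, not an established API:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. Exponential(mean=scale) draws
    # is Laplace(0, scale)-distributed; avoids inverse-CDF edge cases.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_mean(values: list[float], lower: float, upper: float, epsilon: float) -> float:
    # Clip each record into [lower, upper] so that changing one record
    # shifts the mean by at most (upper - lower) / n, the L1 sensitivity.
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    # Laplace noise with scale = sensitivity / epsilon gives epsilon-DP.
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)
```

Smaller `epsilon` means stronger privacy but larger noise; the clipping bounds trade bias against sensitivity.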
arXiv Detail & Related papers (2024-10-11T06:05:10Z) - Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning [54.30994558765057]
The study pioneers a comprehensive privacy protection framework that safeguards image data privacy concurrently during data sharing and model publication.
We propose an interactive image privacy protection framework that utilizes generative machine learning models to modify image information at the attribute level.
Within this framework, we instantiate two modules: a differential privacy diffusion model for protecting attribute information in images and a feature unlearning algorithm for efficient updates of the trained model on the revised image dataset.
arXiv Detail & Related papers (2024-09-05T07:55:55Z) - Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
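To illustrate how user-level DP differs from record-level DP, here is a hypothetical sketch that bounds each user's entire contribution before adding noise (assumes non-negative contributions; the function names are illustrative, not from the paper):

```python
import random

def laplace_noise(scale: float) -> float:
    # Difference of two i.i.d. Exponential(mean=scale) draws ~ Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def user_level_dp_sum(per_user: dict[str, list[float]], cap: float, epsilon: float) -> float:
    # Clip each user's *total* contribution to `cap`, so adding or removing
    # one entire user (not just one record) shifts the sum by at most `cap`.
    clipped_total = sum(min(sum(vals), cap) for vals in per_user.values())
    # Laplace noise scaled to the per-user sensitivity gives epsilon user-level DP.
    return clipped_total + laplace_noise(cap / epsilon)
```

Clipping per user rather than per record is what makes the guarantee uniform across users, regardless of how many records each contributes.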
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - AI-Driven Anonymization: Protecting Personal Data Privacy While Leveraging Machine Learning [5.015409508372732]
This paper focuses on personal data privacy protection and the promotion of anonymity as its core research objectives.
It achieves personal data privacy protection and detection by applying a differential privacy algorithm within machine learning.
The paper also addresses existing challenges in machine learning related to privacy and personal data protection, offers improvement suggestions, and analyzes factors impacting datasets to enable timely personal data privacy detection and protection.
arXiv Detail & Related papers (2024-02-27T04:12:25Z) - Experts-in-the-Loop: Establishing an Effective Workflow in Crafting Privacy Q&A [0.0]
We propose a dynamic workflow for transforming privacy policies into privacy question-and-answer (Q&A) pairs.
Thereby, we facilitate interdisciplinary collaboration between legal experts and conversation designers.
Our proposed workflow underscores continuous improvement and monitoring throughout the construction of privacy Q&As.
arXiv Detail & Related papers (2023-11-18T20:32:59Z) - Assessing Mobile Application Privacy: A Quantitative Framework for Privacy Measurement [0.0]
This work aims to contribute to a digital environment that prioritizes privacy, promotes informed decision-making, and endorses privacy-preserving design principles.
The purpose of this framework is to systematically evaluate the level of privacy risk when using particular Android applications.
arXiv Detail & Related papers (2023-10-31T18:12:19Z) - Advancing Differential Privacy: Where We Are Now and Future Directions for Real-World Deployment [100.1798289103163]
We present a detailed review of current practices and state-of-the-art methodologies in the field of differential privacy (DP).
Key points and high-level contents of the article originated from the discussions at "Differential Privacy (DP): Challenges Towards the Next Frontier".
This article aims to provide a reference point for the algorithmic and design decisions within the realm of privacy, highlighting important challenges and potential research directions.
arXiv Detail & Related papers (2023-04-14T05:29:18Z) - Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off, while preserving a higher privacy level than the state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z) - PrivHAR: Recognizing Human Actions From Privacy-preserving Lens [58.23806385216332]
We propose an optimization framework to provide robust visual privacy protection along the human action recognition pipeline.
Our framework parameterizes the camera lens to successfully degrade the quality of the videos to inhibit privacy attributes and protect against adversarial attacks.
arXiv Detail & Related papers (2022-06-08T13:43:29Z) - An Example of Privacy and Data Protection Best Practices for Biometrics Data Processing in Border Control: Lesson Learned from SMILE [0.9442139459221784]
Misuse of data, compromise of individuals' privacy, and/or unauthorized processing of data may be irreversible.
This is partly due to the lack of methods and guidance for the integration of data protection and privacy by design in the system development process.
We present an example of privacy and data protection best practices to provide more guidance for data controllers and developers.
arXiv Detail & Related papers (2022-01-10T15:34:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.