Collection, usage and privacy of mobility data in the enterprise and public administrations
- URL: http://arxiv.org/abs/2407.03732v1
- Date: Thu, 4 Jul 2024 08:29:27 GMT
- Title: Collection, usage and privacy of mobility data in the enterprise and public administrations
- Authors: Alexandra Kapp
- Abstract summary: Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
- Score: 55.2480439325792
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human mobility data is a crucial resource for urban mobility management, but it does not come without personal reference. The implementation of security measures such as anonymization is thus needed to protect individuals' privacy. Often, a trade-off arises as such techniques potentially decrease the utility of the data and limit its use. While much research on anonymization techniques exists, there is little information on the actual implementations by practitioners, especially outside the big tech context. Within our study, we conducted expert interviews to gain insights into practices in the field. We categorize purposes, data sources, analysis, and modeling tasks to provide a profound understanding of the context such data is used in. We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy. We provide groundwork for further research on practice-oriented research by identifying privacy needs of practitioners and extracting relevant mobility characteristics for future standardized evaluations of privacy-enhancing methods.
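The abstract contrasts the ad-hoc anonymization found in practice with differential privacy (DP). As a point of reference for what a DP release of mobility data can look like, here is a minimal sketch using the Laplace mechanism on aggregated origin-destination counts; the data layout, epsilon value, and sensitivity assumption are illustrative, not taken from the paper.

```python
import numpy as np

def dp_release_counts(counts, epsilon, sensitivity=1.0):
    """Release a vector of counts under epsilon-DP via the Laplace mechanism.

    `sensitivity` is the L1 sensitivity of the count vector; it is 1 only
    if each person contributes at most one trip.
    """
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=len(counts))
    # Rounding and clamping are post-processing, so the DP guarantee still holds.
    return np.maximum(np.round(counts + noise), 0).astype(int)

# Hypothetical hourly origin-destination trip counts.
od_counts = np.array([120, 45, 3, 0, 87])
print(dp_release_counts(od_counts, epsilon=1.0))
```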
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
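A minimal sketch of the selective idea described above, under assumptions of my own: coordinates are noised only inside a predefined sensitive bounding box, and points outside the mask pass through untouched. Note that noising raw coordinates like this is closer to geo-indistinguishability than to classical DP; the paper's actual mechanism and mask definition may differ.

```python
import numpy as np

# Hypothetical sensitive region: (lat_min, lat_max, lon_min, lon_max).
SENSITIVE_BOX = (52.50, 52.52, 13.40, 13.42)

def in_sensitive_region(lat, lon, box=SENSITIVE_BOX):
    lat_min, lat_max, lon_min, lon_max = box
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def masked_perturb(trajectory, scale=0.01):
    """Perturb only the trajectory points that fall inside the mask."""
    out = []
    for lat, lon, t in trajectory:
        if in_sensitive_region(lat, lon):
            lat += np.random.laplace(0.0, scale)
            lon += np.random.laplace(0.0, scale)
        out.append((lat, lon, t))
    return out
```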
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- A Summary of Privacy-Preserving Data Publishing in the Local Setting [0.6749750044497732]
Statistical Disclosure Control aims to minimize the risk of exposing confidential information by de-identifying it.
We outline the current privacy-preserving techniques employed in microdata de-identification, delve into privacy measures tailored for various disclosure scenarios, and assess metrics for information loss and predictive performance.
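In the local setting referenced above, each respondent perturbs their own record before it reaches any curator. The classic instance is randomized response; this sketch for a single binary attribute is a textbook illustration, not something drawn from the survey itself.

```python
import math
import random

def randomized_response(value: bool, epsilon: float) -> bool:
    """Report a binary attribute under epsilon-local-DP: answer truthfully
    with probability e^eps / (e^eps + 1), otherwise flip the answer."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return value if random.random() < p_truth else not value

def estimate_proportion(reports, epsilon):
    """Debias the noisy reports into an unbiased estimate of the true rate."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)
```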
arXiv Detail & Related papers (2023-12-19T04:23:23Z)
- A Cautionary Tale: On the Role of Reference Data in Empirical Privacy Defenses [12.34501903200183]
We propose a baseline defense that makes the utility-privacy trade-off with respect to both training and reference data easy to understand.
Our experiments show that, surprisingly, it outperforms the most well-studied and current state-of-the-art empirical privacy defenses.
arXiv Detail & Related papers (2023-10-18T17:07:07Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
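As a toy illustration of what tuning with paired examples could look like, the record below contrasts a privacy-respecting completion with a leaking one; the field names and texts are hypothetical, not PrivacyMind's actual data format.

```python
# One hypothetical contrastive instruction-tuning record: the model is
# rewarded for the positive output and penalized for the negative one.
example = {
    "instruction": "Summarize the customer complaint below.",
    "input": "Jane Doe (card ending 6789) reports a duplicate charge...",
    "positive_output": "A customer reports a duplicate charge on their card.",
    "negative_output": "Jane Doe, card ending 6789, reports a duplicate charge.",
}
```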
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - UN Handbook on Privacy-Preserving Computation Techniques [14.63213847614646]
This paper describes privacy-preserving approaches for the statistical analysis of sensitive data.
The information in this document is intended for use by statisticians and data scientists, data curators and architects, IT specialists, and security and information assurance specialists.
arXiv Detail & Related papers (2023-01-15T19:43:40Z) - Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z) - Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
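To make the governance point concrete, here is a minimal FedAvg-style sketch in which only model weights leave each client; the linear model, single local step, and client data are illustrative assumptions. The entry's warning is that even these weight updates can leak information, so the sketch is not privacy-preserving in any formal sense.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One local gradient step for linear least squares; raw data stays put."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(client_weights, client_sizes):
    """The server aggregates only weights, never the clients' data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```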
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Evaluating Privacy-Preserving Machine Learning in Critical Infrastructures: A Case Study on Time-Series Classification [5.607917328636864]
It is pivotal to ensure that neither the model nor the data can be used to extract sensitive information.
Various safety-critical use cases (mostly relying on time-series data) are currently underrepresented in privacy-related considerations.
By evaluating several privacy-preserving methods with respect to their applicability to time-series data, we confirm the inefficacy of encryption for deep learning.
arXiv Detail & Related papers (2021-11-29T12:28:22Z)
- Utility-aware Privacy-preserving Data Releasing [7.462336024223669]
We propose a two-step perturbation-based privacy-preserving data releasing framework.
First, certain predefined privacy and utility problems are learned from the public domain data.
We then leverage the learned knowledge to precisely perturb the data owners' data into privatized data.
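A heavily simplified sketch of the two-step idea, with choices that are mine rather than the paper's: step one estimates which features matter for utility from public data, step two noises the private data more strongly where utility matters less.

```python
import numpy as np

def learn_utility_weights(public_X, public_y):
    """Step 1 (illustrative): score each feature's utility relevance on
    public data via absolute correlation with the target."""
    return np.array([abs(np.corrcoef(public_X[:, j], public_y)[0, 1])
                     for j in range(public_X.shape[1])])

def perturb(private_X, utility_weights, base_scale=1.0):
    """Step 2 (illustrative): add stronger Gaussian noise to features
    that contribute less to utility."""
    scales = base_scale * (1.0 - utility_weights / utility_weights.max())
    return private_X + np.random.normal(0.0, scales, size=private_X.shape)
```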
arXiv Detail & Related papers (2020-05-09T05:32:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.