Privacy-Aware Adversarial Network in Human Mobility Prediction
- URL: http://arxiv.org/abs/2208.05009v1
- Date: Tue, 9 Aug 2022 19:23:13 GMT
- Title: Privacy-Aware Adversarial Network in Human Mobility Prediction
- Authors: Yuting Zhan, Hamed Haddadi, Afra Mashhadi
- Abstract summary: User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications.
We propose an LSTM-based adversarial representation learning to attain a privacy-preserving feature representation of the original geolocated data.
We show that the privacy of mobility traces attains decent protection at the cost of marginal mobility utility.
- Score: 11.387235721659378
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As mobile devices and location-based services are increasingly developed in
different smart city scenarios and applications, many unexpected privacy
leakages have arisen due to geolocated data collection and sharing. User
re-identification and other sensitive inferences are major privacy threats when
geolocated data are shared with cloud-assisted applications. Significantly,
four spatio-temporal points are enough to uniquely identify 95% of the
individuals, which exacerbates personal information leakages. To tackle
malicious purposes such as user re-identification, we propose an LSTM-based
adversarial mechanism with representation learning to attain a
privacy-preserving feature representation of the original geolocated data
(i.e., mobility data) for a sharing purpose. These representations aim to
maximally reduce the chance of user re-identification and full data
reconstruction with a minimal utility budget (i.e., loss). We train the
mechanism by quantifying privacy-utility trade-off of mobility datasets in
terms of trajectory reconstruction risk, user re-identification risk, and
mobility predictability. We report an exploratory analysis that enables the
user to assess this trade-off with a specific loss function and its weight
parameters. The extensive comparison results on four representative mobility
datasets demonstrate the superiority of our proposed architecture in mobility
privacy protection and the efficiency of the proposed privacy-preserving
features extractor. We show that the privacy of mobility traces attains decent
protection at the cost of marginal mobility utility. Our results also show that
by exploring the Pareto optimal setting, we can simultaneously increase both
privacy (45%) and utility (32%).
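The trade-off described above can be sketched as a weighted objective for the feature extractor: minimize the mobility-prediction (utility) loss while maximizing the adversary's re-identification loss. The function below is a minimal illustration of that idea only; the weight names `alpha` and `beta` are hypothetical, not the paper's actual loss or implementation.

```python
# Sketch of a weighted privacy-utility objective for adversarial
# representation learning. The feature extractor MINIMIZES the mobility
# predictor's loss while MAXIMIZING the re-identification adversary's loss;
# alpha and beta are illustrative weight parameters.

def extractor_objective(utility_loss: float,
                        reid_loss: float,
                        alpha: float = 1.0,
                        beta: float = 0.5) -> float:
    """Lower is better for the feature extractor.

    utility_loss: loss of the mobility predictor on the shared features.
    reid_loss:    loss of the adversary trying to re-identify the user;
                  subtracted so a confused adversary is rewarded.
    """
    return alpha * utility_loss - beta * reid_loss

# Sweeping beta traces out points on the privacy-utility trade-off:
for beta in (0.0, 0.5, 1.0):
    print(beta, extractor_objective(utility_loss=0.8, reid_loss=2.0, beta=beta))
```

Exploring such weight settings is how a Pareto-optimal configuration of the kind reported above would be located.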
Related papers
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed in the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge of re-identification attacks powered by Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- Measuring Privacy Loss in Distributed Spatio-Temporal Data [26.891854386652266]
We propose an alternative privacy loss against location reconstruction attacks by an informed adversary.
Our experiments on real and synthetic data demonstrate that our privacy loss better reflects our intuitions on individual privacy violation in the distributed setting.
arXiv Detail & Related papers (2024-02-18T09:53:14Z)
- Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation [20.526071564917274]
Mobility data can be used to build machine learning (ML) models for location-based services (LBS).
However, the convenience comes with the risk of privacy leakage since this type of data might contain sensitive information related to user identities, such as home/work locations.
We design a privacy attack suite containing data extraction and membership inference attacks tailored for point-of-interest (POI) recommendation models.
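Membership inference attacks of the kind included in such a suite are often built on a simple loss-threshold rule: records the target model fits unusually well are guessed to be training members. The sketch below shows that generic baseline only; the function name and threshold are hypothetical, not the paper's attack.

```python
# Generic loss-threshold membership inference baseline: guess "member" for
# any record whose loss under the target model falls below a threshold.
# This is a textbook-style illustration, not the paper's attack suite.

def membership_guess(losses, threshold):
    """Return True (guessed member) for each loss below the threshold."""
    return [loss < threshold for loss in losses]

# Toy example: training members tend to have lower loss than held-out records.
member_losses = [0.05, 0.10, 0.08]      # seen during training
nonmember_losses = [0.90, 1.20, 0.75]   # held out
guesses = membership_guess(member_losses + nonmember_losses, threshold=0.5)
print(guesses)  # first three True, last three False
```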
arXiv Detail & Related papers (2023-10-28T06:17:52Z)
- Where you go is who you are -- A study on machine learning based semantic privacy attacks [3.259843027596329]
We present a systematic analysis of two attack scenarios, namely location categorization and user profiling.
Experiments on the Foursquare dataset and tracking data demonstrate the potential for abuse of high-quality spatial information.
Our findings point out the risks of ever-growing databases of tracking data and spatial context data.
arXiv Detail & Related papers (2023-10-26T17:56:50Z)
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- A Trajectory K-Anonymity Model Based on Point Density and Partition [0.0]
This paper develops a trajectory K-anonymity model based on Point Density and Partition (KPDP).
It successfully resists re-identification attacks and reduces the data utility loss of the k-anonymized dataset.
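Whatever the generalization strategy, the defining check in any k-anonymity scheme is that every released (generalized) trajectory is shared by at least k users. A minimal sketch of that check, independent of KPDP's density and partition specifics:

```python
# Check the k-anonymity property on generalized trajectories: every distinct
# trajectory must occur at least k times in the released dataset.
from collections import Counter

def is_k_anonymous(trajectories, k):
    """True iff every generalized trajectory occurs at least k times.

    trajectories: iterable of hashable trajectories, e.g. tuples of grid cells.
    """
    counts = Counter(trajectories)
    return all(c >= k for c in counts.values())

# Toy generalized trajectories (tuples of coarse grid cells):
trajs = [("A", "B"), ("A", "B"), ("A", "B"), ("C", "D"), ("C", "D")]
print(is_k_anonymous(trajs, k=2))  # True: every trajectory appears >= 2 times
print(is_k_anonymous(trajs, k=3))  # False: ("C", "D") appears only twice
```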
arXiv Detail & Related papers (2023-07-31T17:10:56Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- DP2-Pub: Differentially Private High-Dimensional Data Publication with Invariant Post Randomization [58.155151571362914]
We propose a differentially private high-dimensional data publication mechanism (DP2-Pub) that runs in two phases.
Splitting attributes into several low-dimensional clusters with high intra-cluster cohesion and low inter-cluster coupling helps obtain a reasonable privacy budget.
We also extend our DP2-Pub mechanism to the scenario with a semi-honest server which satisfies local differential privacy.
arXiv Detail & Related papers (2022-08-24T17:52:43Z)
- Privacy-Aware Human Mobility Prediction via Adversarial Networks [10.131895986034314]
We implement a novel LSTM-based adversarial mechanism with representation learning to attain a privacy-preserving feature representation of the original geolocated data (mobility data) for a sharing purpose.
We quantify the utility-privacy trade-off of mobility datasets in terms of trajectory reconstruction risk, user re-identification risk, and mobility predictability.
arXiv Detail & Related papers (2022-01-19T10:41:10Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows adversaries to infer the underlying private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to remain invisible in the aggregate.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
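The nullspace idea above can be illustrated in its simplest form: if the agents' perturbations are projected so they sum to zero across the network, they cancel in the aggregate and leave the averaged estimate untouched, while each individual exchanged estimate is still noisy. This toy sketch uses plain averaging, not the paper's graph-homomorphic construction.

```python
# Toy illustration of nullspace-constrained perturbations: project raw noise
# onto the nullspace of the network-averaging operation (per-coordinate sums
# across agents forced to zero), so the noise cancels in the aggregate.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

raw = rng.normal(size=(n_agents, dim))
perturbations = raw - raw.mean(axis=0)        # columns now sum to zero

estimates = rng.normal(size=(n_agents, dim))  # agents' local estimates
noisy = estimates + perturbations             # what agents actually exchange

# The perturbations are invisible to the network average:
print(np.allclose(noisy.mean(axis=0), estimates.mean(axis=0)))  # True
```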