Privacy-Aware Human Mobility Prediction via Adversarial Networks
- URL: http://arxiv.org/abs/2201.07519v1
- Date: Wed, 19 Jan 2022 10:41:10 GMT
- Title: Privacy-Aware Human Mobility Prediction via Adversarial Networks
- Authors: Yuting Zhan, Alex Kyllo, Afra Mashhadi, Hamed Haddadi
- Abstract summary: We implement a novel LSTM-based adversarial mechanism with representation learning to attain a privacy-preserving feature representation of the original geolocated data (mobility data) for sharing purposes.
We quantify the utility-privacy trade-off of mobility datasets in terms of trajectory reconstruction risk, user re-identification risk, and mobility predictability.
- Score: 10.131895986034314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As various mobile devices and location-based services are increasingly
developed in different smart city scenarios and applications, many unexpected
privacy leakages have arisen due to geolocated data collection and sharing.
While these geolocated data could provide a rich understanding of human
mobility patterns and address various societal research questions, privacy
concerns for users' sensitive information have limited their utilization. In
this paper, we design and implement a novel LSTM-based adversarial mechanism
with representation learning to attain a privacy-preserving feature
representation of the original geolocated data (mobility data) for sharing
purposes. We quantify the utility-privacy trade-off of mobility datasets in
terms of trajectory reconstruction risk, user re-identification risk, and
mobility predictability. Our proposed architecture provides a Pareto Frontier
analysis that enables the user to assess this trade-off as a function of the
Lagrangian loss weight parameters. Extensive comparison results on four
representative mobility datasets demonstrate the superiority of our proposed
architecture and the efficiency of the proposed privacy-preserving feature
extractor. Our results show that by exploring Pareto optimal settings, we can
simultaneously increase both privacy (by 45%) and utility (by 32%).
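The adversarial mechanism and the Lagrangian trade-off are described only at a high level in the abstract. Below is a minimal PyTorch-style sketch of the general idea: an LSTM feature extractor trained jointly against a user re-identification adversary, with a Lagrangian weight steering the utility-privacy trade-off. All module names, dimensions, and the value of `lambda_privacy` are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

# Sketch of adversarial representation learning for mobility data.
# Encoder: LSTM mapping a location sequence to a shared feature representation.
# Utility head: predicts the next location (what we want to keep).
# Adversary: tries to re-identify the user from the features (what we want to hide).
# The weight lambda_privacy moves the solution along the utility-privacy
# trade-off; its value here is an arbitrary example.

class Encoder(nn.Module):
    def __init__(self, num_locations, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.emb = nn.Embedding(num_locations, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, loc_seq):              # loc_seq: (batch, seq_len) of location ids
        h, _ = self.lstm(self.emb(loc_seq))
        return h[:, -1, :]                   # last hidden state as the shared feature

num_locations, num_users = 1000, 200
encoder = Encoder(num_locations)
utility_head = nn.Linear(128, num_locations)   # next-location prediction
adversary = nn.Linear(128, num_users)          # user re-identification

ce = nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(utility_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
lambda_privacy = 0.5                           # Lagrangian trade-off weight (illustrative)

def train_step(loc_seq, next_loc, user_id):
    # 1) Train the adversary to re-identify users from the current features.
    feats = encoder(loc_seq).detach()
    adv_loss = ce(adversary(feats), user_id)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train encoder + utility head: keep predictability, fool the adversary.
    feats = encoder(loc_seq)
    util_loss = ce(utility_head(feats), next_loc)
    priv_loss = ce(adversary(feats), user_id)
    total = util_loss - lambda_privacy * priv_loss
    opt_main.zero_grad()
    total.backward()
    opt_main.step()
    return util_loss.item(), priv_loss.item()
```

Sweeping `lambda_privacy` over a range of values and recording the resulting utility and re-identification losses is one way to trace the kind of Pareto frontier the abstract refers to.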
Related papers
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge of re-identification attacks enabled by Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- FedVAE: Trajectory privacy preserving based on Federated Variational AutoEncoder [30.787270605742883]
Location-Based Services (LBS) capitalize on trajectory data to offer users personalized services tailored to their location information.
However, sharing such trajectory data raises privacy concerns. To address this challenge, privacy-preserving methods like K-anonymity and Differential Privacy have been proposed to safeguard private information in the dataset.
We propose a Federated Variational AutoEncoder (FedVAE) approach, which effectively generates a new trajectory dataset while preserving the confidentiality of private information and retaining the structure of the original features.
arXiv Detail & Related papers (2024-07-12T13:10:59Z)
- Reconsidering utility: unveiling the limitations of synthetic mobility data generation algorithms in real-life scenarios [49.1574468325115]
We evaluate the utility of five state-of-the-art synthesis approaches in terms of real-world applicability.
We focus on so-called trip data that encode fine-grained urban movements such as GPS-tracked taxi rides.
One model fails to produce data within a reasonable time, and another generates too many jumps to meet the requirements for map matching.
arXiv Detail & Related papers (2024-07-03T16:08:05Z)
- HRNet: Differentially Private Hierarchical and Multi-Resolution Network for Human Mobility Data Synthesization [19.017342515321918]
We introduce the Hierarchical and Multi-Resolution Network (HRNet), a novel deep generative model designed to synthesize realistic human mobility data.
We first identify the key difficulties inherent in learning human mobility data under differential privacy.
HRNet integrates three components: a hierarchical location encoding mechanism, multi-task learning across multiple resolutions, and private pre-training.
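The abstract names the hierarchical location encoding only as a component. A minimal sketch of one plausible multi-resolution grid encoding is given below; the grid construction, resolutions, and function names are illustrative assumptions, not HRNet's actual encoder.

```python
# Illustrative multi-resolution grid encoding of a (lat, lon) point.
# This is NOT HRNet's encoder; it only sketches the general idea of
# representing a location at several spatial resolutions at once.

def grid_cell(lat, lon, cells_per_axis):
    """Map a coordinate to an integer cell id on a uniform global grid."""
    row = int((lat + 90.0) / 180.0 * cells_per_axis)
    col = int((lon + 180.0) / 360.0 * cells_per_axis)
    row = min(row, cells_per_axis - 1)
    col = min(col, cells_per_axis - 1)
    return row * cells_per_axis + col

def hierarchical_encoding(lat, lon, resolutions=(8, 64, 512)):
    """Return one cell id per resolution, coarse to fine."""
    return [grid_cell(lat, lon, r) for r in resolutions]

# Example: encode a point in central Tokyo at three resolutions.
print(hierarchical_encoding(35.68, 139.77))
```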
arXiv Detail & Related papers (2024-05-13T12:56:24Z)
- JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds [79.00975648564483]
Trajectory forecasting models, employed in fields such as robotics, autonomous vehicles, and navigation, face challenges in real-world scenarios.
The JRDB-Traj dataset provides comprehensive data, including the locations of all agents, scene images, and point clouds, all captured from the robot's perspective.
The objective is to predict the future positions of agents relative to the robot using raw sensory input data.
arXiv Detail & Related papers (2023-11-05T18:59:31Z)
- Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation [20.526071564917274]
Mobility data can be used to build machine learning (ML) models for location-based services (LBS).
However, the convenience comes with the risk of privacy leakage since this type of data might contain sensitive information related to user identities, such as home/work locations.
We design a privacy attack suite containing data extraction and membership inference attacks tailored for point-of-interest (POI) recommendation models.
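The attack suite itself is not detailed in the summary. As a generic illustration of membership inference (not the paper's tailored attacks), a simple loss-threshold sketch follows; `model.predict_proba`, the inputs, and `threshold` are hypothetical placeholders.

```python
import numpy as np

# Generic loss-threshold membership inference, sketched for a POI recommender
# that outputs a probability distribution over POIs given a user's visit history.

def membership_score(model, visit_history, true_next_poi):
    """Lower prediction loss on a record suggests it was seen during training."""
    probs = model.predict_proba(visit_history)     # shape: (num_pois,)
    loss = -np.log(probs[true_next_poi] + 1e-12)   # cross-entropy on the true next POI
    return -loss                                   # higher score => more likely a member

def is_member(model, visit_history, true_next_poi, threshold=-1.0):
    """Predict membership by thresholding the negated loss."""
    return membership_score(model, visit_history, true_next_poi) >= threshold
```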
arXiv Detail & Related papers (2023-10-28T06:17:52Z)
- Where you go is who you are -- A study on machine learning based semantic privacy attacks [3.259843027596329]
We present a systematic analysis of two attack scenarios, namely location categorization and user profiling.
Experiments on the Foursquare dataset and tracking data demonstrate the potential for abuse of high-quality spatial information.
Our findings point out the risks of ever-growing databases of tracking data and spatial context data.
arXiv Detail & Related papers (2023-10-26T17:56:50Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
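As a deliberately simple illustration of DP sanitization in general (not the deep generative framework surveyed in the paper), a Laplace-mechanism sketch for releasing a noisy visit count is shown below; the query and parameter values are illustrative.

```python
import numpy as np

# Classic Laplace mechanism for an epsilon-DP count query.
# Releasing noisy per-cell visit counts is one simple form of "sanitized"
# data publishing; DP deep generative models are far more elaborate.

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Add Laplace(sensitivity / epsilon) noise to a count."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: release the number of visits to a grid cell with epsilon = 0.5.
print(dp_count(true_count=128, epsilon=0.5))
```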
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN release and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- On Inferring User Socioeconomic Status with Mobility Records [61.0966646857356]
We propose a socioeconomic-aware deep model called DeepSEI.
The DeepSEI model incorporates two networks, a deep network and a recurrent network.
We conduct extensive experiments on real mobility records, POI data, and house price data.
arXiv Detail & Related papers (2022-11-15T15:07:45Z)
- Privacy-Aware Adversarial Network in Human Mobility Prediction [11.387235721659378]
User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications.
We propose an LSTM-based adversarial representation learning approach to attain a privacy-preserving feature representation of the original geolocated data.
We show that mobility traces attain decent privacy protection at the cost of only a marginal loss in mobility utility.
arXiv Detail & Related papers (2022-08-09T19:23:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.