PateGail: A Privacy-Preserving Mobility Trajectory Generator with Imitation Learning
- URL: http://arxiv.org/abs/2407.16729v1
- Date: Tue, 23 Jul 2024 14:59:23 GMT
- Title: PateGail: A Privacy-Preserving Mobility Trajectory Generator with Imitation Learning
- Authors: Huandong Wang, Changzheng Gao, Yuchen Wu, Depeng Jin, Lina Yao, Yong Li
- Abstract summary: PateGail is a privacy-preserving imitation learning model to generate human mobility trajectories.
Personal discriminators are trained locally to distinguish and reward the real and generated human trajectories.
Trajectories generated by our model resemble real-world trajectories in terms of five key statistical metrics, outperforming state-of-the-art algorithms by over 48.03%.
- Score: 32.24962222854073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating human mobility trajectories is of great importance for overcoming the lack of large-scale trajectory data in numerous applications, a shortage caused by privacy concerns. However, existing mobility trajectory generation methods still require real-world human trajectories to be centrally collected as training data, which carries an inescapable risk of privacy leakage. To overcome this limitation, in this paper we propose PateGail, a privacy-preserving imitation learning model that generates mobility trajectories by using the powerful generative adversarial imitation learning framework to simulate the human decision-making process. Further, to protect user privacy, we train this model collectively on decentralized mobility data stored in user devices, where personal discriminators are trained locally to distinguish and reward the real and generated human trajectories. In the training process, only the generated trajectories and their rewards obtained from the personal discriminators are shared between the server and devices, and their privacy is further preserved by our proposed perturbation mechanisms, which we theoretically prove to satisfy differential privacy. Further, to better model the human decision-making process, we propose a novel mechanism for aggregating the rewards obtained from personal discriminators. We theoretically prove that under the reward obtained from this aggregation mechanism, our proposed model maximizes a lower bound on the users' discounted total rewards. Extensive experiments show that the trajectories generated by our model resemble real-world trajectories in terms of five key statistical metrics, outperforming state-of-the-art algorithms by over 48.03%. Furthermore, we demonstrate that the synthetic trajectories can efficiently support practical applications, including mobility prediction and location recommendation.
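The communication round described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's actual implementation: the feature-based discriminator, the function names, and the mean aggregation are all assumptions (the paper proposes a tailored aggregation mechanism that maximizes a lower bound on discounted rewards). Only the perturbation step follows the standard Laplace mechanism for differential privacy.

```python
import numpy as np

rng = np.random.default_rng(0)

def personal_discriminator_reward(trajectory, weights):
    # Toy stand-in for a locally trained personal discriminator:
    # a logistic score over two simple trajectory features
    # (not the paper's actual network).
    features = np.array([trajectory.mean(), trajectory.std()])
    logit = features @ weights
    return 1.0 / (1.0 + np.exp(-logit))  # reward in (0, 1)

def perturb_reward(reward, epsilon, sensitivity=1.0):
    # Laplace mechanism: the reward is bounded in (0, 1), so a
    # sensitivity of 1 gives epsilon-DP for each shared reward.
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return reward + noise

def federated_round(generated_trajectories, device_weights, epsilon):
    """One round: only generated trajectories go out to devices,
    and only perturbed rewards come back to the server."""
    per_device = []
    for w in device_weights:
        rewards = [
            perturb_reward(personal_discriminator_reward(t, w), epsilon)
            for t in generated_trajectories
        ]
        per_device.append(rewards)
    # Simplified aggregation: plain average across devices.
    return np.mean(np.array(per_device), axis=0)

# Server-side usage: score 4 generated trajectories against 3 devices.
trajs = [rng.normal(size=10) for _ in range(4)]
weights = [rng.normal(size=2) for _ in range(3)]
agg = federated_round(trajs, weights, epsilon=1.0)
```

In the actual model these aggregated rewards would drive a policy-gradient update of the server-side generator; the key privacy property illustrated here is that raw user trajectories never leave the devices.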
Related papers
- Revisiting Synthetic Human Trajectories: Imitative Generation and Benchmarks Beyond Datasaurus [4.522142161017109]
Human trajectory data is challenging to obtain due to practical constraints and privacy concerns.
We propose MIRAGE, a huMan-Imitative tRAjectory GenErative model designed as a neural Temporal Point Process.
We conduct a thorough evaluation of MIRAGE on three real-world user trajectory datasets against a sizeable collection of baselines.
arXiv Detail & Related papers (2024-09-20T09:07:27Z) - Learning to Generate Pseudo Personal Mobility [19.59336507266489]
We propose a novel individual-based human mobility generator called GeoAvatar.
We have achieved the generation of heterogeneous individual human mobility data without accessing individual-level personal information.
arXiv Detail & Related papers (2023-12-18T15:29:20Z) - JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds [79.00975648564483]
Trajectory forecasting models, employed in fields such as robotics, autonomous vehicles, and navigation, face challenges in real-world scenarios.
The JRDB-Traj dataset provides comprehensive data, including the locations of all agents, scene images, and point clouds, all from the robot's perspective.
The objective is to predict the future positions of agents relative to the robot using raw sensory input data.
arXiv Detail & Related papers (2023-11-05T18:59:31Z) - Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z) - CATS: Conditional Adversarial Trajectory Synthesis for Privacy-Preserving Trajectory Data Publication Using Deep Learning Approaches
Conditional Adversarial Trajectory Synthesis (CATS) is a deep-learning-based methodological framework for privacy-preserving trajectory data generation and publication.
The experiment results on over 90k GPS trajectories show that our method outperforms baseline methods in privacy preservation, characteristic preservation, and downstream utility.
arXiv Detail & Related papers (2023-09-20T18:52:56Z) - Dual Student Networks for Data-Free Model Stealing [79.67498803845059]
Two main challenges are estimating gradients of the target model without access to its parameters, and generating a diverse set of training samples.
We propose a Dual Student method where two students are symmetrically trained in order to provide the generator a criterion to generate samples that the two students disagree on.
We show that our new optimization framework provides more accurate gradient estimation of the target model and better accuracies on benchmark classification datasets.
arXiv Detail & Related papers (2023-09-18T18:11:31Z) - Continuous Trajectory Generation Based on Two-Stage GAN [50.55181727145379]
We propose a novel two-stage generative adversarial framework to generate the continuous trajectory on the road network.
Specifically, we build the generator under the human mobility hypothesis of the A* algorithm to learn the human mobility behavior.
For the discriminator, we combine the sequential reward with the mobility yaw reward to enhance the effectiveness of the generator.
arXiv Detail & Related papers (2023-01-16T09:54:02Z) - Private Set Generation with Discriminative Information [63.851085173614]
Differentially private data generation is a promising solution to the data privacy challenge.
Existing private generative models struggle with the utility of synthetic samples.
We introduce a simple yet effective method that greatly improves the sample utility of state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-07T10:02:55Z) - Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z) - Reward Conditioned Neural Movement Primitives for Population Based Variational Policy Optimization [4.559353193715442]
This paper studies the reward-based policy exploration problem using a supervised learning approach.
We show that our method provides stable learning progress and significant sample efficiency compared to a number of state-of-the-art robotic reinforcement learning methods.
arXiv Detail & Related papers (2020-11-09T09:53:37Z) - LSTM-TrajGAN: A Deep Learning Approach to Trajectory Privacy Protection [2.1793134762413437]
We propose an end-to-end deep learning model to generate privacy-preserving synthetic trajectory data for data sharing and publication.
The model is evaluated on the trajectory-user-linking task on a real-world semantic trajectory dataset.
arXiv Detail & Related papers (2020-06-14T03:04:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.