A Trajectory K-Anonymity Model Based on Point Density and Partition
- URL: http://arxiv.org/abs/2307.16849v1
- Date: Mon, 31 Jul 2023 17:10:56 GMT
- Title: A Trajectory K-Anonymity Model Based on Point Density and Partition
- Authors: Wanshu Yu, Haonan Shi and Hongyun Xu
- Abstract summary: This paper develops a trajectory K-anonymity model based on Point Density and Partition (KPDP). It successfully resists re-identification attacks and reduces the data utility loss of the k-anonymized dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As people's daily lives become increasingly inseparable from mobile electronic devices, service application platforms and network operators can easily collect large amounts of personal information. Releasing these data for scientific research or commercial purposes endangers users' privacy, especially in the publication of spatiotemporal trajectory datasets. To avoid leaking users' privacy, it is therefore necessary to anonymize the data before release. However, simply removing individuals' unique identifiers is not enough to protect trajectory privacy, because attackers may infer users' identities by linking the data with other databases. Much work has been devoted to merging multiple trajectories to prevent re-identification, but these solutions generally sacrifice data quality to meet the anonymity requirement. To provide sufficient privacy protection for users' trajectory datasets, this paper studies trajectory privacy against re-identification attacks and proposes a trajectory K-anonymity model based on Point Density and Partition (KPDP). Our approach improves existing trajectory generalization anonymization techniques in both trajectory set partition preprocessing and trajectory clustering. It successfully resists re-identification attacks and reduces the data utility loss of the k-anonymized dataset. A series of experiments on a real-world dataset shows that the proposed model achieves higher data utility and shorter execution time than other existing techniques.
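The abstract describes KPDP only at a high level and includes no code. As a rough, hedged illustration of the generic merge-and-generalize idea behind trajectory k-anonymity (not the paper's density- and partition-based algorithm), the sketch below greedily clusters trajectories into groups of at least k and releases each group's pointwise mean, so every published trajectory is shared by at least k individuals. All names here are hypothetical.

```python
import numpy as np

def anonymize_trajectories(trajs, k):
    """Toy k-anonymity by generalization: greedily cluster trajectories
    (equal-length (T, 2) arrays of points) into groups of >= k, then
    release each group's pointwise mean as the generalized trajectory.
    This illustrates the generic merge-and-generalize idea only, not
    the density- and partition-based KPDP algorithm from the paper."""
    remaining = list(range(len(trajs)))
    clusters = []
    while len(remaining) >= k:
        seed = remaining.pop(0)
        # Distance of each remaining trajectory to the seed
        # (sum of pointwise Euclidean distances).
        dists = sorted(
            (np.linalg.norm(trajs[seed] - trajs[j], axis=1).sum(), j)
            for j in remaining
        )
        members = [seed] + [j for _, j in dists[: k - 1]]
        for _, j in dists[: k - 1]:
            remaining.remove(j)
        clusters.append(members)
    if remaining and clusters:  # fold leftovers into the last cluster
        clusters[-1].extend(remaining)
    # Each released trajectory matches at least k individuals.
    return [np.mean([trajs[i] for i in c], axis=0) for c in clusters]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = [rng.random((10, 2)) for _ in range(9)]  # 9 synthetic trajectories
    for t in anonymize_trajectories(data, k=3):
        print(t.shape)  # each generalized trajectory keeps shape (10, 2)
```

The data-utility loss the paper measures corresponds here to the gap between each original trajectory and the cluster mean that replaces it; KPDP's partitioning and clustering choices aim to keep that gap small.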
Related papers
- Privacy-preserving datasets by capturing feature distributions with Conditional VAEs [0.11999555634662634]
The method trains Conditional Variational Autoencoders (CVAEs) on feature vectors extracted from large pre-trained vision foundation models.
Our method notably outperforms traditional approaches in both medical and natural image domains.
Results underscore the potential of generative models to significantly impact deep learning applications in data-scarce and privacy-sensitive environments.
arXiv Detail & Related papers (2024-08-01T15:26:24Z)
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge of re-identification attacks enabled by Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
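The blurb names three LLM-based components but not how they interact; one plausible reading is an iterative rewrite-and-score loop. The control-flow sketch below is purely hypothetical: `rewrite`, `privacy_score`, and `utility_score` are assumed caller-supplied wrappers around LLM prompts, and none of these names come from the paper.

```python
def anonymize_with_feedback(text, rewrite, privacy_score, utility_score,
                            privacy_threshold=0.9, utility_threshold=0.7,
                            max_rounds=5):
    """Hypothetical optimization loop in the spirit of the described
    framework: keep rewriting until the privacy evaluator and the
    utility evaluator are both satisfied, or a round budget runs out."""
    candidate = text
    for _ in range(max_rounds):
        p = privacy_score(candidate)            # assumed LLM-backed evaluator
        u = utility_score(text, candidate)      # assumed LLM-backed evaluator
        if p >= privacy_threshold and u >= utility_threshold:
            return candidate                    # both evaluators satisfied
        # Ask the rewriter to push on whichever criterion is failing.
        candidate = rewrite(candidate, need_privacy=p < privacy_threshold)
    return candidate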
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns come with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
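For readers new to "sanitized release", the textbook Laplace mechanism below is a minimal, standard example of DP data publishing (it is general background, not this survey's contribution): a histogram has L1 sensitivity 1 under adding or removing one record, so Laplace noise of scale 1/epsilon per bin gives epsilon-DP.

```python
import numpy as np

def dp_release_histogram(values, bins, epsilon, rng=None):
    """Release a histogram under epsilon-DP via the Laplace mechanism.
    Adding or removing one record changes exactly one count by 1, so
    the histogram's L1 sensitivity is 1 and Lap(1/epsilon) noise per
    bin suffices for epsilon-DP."""
    rng = rng or np.random.default_rng()
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return np.clip(noisy, 0, None), edges  # counts cannot be negative
```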
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Private Set Generation with Discriminative Information [63.851085173614]
Differentially private data generation is a promising solution to the data privacy challenge.
Existing private generative models struggle with the utility of their synthetic samples.
We introduce a simple yet effective method that greatly improves the sample utility of state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-07T10:02:55Z)
- Linear Model with Local Differential Privacy [0.225596179391365]
Privacy preserving techniques have been widely studied to analyze distributed data across different agencies.
Secure multiparty computation has been widely studied for privacy protection; it offers a high privacy level but at an intense computational cost.
A matrix masking technique is applied to encrypt the data so that the secure schemes remain robust against malicious adversaries.
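A minimal sketch of the matrix-masking idea (a standard construction; the paper's actual scheme and threat model are more involved): left-multiplying both X and y by a random orthogonal matrix A hides individual records, yet leaves the least-squares fit exactly unchanged, because (AX)^T(AX) = X^T X and (AX)^T(Ay) = X^T y when A^T A = I.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

# Random orthogonal mask A (QR decomposition of a Gaussian matrix).
A, _ = np.linalg.qr(rng.normal(size=(n, n)))

beta_plain = np.linalg.lstsq(X, y, rcond=None)[0]
beta_masked = np.linalg.lstsq(A @ X, A @ y, rcond=None)[0]
print(np.allclose(beta_plain, beta_masked))  # True: the fit survives masking
```

Orthogonality is what makes the trick exact: the normal equations are untouched, so the masked data support the same regression analysis as the raw data.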
arXiv Detail & Related papers (2022-02-05T01:18:00Z)
- Joint Optimization in Edge-Cloud Continuum for Federated Unsupervised Person Re-identification [24.305773593017932]
FedUReID is a federated unsupervised person ReID system to learn person ReID models without any labels while preserving privacy.
To tackle the problem that edges vary in data volumes and distributions, we personalize training in edges with joint optimization of cloud and edge.
Experiments on eight person ReID datasets demonstrate that FedUReID not only achieves higher accuracy but also reduces computation cost by 29%.
arXiv Detail & Related papers (2021-08-14T08:35:55Z)
- Deep Directed Information-Based Learning for Privacy-Preserving Smart Meter Data Release [30.409342804445306]
We study the problem in the context of time series data and smart meters (SMs) power consumption measurements.
We introduce the Directed Information (DI) as a more meaningful measure of privacy in the considered setting.
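For reference, the standard (Massey) definition of the directed information from a sequence X^n to a sequence Y^n is the sum of conditional mutual informations shown below; this is the textbook quantity, and the paper may use a variant of it.

```latex
I(X^n \to Y^n) \;=\; \sum_{t=1}^{n} I(X^t ;\, Y_t \mid Y^{t-1})
```

Unlike mutual information, DI is directional: it captures how much the past and present of X causally inform the present of Y, which is why it suits time-series privacy settings such as smart-meter releases.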
Empirical studies on real-world SM measurement datasets show the trade-offs between privacy and utility in the worst-case scenario.
arXiv Detail & Related papers (2020-11-20T13:41:11Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
The local exchange of estimates allows inference of private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
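A toy numerical sketch of the nullspace idea (illustrative only, not the paper's construction): choose agent perturbations that sum to zero across the network, i.e. that lie in the nullspace of the averaging operation, so each agent's shared estimate is individually perturbed while the network-average estimate is unchanged.

```python
import numpy as np

rng = np.random.default_rng(42)
num_agents, dim = 5, 4
estimates = rng.normal(size=(num_agents, dim))  # each agent's local estimate

# Draw raw noise, then project out its mean so the perturbations sum to
# zero across agents (the nullspace of the network-averaging operation).
noise = rng.normal(size=(num_agents, dim))
noise -= noise.mean(axis=0, keepdims=True)

perturbed = estimates + noise
# The network average is exactly preserved, even though every
# individual estimate an agent shares has been perturbed.
print(np.allclose(estimates.mean(axis=0), perturbed.mean(axis=0)))  # True
```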
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.