Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation
- URL: http://arxiv.org/abs/2310.18606v2
- Date: Fri, 5 Jul 2024 20:17:38 GMT
- Title: Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation
- Authors: Kunlin Cai, Jinghuai Zhang, Zhiqing Hong, Will Shand, Guang Wang, Desheng Zhang, Jianfeng Chi, Yuan Tian
- Abstract summary: Mobility data can be used to build machine learning (ML) models for location-based services (LBS).
However, the convenience comes with the risk of privacy leakage since this type of data might contain sensitive information related to user identities, such as home/work locations.
We design a privacy attack suite containing data extraction and membership inference attacks tailored for point-of-interest (POI) recommendation models.
- Score: 20.526071564917274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As location-based services (LBS) have grown in popularity, more human mobility data has been collected. The collected data can be used to build machine learning (ML) models for LBS to enhance their performance and improve overall experience for users. However, the convenience comes with the risk of privacy leakage since this type of data might contain sensitive information related to user identities, such as home/work locations. Prior work focuses on protecting mobility data privacy during transmission or prior to release, lacking the privacy risk evaluation of mobility data-based ML models. To better understand and quantify the privacy leakage in mobility data-based ML models, we design a privacy attack suite containing data extraction and membership inference attacks tailored for point-of-interest (POI) recommendation models, one of the most widely used mobility data-based ML models. These attacks in our attack suite assume different adversary knowledge and aim to extract different types of sensitive information from mobility data, providing a holistic privacy risk assessment for POI recommendation models. Our experimental evaluation using two real-world mobility datasets demonstrates that current POI recommendation models are vulnerable to our attacks. We also present unique findings to understand what types of mobility data are more susceptible to privacy attacks. Finally, we evaluate defenses against these attacks and highlight future directions and challenges. Our attack suite is released at https://github.com/KunlinChoi/POIPrivacy.
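For intuition, the sketch below shows one common membership-inference recipe that attacks on sequence models of this kind often build on: score a candidate check-in trajectory by the model's average next-POI prediction loss and flag unusually low-loss trajectories as likely training members. This is a simplified illustration rather than the paper's actual attack suite; the model interface (poi_model), the tensor shapes, and the threshold are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def membership_score(poi_model, trajectory, device="cpu"):
    """Average next-POI cross-entropy over a check-in trajectory.

    trajectory: 1-D LongTensor of POI ids [p_0, p_1, ..., p_T].
    poi_model:  any sequence model mapping an id prefix to logits over POIs
                (a hypothetical interface; adapt to the model under evaluation).
    Lower scores (better fit) suggest the trajectory was seen during training.
    """
    assert len(trajectory) >= 2, "need at least two check-ins"
    poi_model.eval()
    trajectory = trajectory.to(device)
    losses = []
    for t in range(1, len(trajectory)):
        prefix = trajectory[:t].unsqueeze(0)   # shape (1, t)
        logits = poi_model(prefix)             # assumed shape (1, num_pois)
        target = trajectory[t].unsqueeze(0)    # shape (1,)
        losses.append(F.cross_entropy(logits, target))
    return torch.stack(losses).mean().item()

def infer_membership(poi_model, trajectory, threshold=2.0):
    """Simple threshold rule: lower average loss -> predicted member."""
    return membership_score(poi_model, trajectory) < threshold
```

In practice the threshold would be calibrated, for example on shadow models or on trajectories known to be non-members, rather than fixed by hand.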
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged, aiming to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge of re-identification attacks enabled by Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- Privacy-Preserving Debiasing using Data Augmentation and Machine Unlearning [3.049887057143419]
Data augmentation exposes machine learning models to privacy attacks, such as membership inference attacks.
We propose an effective combination of data augmentation and machine unlearning, which can reduce data bias while providing a provable defense against known attacks.
arXiv Detail & Related papers (2024-04-19T21:54:20Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Where you go is who you are -- A study on machine learning based semantic privacy attacks [3.259843027596329]
We present a systematic analysis of two attack scenarios, namely location categorization and user profiling.
Experiments on the Foursquare dataset and tracking data demonstrate the potential for abuse of high-quality spatial information.
Our findings point out the risks of ever-growing databases of tracking data and spatial context data.
arXiv Detail & Related papers (2023-10-26T17:56:50Z)
- RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense [9.806681555309519]
Federated learning (FL) allows clients to collaboratively train a model without sharing their private data.
Recent studies have shown that private information can still be leaked through shared gradients.
We propose a user-configurable privacy defense, RecUP-FL, that can better focus on the user-specified sensitive attributes.
arXiv Detail & Related papers (2023-04-11T10:59:45Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model (a density-ratio sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
- Privacy-Aware Adversarial Network in Human Mobility Prediction [11.387235721659378]
User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications.
We propose an LSTM-based adversarial representation learning approach to attain a privacy-preserving feature representation of the original geolocated data.
We show that the privacy of mobility traces attains decent protection at the cost of marginal mobility utility.
arXiv Detail & Related papers (2022-08-09T19:23:13Z)
- Survey: Leakage and Privacy at Inference Time [59.957056214792665]
Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance.
We focus on inference-time leakage, as the most likely scenario for publicly available models.
We propose a taxonomy covering involuntary and malevolent leakage and available defences, followed by the currently available assessment metrics and applications.
arXiv Detail & Related papers (2021-07-04T12:59:16Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
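To illustrate the density-based idea behind DOMIAS summarized above, here is a minimal sketch: estimate the density of the synthetic data and of an attacker-held reference dataset (Gaussian KDE is used here purely for simplicity) and score each query record by their ratio; a high ratio signals local overfitting of the generator around that record and hence likely membership. The KDE choice, toy data, and threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def domias_style_scores(synthetic, reference, queries, eps=1e-12):
    """Density-ratio membership scores in the spirit of DOMIAS.

    synthetic: (n_syn, d) samples drawn from the generative model
    reference: (n_ref, d) samples approximating the underlying data distribution
    queries:   (n_q, d) records whose training membership is being tested

    Returns p_syn(x) / p_ref(x) for each query; larger values indicate
    local overfitting of the generator around x, i.e. likely membership.
    """
    p_syn = gaussian_kde(synthetic.T)   # gaussian_kde expects shape (d, n)
    p_ref = gaussian_kde(reference.T)
    return p_syn(queries.T) / (p_ref(queries.T) + eps)

# Toy usage with made-up 2-D data and an illustrative threshold of 1.0:
rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, size=(2000, 2))
reference = rng.normal(0.0, 1.2, size=(2000, 2))
queries = rng.normal(0.0, 1.0, size=(10, 2))
is_member = domias_style_scores(synthetic, reference, queries) > 1.0
```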