Practical Privacy Preserving POI Recommendation
- URL: http://arxiv.org/abs/2003.02834v2
- Date: Mon, 27 Apr 2020 06:11:26 GMT
- Title: Practical Privacy Preserving POI Recommendation
- Authors: Chaochao Chen, Jun Zhou, Bingzhe Wu, Wenjin Fang, Li Wang, Yuan Qi,
Xiaolin Zheng
- Abstract summary: We propose a novel Privacy preserving POI Recommendation (PriRec) framework.
PriRec keeps users' private raw data and models in users' own hands, and protects user privacy to a large extent.
We apply PriRec to real-world datasets, and comprehensive experiments demonstrate that, compared with FM, PriRec achieves comparable or even better recommendation accuracy.
- Score: 26.096197310800328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point-of-Interest (POI) recommendation has been extensively studied
and successfully applied in industry in recent years. However, most existing
approaches build centralized models by collecting users' data. Both private
data and models are held by the recommender, which causes serious privacy
concerns. In this paper, we propose a novel Privacy preserving POI
Recommendation (PriRec) framework. First, to protect data privacy, users'
private data (features and actions) are kept on their own side, e.g., on a
cellphone or tablet. Meanwhile, public data that needs to be accessed by all
users is kept by the recommender to reduce the storage costs of users' devices.
This public data includes: (1) static data related only to the status of a POI,
such as POI categories, and (2) dynamic data that depends on user-POI actions,
such as visit counts. The dynamic data can be sensitive, so we develop local
differential privacy techniques to release such data to the public with privacy
guarantees.
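As an illustration of how such a noisy release of visit counts might look (a minimal sketch only; the paper's exact local differential privacy mechanism, noise distribution, and parameters are not given in this abstract, so the Laplace mechanism and all names below are assumptions):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def release_count(true_count, epsilon, sensitivity=1.0):
    """Release a POI visit count with Laplace noise calibrated to epsilon.

    In a local-DP setting, each device would perturb its own contribution
    like this before reporting, so the recommender only ever sees noisy values.
    """
    noisy = true_count + laplace_noise(sensitivity / epsilon)
    return max(0, round(noisy))  # clamp to a valid, non-negative count
```

A larger epsilon injects less noise (weaker privacy, better utility); a smaller epsilon does the opposite.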
Second, PriRec follows the representation of a Factorization Machine (FM),
which consists of a linear model and a feature interaction model. To protect
model privacy, the linear model is kept on the users' side, and we propose a
secure decentralized gradient descent protocol for users to learn it
collaboratively. The feature interaction model is kept by the recommender
since it poses no privacy risk, and we adopt a secure aggregation strategy in
the federated learning paradigm to learn it. As a result, PriRec keeps users'
private raw
data and models in users' own hands, and protects user privacy to a large
extent. We apply PriRec to real-world datasets, and comprehensive experiments
demonstrate that, compared with FM, PriRec achieves comparable or even better
recommendation accuracy.
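The FM split described above, a linear part on the user's side and a pairwise feature-interaction part held by the recommender, can be sketched as follows. This is a standard degree-2 FM scorer with the usual O(nk) reformulation of the interaction term; the variable names and the exact split annotation are illustrative assumptions, not taken from the paper:

```python
def fm_score(x, w0, w, V):
    """Degree-2 Factorization Machine prediction.

    In PriRec's split (per the abstract), w0 and w (the linear model)
    would live on the user's device, while V (the feature interaction
    model, one k-dim factor vector per feature) would be held by the
    recommender.
    """
    n = len(x)
    k = len(V[0])
    linear = w0 + sum(w[i] * x[i] for i in range(n))
    # Pairwise interactions sum_{i<j} <V_i, V_j> x_i x_j, computed as
    # 0.5 * sum_f [ (sum_i V[i][f] x_i)^2 - sum_i (V[i][f] x_i)^2 ]
    interaction = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(n))
        s_sq = sum((V[i][f] * x[i]) ** 2 for i in range(n))
        interaction += 0.5 * (s * s - s_sq)
    return linear + interaction
```

Keeping `w0`/`w` local means a user's linear preferences never leave the device, while `V` only captures feature co-occurrence structure, which is why the abstract treats it as safe to hold centrally.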
Related papers
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - Differentially Private Model-Based Offline Reinforcement Learning [51.1231068185106]
We introduce DP-MORL, an algorithm coming with differential privacy guarantees.
A private model of the environment is first learned from offline data.
We then use model-based policy optimization to derive a policy from the private model.
arXiv Detail & Related papers (2024-02-08T10:05:11Z) - User Consented Federated Recommender System Against Personalized
Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
arXiv Detail & Related papers (2023-12-23T09:44:57Z) - A Cautionary Tale: On the Role of Reference Data in Empirical Privacy
Defenses [12.34501903200183]
We propose a baseline defense that makes the utility-privacy tradeoff with respect to both training and reference data easy to understand.
Our experiments show that, surprisingly, it outperforms the most well-studied and current state-of-the-art empirical privacy defenses.
arXiv Detail & Related papers (2023-10-18T17:07:07Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - Privacy Implications of Retrieval-Based Language Models [26.87950501433784]
We present the first study of privacy risks in retrieval-based LMs, particularly $k$NN-LMs.
We find that $k$NN-LMs are more susceptible to leaking private information from their private datastore than parametric models.
arXiv Detail & Related papers (2023-05-24T08:37:27Z) - Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z) - Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
arXiv Detail & Related papers (2022-06-07T15:43:45Z) - Just Fine-tune Twice: Selective Differential Privacy for Large Language
Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z) - Personalized PATE: Differential Privacy for Machine Learning with
Individual Privacy Guarantees [1.2691047660244335]
We propose three novel methods to support training an ML model with different personalized privacy guarantees within the training data.
Our experiments show that our personalized privacy methods yield higher accuracy models than the non-personalized baseline.
arXiv Detail & Related papers (2022-02-21T20:16:27Z) - Federating Recommendations Using Differentially Private Prototypes [16.29544153550663]
We propose a new federated approach to learning global and local private models for recommendation without collecting raw data.
By requiring only two rounds of communication, we both reduce communication costs and avoid excessive privacy loss.
We show local adaptation of the global model allows our method to outperform centralized matrix-factorization-based recommender system models.
arXiv Detail & Related papers (2020-03-01T22:21:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.