Practical Privacy Preserving POI Recommendation
- URL: http://arxiv.org/abs/2003.02834v2
- Date: Mon, 27 Apr 2020 06:11:26 GMT
- Title: Practical Privacy Preserving POI Recommendation
- Authors: Chaochao Chen, Jun Zhou, Bingzhe Wu, Wenjin Fang, Li Wang, Yuan Qi,
Xiaolin Zheng
- Abstract summary: We propose a novel Privacy preserving POI Recommendation (PriRec) framework.
PriRec keeps users' private raw data and models in users' own hands, and protects user privacy to a large extent.
We apply PriRec to real-world datasets, and comprehensive experiments demonstrate that, compared with FM, PriRec achieves comparable or even better recommendation accuracy.
- Score: 26.096197310800328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point-of-Interest (POI) recommendation has been extensively studied and
successfully applied in industry recently. However, most existing approaches
build centralized models on the basis of collecting users' data. Both private
data and models are held by the recommender, which causes serious privacy
concerns. In this paper, we propose a novel Privacy preserving POI
Recommendation (PriRec) framework. First, to protect data privacy, users'
private data (features and actions) are kept on their own devices, e.g., a
cellphone or tablet. Meanwhile, the public data that need to be accessed by all
users are kept by the recommender to reduce the storage cost on users' devices.
These public data include: (1) static data related only to the status of a POI,
such as POI categories, and (2) dynamic data that depend on user-POI actions,
such as visit counts. The dynamic data can be sensitive, so we develop local
differential privacy techniques to release them publicly with privacy
guarantees.
Second, PriRec follows the representation of Factorization Machine (FM), which
consists of a linear model and a feature interaction model. To protect model
privacy, the linear model is kept on the users' side, and we propose a secure
decentralized gradient descent protocol for users to learn it collaboratively.
The feature interaction model is kept by the recommender since it poses no
privacy risk, and we adopt a secure aggregation strategy in the federated
learning paradigm to learn it. In this way, PriRec keeps users' private raw
data and models in users' own hands and protects user privacy to a large
extent. We apply PriRec to real-world datasets, and comprehensive experiments
demonstrate that, compared with FM, PriRec achieves comparable or even better
recommendation accuracy.
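The abstract splits an FM model into a user-held linear part and a recommender-held feature interaction part, and releases sensitive dynamic statistics (such as visit counts) under local differential privacy. The sketch below illustrates both ideas in plain NumPy; it is not the paper's implementation, and the Laplace-mechanism release, the variable names, and the epsilon value are assumptions made for illustration. The secure decentralized gradient descent protocol is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ldp_release(count, epsilon=1.0, sensitivity=1.0):
    """Release a POI visit count via the Laplace mechanism (a stand-in
    for the paper's local differential privacy step): the user perturbs
    the sensitive statistic locally before sending it to the recommender."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return count + noise

def fm_predict(x, w0, w, V):
    """Standard FM prediction split into the two parts PriRec assigns to
    different parties: the linear part (user side) and the pairwise
    feature-interaction part (recommender side)."""
    linear = w0 + x @ w                           # held and computed on the user's device
    xv = x @ V                                    # V: recommender-held factor matrix
    interaction = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (V ** 2))
    return linear + interaction

# Toy usage: 5 features, 3 latent factors.
x = rng.random(5)                                 # user's private feature vector
w0, w = 0.1, rng.normal(size=5)                   # linear model held by the user
V = rng.normal(scale=0.1, size=(5, 3))            # interaction model held by the recommender
print(fm_predict(x, w0, w, V), ldp_release(42, epsilon=0.5))
```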
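For the feature interaction model, the abstract mentions a secure aggregation strategy from the federated learning paradigm. The following toy sketch (an assumption, not PriRec's actual protocol) shows the core pairwise-masking idea: each pair of users adds and subtracts a shared random mask, so the recommender recovers only the sum of updates, never any individual update. The seed-based mask derivation stands in for a real key-agreement step.

```python
import numpy as np

rng = np.random.default_rng(1)

def masked_updates(updates, seed=7):
    """Pairwise additive masking: users i < j agree on a shared random
    vector; i adds it and j subtracts it, so the masks cancel in the sum."""
    n, d = updates.shape
    masked = updates.astype(float).copy()
    for i in range(n):
        for j in range(i + 1, n):
            pair_rng = np.random.default_rng(seed * 1000 + i * n + j)  # stand-in for a shared secret
            mask = pair_rng.normal(size=d)
            masked[i] += mask
            masked[j] -= mask
    return masked

# Toy check: the server's sum of masked updates equals the true sum,
# while no single masked update reveals the underlying gradient.
updates = rng.normal(size=(4, 6))                 # per-user gradients of the interaction model
masked = masked_updates(updates)
assert np.allclose(masked.sum(axis=0), updates.sum(axis=0))
print(masked.sum(axis=0))
```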
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - FT-PrivacyScore: Personalized Privacy Scoring Service for Machine Learning Participation [4.772368796656325]
In practice, controlled data access remains a mainstream method for protecting data privacy in many industrial and research environments.
We developed the demo prototype FT-PrivacyScore to show that it's possible to efficiently and quantitatively estimate the privacy risk of participating in a model fine-tuning task.
arXiv Detail & Related papers (2024-10-30T02:41:26Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - User Consented Federated Recommender System Against Personalized
Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
arXiv Detail & Related papers (2023-12-23T09:44:57Z) - A Cautionary Tale: On the Role of Reference Data in Empirical Privacy
Defenses [12.34501903200183]
We propose a baseline defense that enables the utility-privacy tradeoff with respect to both training and reference data to be easily understood.
Our experiments show that, surprisingly, it outperforms the most well-studied and current state-of-the-art empirical privacy defenses.
arXiv Detail & Related papers (2023-10-18T17:07:07Z) - PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z) - Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z) - Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
arXiv Detail & Related papers (2022-06-07T15:43:45Z) - Just Fine-tune Twice: Selective Differential Privacy for Large Language
Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z) - Personalized PATE: Differential Privacy for Machine Learning with
Individual Privacy Guarantees [1.2691047660244335]
We propose three novel methods to support training an ML model with different personalized privacy guarantees within the training data.
Our experiments show that our personalized privacy methods yield higher accuracy models than the non-personalized baseline.
arXiv Detail & Related papers (2022-02-21T20:16:27Z) - Federating Recommendations Using Differentially Private Prototypes [16.29544153550663]
We propose a new federated approach to learning global and local private models for recommendation without collecting raw data.
By requiring only two rounds of communication, we both reduce communication costs and avoid excessive privacy loss.
We show that local adaptation of the global model allows our method to outperform centralized matrix-factorization-based recommender system models.
arXiv Detail & Related papers (2020-03-01T22:21:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.