Private and Utility Enhanced Recommendations with Local Differential
Privacy and Gaussian Mixture Model
- URL: http://arxiv.org/abs/2102.13453v1
- Date: Fri, 26 Feb 2021 13:15:23 GMT
- Title: Private and Utility Enhanced Recommendations with Local Differential
Privacy and Gaussian Mixture Model
- Authors: Jeyamohan Neera, Xiaomin Chen, Nauman Aslam, Kezhi Wang and Zhan Shu
- Abstract summary: Local differential privacy (LDP) based perturbation mechanisms add noise to users' data on the user side before sending it to the Service Provider (SP).
Although LDP protects users' privacy from the SP, it causes a substantial decline in predictive accuracy.
Our proposed LDP-based recommendation system improves recommendation accuracy without violating LDP principles.
- Score: 14.213973630742666
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recommendation systems rely heavily on users' behavioural and preferential
data (e.g. ratings, likes) to produce accurate recommendations. However, users
experience privacy concerns due to unethical data aggregation and analytical
practices carried out by Service Providers (SPs). Local differential privacy
(LDP) based perturbation mechanisms add noise to users' data on the user side
before sending it to the SP. The SP then uses the perturbed data to perform
recommendations. Although LDP protects the privacy of users from the SP, it
causes a substantial decline in predictive accuracy. To address this issue, we
propose an LDP-based Matrix Factorization (MF) with a Gaussian Mixture Model
(MoG). The LDP perturbation mechanism, Bounded Laplace (BLP), regulates the
effect of noise by confining the perturbed ratings to a predetermined domain.
We derive a sufficient condition on the scale parameter for BLP to satisfy
$\epsilon$-LDP. At the SP, the MoG model estimates the noise added to the
perturbed ratings and the MF algorithm predicts the missing ratings. Our
proposed LDP-based recommendation system improves recommendation accuracy
without violating LDP principles. Empirical evaluations carried out on three
real-world datasets, i.e., Movielens, Libimseti and Jester, demonstrate that
our method offers a substantial increase in predictive accuracy under a strong
privacy guarantee.
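The Bounded Laplace mechanism described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a 1-5 rating domain and using rejection sampling as one way to realize a bounded Laplace draw; the paper's sufficient condition on the scale parameter b for $\epsilon$-LDP is not derived here and must be applied separately.

```python
import random

def bounded_laplace(x, b, lo, hi, rng=None):
    """Perturb a rating x with Laplace(scale=b) noise, resampling until the
    result falls in [lo, hi] -- a rejection-sampling form of BLP."""
    rng = rng or random.Random()
    while True:
        # Laplace(0, b) noise as the scaled difference of two Exp(1) draws
        noise = b * (rng.expovariate(1.0) - rng.expovariate(1.0))
        y = x + noise
        if lo <= y <= hi:  # keep only outputs inside the rating domain
            return y

# Example: perturb a rating on a 1-5 scale before sending it to the SP
print(bounded_laplace(4.0, b=1.0, lo=1.0, hi=5.0))
```

Confining the output to the rating domain is what distinguishes BLP from a plain Laplace mechanism: out-of-range ratings would both leak that heavy noise was added and degrade the SP-side noise estimation.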
Related papers
- Uncertainty-Penalized Direct Preference Optimization [52.387088396044206]
We develop a pessimistic framework for DPO by introducing preference uncertainty penalization schemes.
The penalization serves as a correction to the loss which attenuates the loss gradient for uncertain samples.
We show improved overall performance compared to vanilla DPO, as well as better completions on prompts from high-uncertainty chosen/rejected responses.
arXiv Detail & Related papers (2024-10-26T14:24:37Z)
- Sketches-based join size estimation under local differential privacy [3.0945730947183203]
Join size estimation on sensitive data poses a risk of privacy leakage.
Local differential privacy (LDP) is a solution to preserve privacy while collecting sensitive data.
We introduce a novel algorithm called LDPJoinSketch for sketch-based join size estimation under LDP.
arXiv Detail & Related papers (2024-05-19T01:21:54Z)
- Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy [7.264378254137811]
Differential privacy (DP) can measure privacy loss by observing the changes in the distribution caused by the inclusion of individuals in the target dataset.
DP has been prominent in safeguarding datasets in machine learning in industry giants like Apple and Google.
We propose per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances.
arXiv Detail & Related papers (2024-04-24T06:51:16Z)
- Towards the Flatter Landscape and Better Generalization in Federated Learning under Client-level Differential Privacy [67.33715954653098]
We propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP.
Specifically, DP-FedSAM integrates Sharpness-Aware Minimization (SAM) to generate locally flat models with stability and weight-perturbation robustness.
To further reduce the magnitude of random noise while achieving better performance, we propose DP-FedSAM-$top_k$ by adopting a local update sparsification technique.
arXiv Detail & Related papers (2023-05-01T15:19:09Z)
- OpBoost: A Vertical Federated Tree Boosting Framework Based on Order-Preserving Desensitization [26.386265547513887]
Vertical Federated Learning (FL) is a new paradigm that enables users with non-overlapping attributes of the same data samples to jointly train a model without sharing the raw data.
Recent works show that this is still not sufficient to prevent privacy leakage from the training process or the trained model.
This paper focuses on studying the privacy-preserving tree boosting algorithms under the vertical FL.
arXiv Detail & Related papers (2022-10-04T02:21:18Z)
- Locally Differentially Private Bayesian Inference [23.882144188177275]
Local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in several scenarios when the aggregator is not trustworthy.
We provide a noise-aware probabilistic modeling framework, which allows Bayesian inference to take into account the noise added for privacy under LDP.
arXiv Detail & Related papers (2021-10-27T13:36:43Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Control Variates for Slate Off-Policy Evaluation [112.35528337130118]
We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions.
We obtain new estimators with risk improvement guarantees over both the PI and self-normalized PI estimators.
arXiv Detail & Related papers (2021-06-15T06:59:53Z)
- Gaussian Processes with Differential Privacy [3.934224774675743]
We add strong privacy protection to Gaussian processes (GPs) via differential privacy (DP).
We achieve this by using sparse GP methodology and publishing a private variational approximation on known inducing points.
Our experiments demonstrate that, given a sufficient amount of data, the method can produce accurate models under strong privacy protection.
arXiv Detail & Related papers (2021-06-01T13:23:16Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied on sensitive or private training examples, such as medical or financial records, it is still probable to divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.