Private and Utility Enhanced Recommendations with Local Differential
Privacy and Gaussian Mixture Model
- URL: http://arxiv.org/abs/2102.13453v1
- Date: Fri, 26 Feb 2021 13:15:23 GMT
- Title: Private and Utility Enhanced Recommendations with Local Differential
Privacy and Gaussian Mixture Model
- Authors: Jeyamohan Neera, Xiaomin Chen, Nauman Aslam, Kezhi Wang and Zhan Shu
- Abstract summary: Local differential privacy (LDP) based perturbation mechanisms add noise to users' data on the user side before sending it to the Service Provider (SP).
Although LDP protects the privacy of users from the SP, it causes a substantial decline in predictive accuracy.
Our proposed LDP-based recommendation system improves the recommendation accuracy without violating LDP principles.
- Score: 14.213973630742666
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recommendation systems rely heavily on users' behavioural and preferential
data (e.g. ratings, likes) to produce accurate recommendations. However, users
experience privacy concerns due to unethical data aggregation and analytical
practices carried out by the Service Providers (SP). Local differential privacy
(LDP) based perturbation mechanisms add noise to users' data on the user side before
sending it to the SP. The SP then uses the perturbed data to perform
recommendations. Although LDP protects the privacy of users from the SP, it causes
a substantial decline in predictive accuracy. To address this issue, we propose
an LDP-based Matrix Factorization (MF) with a Gaussian Mixture Model (MoG). The
LDP perturbation mechanism, Bounded Laplace (BLP), regulates the effect of
noise by confining the perturbed ratings to a predetermined domain. We derive a
sufficient condition on the scale parameter for BLP to satisfy $\epsilon$-LDP.
At the SP, the MoG model estimates the noise added to the perturbed ratings and the
MF algorithm predicts the missing ratings. Our proposed LDP-based recommendation
system improves the recommendation accuracy without violating LDP principles.
The empirical evaluations carried out on three real-world datasets, i.e.,
Movielens, Libimseti and Jester, demonstrate that our method offers a
substantial increase in predictive accuracy under a strong privacy guarantee.
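For intuition, here is a minimal Python sketch of a rejection-sampling Bounded Laplace mechanism; the function and example parameters are illustrative, and the paper's sufficient condition on the scale parameter $b$ for $\epsilon$-LDP is not reproduced here.

```python
import numpy as np

def bounded_laplace(rating, b, lo, hi, rng=None):
    """Perturb `rating` with Laplace(0, b) noise, resampling until the
    result lies in [lo, hi], so the SP never receives an out-of-domain
    rating (rejection-sampling BLP)."""
    rng = rng or np.random.default_rng()
    while True:
        perturbed = rating + rng.laplace(scale=b)
        if lo <= perturbed <= hi:
            return perturbed

# User-side perturbation of a 1-5 star rating; b = 1.2 is illustrative
# only and must be chosen to satisfy the paper's epsilon-LDP condition.
print(bounded_laplace(4.0, b=1.2, lo=1.0, hi=5.0))
```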
Related papers
- Data value estimation on private gradients [84.966853523107]
For gradient-based machine learning (ML) methods, the de facto differential privacy technique is perturbing the gradients with random noise.
Data valuation attributes the ML performance to the training data and is widely used in privacy-aware applications that require enforcing DP.
We show that the answer is no for the default approach of injecting i.i.d. random noise into the gradients, because the uncertainty of the resulting data value estimates paradoxically scales linearly with the estimation budget.
We propose to instead inject carefully correlated noise to provably remove the linear scaling of estimation uncertainty w.r.t. the budget.
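A toy numpy contrast (not the paper's provable construction): when a data value is a difference of two noisy utility queries, i.i.d. noise never cancels, while shared (perfectly correlated) noise cancels exactly in the difference.

```python
import numpy as np

rng = np.random.default_rng(0)
u_i, u_j = 1.0, 0.8        # hypothetical per-point utilities
T, sigma = 1000, 0.5       # estimation budget and DP noise scale

# i.i.d. noise: every utility query gets fresh noise, so the noise in
# the difference u_i - u_j persists across the whole budget.
diff_iid = ((u_i + sigma * rng.normal(size=T))
            - (u_j + sigma * rng.normal(size=T))).mean()

# Correlated noise: reuse one draw per round for both queries; it
# cancels exactly in the difference defining the relative data value.
shared = sigma * rng.normal(size=T)
diff_corr = ((u_i + shared) - (u_j + shared)).mean()

print(diff_iid, diff_corr)  # diff_corr recovers 0.2 exactly
```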
arXiv Detail & Related papers (2024-12-22T13:15:51Z)
- $(\epsilon, \delta)$-Differentially Private Partial Least Squares Regression [1.8666451604540077]
We propose an $(\epsilon, \delta)$-differentially private PLS (edPLS) algorithm to ensure the privacy of the data underlying the model.
Experimental results demonstrate that edPLS effectively renders privacy attacks aimed at recovering unique sources of variability in the training data ineffective.
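edPLS itself is not specified in this summary; the sketch below only illustrates a generic Gaussian-mechanism release of the first PLS weight vector $w \propto X^\top y$, assuming a known L2 sensitivity and the classic calibration (valid for $\epsilon < 1$).

```python
import numpy as np

def private_pls_weights(X, y, eps, delta, sensitivity, rng=None):
    """Release the first PLS weight vector under the Gaussian mechanism.
    `sensitivity` is an assumed bound on how much X.T @ y can change in
    L2 norm when one record changes."""
    rng = rng or np.random.default_rng()
    w = X.T @ y
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / eps
    w_priv = w + rng.normal(scale=sigma, size=w.shape)
    return w_priv / np.linalg.norm(w_priv)
```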
arXiv Detail & Related papers (2024-12-12T10:49:55Z)
- Sketches-based join size estimation under local differential privacy [3.0945730947183203]
Join size estimation on sensitive data poses a risk of privacy leakage.
Local differential privacy (LDP) is a solution to preserve privacy while collecting sensitive data.
We introduce a novel algorithm called LDPJoinSketch for sketch-based join size estimation under LDP.
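LDPJoinSketch's construction is not given here; for reference, this is the naive generalized-randomized-response baseline such sketch methods improve on: estimate per-key frequencies for each table under LDP, then take an inner product over the key domain.

```python
import numpy as np

def grr_perturb(v, domain, eps, rng):
    """Generalized randomized response: keep v w.p. p, else report a
    uniformly random *other* value from the domain."""
    k = len(domain)
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p:
        return v
    others = [u for u in domain if u != v]
    return others[rng.integers(len(others))]

def grr_counts(reports, domain, eps):
    """Unbiased per-value count estimates from GRR reports."""
    k, n = len(domain), len(reports)
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    raw = np.array([sum(r == v for r in reports) for v in domain])
    return (raw - n * q) / (p - q)

def join_size(reports_a, reports_b, domain, eps):
    """Estimated join size on the key: inner product of count estimates."""
    return float(grr_counts(reports_a, domain, eps)
                 @ grr_counts(reports_b, domain, eps))
```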
arXiv Detail & Related papers (2024-05-19T01:21:54Z)
- Bayesian Frequency Estimation Under Local Differential Privacy With an Adaptive Randomized Response Mechanism [0.4604003661048266]
We propose AdOBEst-LDP, a new algorithm for adaptive, online Bayesian estimation of categorical distributions under local differential privacy.
By adapting the subset selection process to the past privatized data via Bayesian estimation, the algorithm improves the utility of future privatized data.
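AdOBEst-LDP's adaptive subset selection is beyond this summary; the sketch below shows only the noise-aware estimation half of the idea, a Dirichlet-MAP EM that inverts a static GRR channel (all names hypothetical).

```python
import numpy as np

def grr_channel(k, eps):
    """M[r, v] = probability of report r given true category v."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    M = np.full((k, k), (1 - p) / (k - 1))
    np.fill_diagonal(M, p)
    return M

def map_frequencies(reports, k, eps, alpha=2.0, iters=200):
    """EM for the Dirichlet(alpha)-MAP estimate of category probabilities,
    treating the GRR channel as a known misreporting matrix.
    `reports` must be integer categories in 0..k-1."""
    M = grr_channel(k, eps)
    counts = np.bincount(reports, minlength=k)
    theta = np.full(k, 1.0 / k)
    for _ in range(iters):
        resp = M * theta                         # resp[r, v] ~ P(r|v) * theta_v
        resp /= resp.sum(axis=1, keepdims=True)  # E-step: posterior per report
        exp_counts = counts @ resp               # expected true-value counts
        theta = (exp_counts + alpha - 1) / (counts.sum() + k * (alpha - 1))
    return theta
```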
arXiv Detail & Related papers (2024-05-11T13:59:52Z)
- Towards the Flatter Landscape and Better Generalization in Federated Learning under Client-level Differential Privacy [67.33715954653098]
We propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP.
Specifically, DP-FedSAM integrates Sharpness-Aware Minimization (SAM) to generate locally flat models with better stability and weight robustness.
To further reduce the magnitude of the random noise while achieving better performance, we propose DP-FedSAM-$top_k$ by adopting a local update sparsification technique.
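A heavily simplified numpy sketch of the ingredients named above (SAM ascent, update clipping, Gaussian noise, top-k sparsification); `grad_fn` is a hypothetical loss-gradient callable, and the paper's exact ordering, privacy accounting, and server aggregation are omitted.

```python
import numpy as np

def sam_gradient(w, grad_fn, rho):
    """Sharpness-Aware Minimization: evaluate the gradient at a point
    perturbed toward the locally sharpest direction."""
    g = grad_fn(w)
    return grad_fn(w + rho * g / (np.linalg.norm(g) + 1e-12))

def dp_local_update(w, grad_fn, rho, lr, clip, sigma, k, rng):
    """One client's privatized, sparsified model update."""
    delta = -lr * sam_gradient(w, grad_fn, rho)
    delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))   # L2 clip
    delta += rng.normal(scale=sigma * clip, size=delta.shape)   # DP noise
    keep = np.argsort(np.abs(delta))[-k:]                       # top-k indices
    sparse = np.zeros_like(delta)
    sparse[keep] = delta[keep]
    return sparse
```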
arXiv Detail & Related papers (2023-05-01T15:19:09Z)
- Locally Differentially Private Bayesian Inference [23.882144188177275]
Local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in several scenarios when the aggregator is not trustworthy.
We provide a noise-aware probabilistic modeling framework, which allows Bayesian inference to take into account the noise added for privacy under LDP.
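A minimal sketch of the noise-aware idea in the simplest setting, assuming Warner's randomized response on a single private bit and a flat prior; the paper's framework is far more general.

```python
import numpy as np

def rr_bit(x, eps, rng):
    """Warner randomized response: report the true bit w.p. e^eps/(1+e^eps)."""
    keep = np.exp(eps) / (1 + np.exp(eps))
    return x if rng.random() < keep else 1 - x

def noise_aware_posterior(reports, eps):
    """Grid posterior over theta = P(x = 1) that models the RR channel
    explicitly, instead of treating privatized bits as raw data."""
    grid = np.linspace(1e-3, 1 - 1e-3, 999)
    keep = np.exp(eps) / (1 + np.exp(eps))
    p1 = keep * grid + (1 - keep) * (1 - grid)   # P(report = 1 | theta)
    ones, n = sum(reports), len(reports)
    log_post = ones * np.log(p1) + (n - ones) * np.log(1 - p1)
    post = np.exp(log_post - log_post.max())
    return grid, post / post.sum()
```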
arXiv Detail & Related papers (2021-10-27T13:36:43Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Control Variates for Slate Off-Policy Evaluation [112.35528337130118]
We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions.
We obtain new estimators with risk improvement guarantees over both the pseudoinverse (PI) and self-normalized PI estimators.
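The slate-specific PI estimators cannot be reconstructed from this summary; the sketch below shows only the underlying control-variate trick in the ordinary bandit case, using the mean importance weight (whose expectation is 1) as the control variate.

```python
import numpy as np

def cv_ips(rewards, weights):
    """IPS value estimate minus the variance-minimizing multiple of the
    centred mean importance weight."""
    wr = weights * rewards
    C = np.cov(wr, weights)        # 2x2 sample covariance matrix
    beta = C[0, 1] / C[1, 1]
    return wr.mean() - beta * (weights.mean() - 1.0)
```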
arXiv Detail & Related papers (2021-06-15T06:59:53Z)
- Gaussian Processes with Differential Privacy [3.934224774675743]
We add strong privacy protection to Gaussian processes (GPs) via differential privacy (DP).
We achieve this by using sparse GP methodology and publishing a private variational approximation on known inducing points.
Our experiments demonstrate that, given a sufficient amount of data, the method can produce accurate models under strong privacy protection.
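As a rough illustration only (not the paper's calibration): a Titsias-style sparse-GP mean over inducing points, released with Gaussian-mechanism noise under an assumed sensitivity bound.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """RBF kernel matrix between 1-D input arrays a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def private_sparse_gp_mean(X, y, Z, noise_var, eps, delta, sens, rng):
    """Variational mean at inducing points Z, plus Gaussian-mechanism
    noise; `sens` is an assumed L2 sensitivity of the mean vector."""
    Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
    Kzx = rbf(Z, X)
    Sigma = Kzz + Kzx @ Kzx.T / noise_var
    m = Kzz @ np.linalg.solve(Sigma, Kzx @ y) / noise_var
    sigma_dp = np.sqrt(2 * np.log(1.25 / delta)) * sens / eps
    return m + rng.normal(scale=sigma_dp, size=m.shape)
```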
arXiv Detail & Related papers (2021-06-01T13:23:16Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial networks (GANs) have attracted increasing attention recently owing to their impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
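The core operation is easy to sketch (the paper's Rényi-DP accounting and any adaptive noise schedule are omitted; names are illustrative): clip the loss value to bound its sensitivity, then perturb it.

```python
import numpy as np

def perturb_loss(loss_value, clip, sigma, rng=None):
    """Clip the discriminator loss to [-clip, clip] to bound sensitivity,
    then add Gaussian noise before it drives the training update."""
    rng = rng or np.random.default_rng()
    return np.clip(loss_value, -clip, clip) + rng.normal(scale=sigma * clip)
```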
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
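The Laplacian smoothing step from this line of work applies $(I - \sigma_s \Delta)^{-1}$ to the privatized gradient, which damps the injected high-frequency noise; a minimal FFT sketch assuming the 1-D circulant discrete Laplacian:

```python
import numpy as np

def laplacian_smooth(g, sigma_s=1.0):
    """Solve (I - sigma_s * Laplacian) x = g via FFT, exploiting the
    circulant structure of the 1-D discrete Laplacian (needs len(g) >= 3)."""
    v = np.zeros(len(g))
    v[0], v[1], v[-1] = 1 + 2 * sigma_s, -sigma_s, -sigma_s
    return np.real(np.fft.ifft(np.fft.fft(g) / np.fft.fft(v)))
```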
arXiv Detail & Related papers (2020-05-01T04:28:38Z)