Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach
- URL: http://arxiv.org/abs/2309.11515v2
- Date: Tue, 30 Jan 2024 03:03:39 GMT
- Title: Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach
- Authors: Wentao Hu, Hui Fang
- Abstract summary: Differential privacy has been widely adopted to preserve privacy in recommender systems.
Existing differentially private recommender systems only consider static and independent interactions.
We propose a novel DIfferentially Private Sequential recommendation framework with a noisy Graph Neural Network approach.
- Score: 2.4743508801114444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing frequency of high-profile privacy breaches on various online platforms, users are becoming more concerned about their privacy. The recommender system is the core component of online platforms for providing personalized service; consequently, its privacy preservation has attracted great attention. As the gold standard of privacy protection, differential privacy has been widely adopted to preserve privacy in recommender systems. However, existing differentially private recommender systems only consider static and independent interactions, so they cannot be applied to sequential recommendation, where behaviors are dynamic and dependent. Meanwhile, little attention has been paid to the privacy risk of sensitive user features: most existing systems protect only user feedback. In this work, we propose a novel DIfferentially Private Sequential recommendation framework with a noisy Graph Neural Network approach (denoted as DIPSGNN) to address these limitations. To the best of our knowledge, we are the first to achieve differential privacy in sequential recommendation with dependent interactions. Specifically, in DIPSGNN, we first leverage the piecewise mechanism to protect sensitive user features. Then, we innovatively add calibrated noise into the aggregation step of the graph neural network based on the aggregation perturbation mechanism. This noisy graph neural network can protect sequentially dependent interactions and capture user preferences simultaneously. Extensive experiments demonstrate the superiority of our method over state-of-the-art differentially private recommender systems in terms of a better balance between privacy and accuracy.
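The two mechanisms named in the abstract can be illustrated with a short sketch. The Python code below is a minimal, hedged illustration rather than the authors' implementation: `piecewise_mechanism` is the standard piecewise mechanism for a bounded scalar feature, and `noisy_aggregate` shows aggregation perturbation in its generic form (clip each neighbor embedding, then add Gaussian noise to the summed messages). The function names, the clipping bound `clip`, and the noise multiplier `sigma` are illustrative assumptions.

```python
import numpy as np

def piecewise_mechanism(t: float, eps: float, rng=None) -> float:
    """Standard piecewise mechanism (Wang et al., 2019): perturb a scalar
    t in [-1, 1] under eps-local differential privacy. The output is an
    unbiased estimate of t supported on [-C, C]."""
    rng = rng or np.random.default_rng()
    C = (np.exp(eps / 2) + 1) / (np.exp(eps / 2) - 1)
    l = (C + 1) / 2 * t - (C - 1) / 2   # center piece [l, r] around t
    r = l + C - 1
    if rng.random() < np.exp(eps / 2) / (np.exp(eps / 2) + 1):
        return rng.uniform(l, r)        # report from the center piece
    # otherwise report uniformly from the two tails [-C, l) and (r, C]
    left_len, right_len = l + C, C - r
    if rng.random() < left_len / (left_len + right_len):
        return rng.uniform(-C, l)
    return rng.uniform(r, C)

def noisy_aggregate(H: np.ndarray, A: np.ndarray, clip: float,
                    sigma: float, rng=None) -> np.ndarray:
    """Generic aggregation perturbation: clip each neighbor embedding
    (row of H, shape n x d) to L2 norm `clip` so a single interaction
    (edge in A, shape n x n) has bounded influence, then add Gaussian
    noise with std sigma * clip to the summed messages."""
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    H_clipped = H * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    agg = A @ H_clipped                  # sum over neighbors per node
    return agg + rng.normal(0.0, sigma * clip, size=agg.shape)
```

Feeding each bounded user feature through `piecewise_mechanism` before training and routing every GNN layer through `noisy_aggregate` mirrors the two-step design described in the abstract.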
Related papers
- Privacy-Preserving Dynamic Assortment Selection [4.399892832075127]
This paper presents a novel framework for privacy-preserving dynamic assortment selection using the multinomial logit (MNL) bandits model.
Our approach integrates noise into user utility estimates to balance exploration and exploitation while ensuring robust privacy protection.
arXiv Detail & Related papers (2024-10-29T19:28:01Z)
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
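As a hedged sketch of the general idea of feature-specific privacy in the entry above (not the BCDP construction itself), one can perturb each coordinate with noise scaled to its own budget; the function name and budget-to-scale mapping are illustrative assumptions.

```python
import numpy as np

def feature_specific_noise(x: np.ndarray, eps_per_feature: np.ndarray,
                           sensitivity: float = 1.0, rng=None) -> np.ndarray:
    """Illustrative only: perturb each coordinate of x with Laplace noise
    scaled to that coordinate's own budget, so sensitive features (small
    eps) receive more noise than less sensitive ones (large eps)."""
    rng = rng or np.random.default_rng()
    scales = sensitivity / eps_per_feature   # per-coordinate noise scale
    return x + rng.laplace(0.0, scales, size=x.shape)
```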
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
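The selective application of DP described in the entry above can be sketched in a few lines: noise is added only where a sensitivity mask is set, leaving non-sensitive regions untouched. The mask semantics and noise scale below are assumptions, not the paper's actual mechanism.

```python
import numpy as np

def masked_dp_noise(x: np.ndarray, sensitive_mask: np.ndarray,
                    sigma: float, rng=None) -> np.ndarray:
    """Illustrative sketch: apply Gaussian noise only to the entries of x
    flagged as sensitive by the boolean mask; non-sensitive regions pass
    through unchanged."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=x.shape)
    return np.where(sensitive_mask, x + noise, x)
```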
- Hiding Your Awful Online Choices Made More Efficient and Secure: A New Privacy-Aware Recommender System [5.397825778465797]
This paper presents a novel privacy-aware recommender system that combines privacy-aware machine learning algorithms for practical scalability and efficiency with cryptographic primitives for solid privacy guarantees.
For the first time, our method makes it feasible to compute private recommendations for datasets containing 100 million entries, even on memory-constrained, low-power SoC (System on Chip) devices.
arXiv Detail & Related papers (2024-05-30T21:08:42Z)
- Preserving Node-level Privacy in Graph Neural Networks [8.823710998526705]
We propose a solution that addresses the issue of node-level privacy in Graph Neural Networks (GNNs).
Our protocol consists of two main components: 1) a sampling routine called HeterPoisson, which employs a specialized node sampling strategy and a series of tailored operations to generate a batch of sub-graphs with desired properties, and 2) a randomization routine that utilizes symmetric Laplace noise instead of the commonly used Gaussian noise.
Our protocol enables GNN learning with good performance, as demonstrated by experiments on five real-world datasets.
arXiv Detail & Related papers (2023-11-12T16:21:29Z)
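The substitution of symmetric Laplace noise for the commonly used Gaussian noise, mentioned in the entry above, can be sketched in a few lines; the calibration below (scale = sensitivity / eps, as in the classic Laplace mechanism) is a generic assumption, not the paper's exact randomization routine.

```python
import numpy as np

def laplace_randomize(grad_sum: np.ndarray, clip: float, eps: float,
                      rng=None) -> np.ndarray:
    """Generic sketch: release a sum of per-sample gradients, each
    pre-clipped to L1 norm `clip`, with symmetric Laplace noise of scale
    clip / eps (the L1 sensitivity of the sum is `clip`)."""
    rng = rng or np.random.default_rng()
    return grad_sum + rng.laplace(0.0, clip / eps, size=grad_sum.shape)
```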
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
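Link local differential privacy, as in the entry above, is commonly realized with randomized response on adjacency bits; the sketch below shows that generic building block plus a simple unbiased debiasing step. It omits Blink's separate degree budget and Bayesian denoising, and all names are assumptions.

```python
import numpy as np

def randomize_links(adj_row: np.ndarray, eps: float, rng=None) -> np.ndarray:
    """Randomized response per link: keep each adjacency bit with
    probability e^eps / (e^eps + 1), flip it otherwise (eps-LDP per link)."""
    rng = rng or np.random.default_rng()
    p_keep = np.exp(eps) / (np.exp(eps) + 1)
    flips = rng.random(adj_row.shape) >= p_keep
    return np.where(flips, 1 - adj_row, adj_row)

def debias_link_mean(noisy_mean: float, eps: float) -> float:
    """Unbiased estimate of the true link frequency from the noisy mean:
    E[noisy] = p*b + (1-p)*(1-b)  =>  b = (E - (1 - p)) / (2p - 1)."""
    p = np.exp(eps) / (np.exp(eps) + 1)
    return (noisy_mean - (1 - p)) / (2 * p - 1)
```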
- Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539]
Noise reduction mechanisms produce increasingly accurate answers.
Analysts only pay the privacy cost of the least noisy or most accurate answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
arXiv Detail & Related papers (2023-06-24T00:33:34Z)
- Privacy-Preserving Matrix Factorization for Recommendation Systems using Gaussian Mechanism [2.84279467589473]
We propose a privacy-preserving recommendation system based on the differential privacy framework and matrix factorization.
As differential privacy is a powerful and robust mathematical framework for designing privacy-preserving machine learning algorithms, it is possible to prevent adversaries from extracting sensitive user information.
arXiv Detail & Related papers (2023-04-11T13:50:39Z)
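A hedged sketch of combining the Gaussian mechanism with matrix factorization, as in the entry above: one gradient step on the item factors where the gradient is clipped and perturbed. The update rule, clipping bound, and noise multiplier are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def dp_mf_item_step(P: np.ndarray, Q: np.ndarray, R: np.ndarray,
                    lr: float, clip: float, sigma: float, rng=None):
    """One illustrative DP step for matrix factorization R ~ P @ Q.T
    (NaN marks unobserved ratings): clip the item-factor gradient and
    add Gaussian noise so any single user's feedback has bounded
    influence on the released factors."""
    rng = rng or np.random.default_rng()
    err = np.where(np.isnan(R), 0.0, np.nan_to_num(R) - P @ Q.T)
    grad_Q = -err.T @ P                              # d(loss)/dQ
    grad_Q *= min(1.0, clip / max(np.linalg.norm(grad_Q), 1e-12))
    grad_Q += rng.normal(0.0, sigma * clip, size=grad_Q.shape)
    return Q - lr * grad_Q
```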
- Releasing Graph Neural Networks with Differential Privacy Guarantees [0.81308403220442]
We propose PrivGNN, a privacy-preserving framework for releasing GNN models in a centralized setting.
PrivGNN combines the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to ensure rigorous privacy guarantees.
arXiv Detail & Related papers (2021-09-18T11:35:19Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
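The "main ingredients" named in the entry above, per-example gradient clipping and noise addition, form the standard DP-SGD update; the sketch below is the textbook version, not this paper's experimental code.

```python
import numpy as np

def dpsgd_step(params: np.ndarray, per_example_grads: list,
               clip: float, sigma: float, lr: float, rng=None) -> np.ndarray:
    """Textbook DP-SGD: clip each per-example gradient to L2 norm `clip`,
    sum, add Gaussian noise with std sigma * clip, average, then step."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy = sum(clipped) + rng.normal(0.0, sigma * clip, params.shape)
    return params - lr * noisy / len(per_example_grads)
```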
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.