Stronger Privacy for Federated Collaborative Filtering with Implicit Feedback
- URL: http://arxiv.org/abs/2105.03941v2
- Date: Tue, 11 May 2021 10:05:37 GMT
- Title: Stronger Privacy for Federated Collaborative Filtering with Implicit Feedback
- Authors: Lorenzo Minto, Moritz Haller, Hamed Haddadi, Benjamin Livshits
- Abstract summary: We propose a practical federated recommender system for implicit data under user-level local differential privacy (LDP).
The privacy-utility trade-off is controlled by parameters $\epsilon$ and $k$, regulating the per-update privacy budget and the number of $\epsilon$-LDP gradient updates sent by each user respectively.
We empirically demonstrate the effectiveness of our framework on the MovieLens dataset, achieving a Hit Ratio at 10 (HR@10) of up to 0.68 on 50k users with 5k items.
- Score: 13.37601438005323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems are commonly trained on centrally collected user
interaction data like views or clicks. This practice, however, raises serious
privacy concerns regarding the recommender's collection and handling of
potentially sensitive data. Several privacy-aware recommender systems have been
proposed in recent literature, but comparatively little attention has been
given to systems at the intersection of implicit feedback and privacy. To
address this shortcoming, we propose a practical federated recommender system
for implicit data under user-level local differential privacy (LDP). The
privacy-utility trade-off is controlled by parameters $\epsilon$ and $k$,
regulating the per-update privacy budget and the number of $\epsilon$-LDP
gradient updates sent by each user respectively. To further protect the user's
privacy, we introduce a proxy network to reduce the fingerprinting surface by
anonymizing and shuffling the reports before forwarding them to the
recommender. We empirically demonstrate the effectiveness of our framework on
the MovieLens dataset, achieving a Hit Ratio at 10 (HR@10) of up to 0.68 on 50k
users with 5k items. Even on the full dataset, we show that it is possible to
achieve reasonable utility with HR@10 > 0.5 without compromising user privacy.
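As a concrete illustration of the per-update mechanism, below is a minimal sketch of a single $\epsilon$-LDP gradient report, assuming a sign-based one-bit randomized-response mechanism; the function name `one_bit_ldp_update` and the mechanism choice are illustrative assumptions, not necessarily the exact construction used in the paper.

```python
import numpy as np

def one_bit_ldp_update(grad, epsilon, clip=1.0, rng=None):
    """One epsilon-LDP gradient report (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    d = grad.size
    # Clip so the gradient's norm (and hence sensitivity) is bounded.
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    # Report the sign of one random coordinate via randomized response;
    # releasing a single bit with flip probability 1/(1 + e^eps) is eps-LDP.
    j = rng.integers(d)
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    s = np.sign(g[j]) if rng.random() < p else -np.sign(g[j])
    report = np.zeros(d)
    # Rescale so the report is an unbiased estimate of clip * sign(g).
    report[j] = s * d * clip / (2.0 * p - 1.0)
    return report
```

Under basic sequential composition, the $k$ reports sent by a user cost a total budget of $k\epsilon$; the proxy network described above then anonymizes and shuffles these reports before they reach the recommender.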
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
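As a rough sketch of feature-specific budgets, the snippet below assigns each coordinate its own Laplace budget; the function name `per_feature_ldp` and the choice of the Laplace mechanism are assumptions for illustration, not the BCDP construction itself.

```python
import numpy as np

def per_feature_ldp(x, epsilons, bound=1.0, rng=None):
    """Release a record with a separate budget epsilons[i] per feature."""
    rng = rng or np.random.default_rng()
    # Clip each feature to [-bound, bound]: per-coordinate sensitivity 2*bound.
    x = np.clip(np.asarray(x, dtype=float), -bound, bound)
    # Smaller epsilon means stronger protection for that feature; by
    # composition, releasing all coordinates costs sum(epsilons).
    scale = 2.0 * bound / np.asarray(epsilons, dtype=float)
    return x + rng.laplace(0.0, scale)
```

- Segmented Private Data Aggregation in the Multi-message Shuffle Model [6.436165623346879]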
We pioneer the study of segmented private data aggregation within the multi-message shuffle model of differential privacy.
Our framework introduces flexible privacy protection for users and enhanced utility for the aggregation server.
Our framework achieves a reduction of about 50% in estimation error compared to existing approaches.
arXiv Detail & Related papers (2024-07-29T01:46:44Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
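The sketch below shows the standard route to user-level DP via per-user clipping and Gaussian noise (a generic DP-SGD-style aggregation; the function name and parameters are assumptions, not the paper's algorithm).

```python
import numpy as np

def user_level_dp_mean(per_user_grads, clip=1.0, noise_mult=1.0, rng=None):
    """Aggregate gradients so each user, not each example, is the privacy unit."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g_list in per_user_grads:           # one list of gradient arrays per user
        u = np.mean(g_list, axis=0)         # collapse the user's examples first
        u = u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))  # bound one user's influence
        clipped.append(u)
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the per-user sensitivity `clip`.
    noise = rng.normal(0.0, noise_mult * clip, size=total.shape)
    return (total + noise) / len(per_user_grads)
```

- Hiding Your Awful Online Choices Made More Efficient and Secure: A New Privacy-Aware Recommender System [5.397825778465797]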
This paper presents a novel privacy-aware recommender system that combines privacy-aware machine learning algorithms for practical scalability and efficiency with cryptographic primitives for solid privacy guarantees.
For the first time, our method makes it feasible to compute private recommendations for datasets containing 100 million entries, even on memory-constrained, low-power SoC (System on Chip) devices.
arXiv Detail & Related papers (2024-05-30T21:08:42Z)
- User Consented Federated Recommender System Against Personalized Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
arXiv Detail & Related papers (2023-12-23T09:44:57Z)
- Federated Heterogeneous Graph Neural Network for Privacy-preserving Recommendation [45.39171059168941]
A heterogeneous information network (HIN) is a potent tool for mitigating data sparsity in recommender systems.
In this paper, we propose partitioning the HIN into private HINs stored on the client side and shared HINs on the server.
We formalize the privacy definition for HIN-based federated recommendation (FedRec) in light of differential privacy.
arXiv Detail & Related papers (2023-10-18T05:59:41Z)
- Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach [2.4743508801114444]
Differential privacy has been widely adopted to preserve privacy in recommender systems.
Existing differentially private recommender systems only consider static and independent interactions.
We propose a novel DIfferentially Private Sequential recommendation framework with a noisy Graph Neural Network approach.
arXiv Detail & Related papers (2023-09-17T03:12:33Z)
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
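A minimal sketch of splitting the budget between links and degrees, in the spirit of the description above (names and mechanism details are assumptions): each adjacency bit goes through randomized response, while the degree is released separately with Laplace noise so the server can denoise the topology.

```python
import numpy as np

def link_ldp_report(adj_row, eps_links, eps_degree, rng=None):
    """Report one node's adjacency row and degree under separate budgets."""
    rng = rng or np.random.default_rng()
    adj_row = np.asarray(adj_row)
    # Randomized response on each link bit with budget eps_links per link.
    p = np.exp(eps_links) / (np.exp(eps_links) + 1.0)   # keep probability
    keep = rng.random(adj_row.shape) < p
    noisy_links = np.where(keep, adj_row, 1 - adj_row)
    # Laplace release of the degree (sensitivity 1) with budget eps_degree.
    noisy_degree = adj_row.sum() + rng.laplace(0.0, 1.0 / eps_degree)
    return noisy_links, noisy_degree
```

- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]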
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
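The EVR control flow can be summarized in a few lines; everything here (signatures, callables) is hypothetical scaffolding for the estimate-then-verify-then-release order described above, not the paper's API.

```python
def evr_release(mechanism, data, estimate_eps, verify, fallback=None):
    """Estimate-verify-release: only release once the estimate is validated."""
    eps_hat = estimate_eps(mechanism)   # step 1: estimate the privacy parameter
    if not verify(mechanism, eps_hat):  # step 2: randomized privacy test
        return fallback                 # refuse to release if verification fails
    return mechanism(data)              # step 3: release the query output
```

- Privacy-Preserving Matrix Factorization for Recommendation Systems using Gaussian Mechanism [2.84279467589473]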
We propose a privacy-preserving recommendation system based on the differential privacy framework and matrix factorization.
As differential privacy is a powerful and robust mathematical framework for designing privacy-preserving machine learning algorithms, it is possible to prevent adversaries from extracting sensitive user information.
arXiv Detail & Related papers (2023-04-11T13:50:39Z)
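Below is a minimal sketch of one matrix-factorization step protected by the Gaussian mechanism: clip each gradient, then add Gaussian noise scaled to the clipping bound. The function and its hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dp_mf_step(U, V, batch, lr=0.05, clip=1.0, sigma=1.0, rng=None):
    """One noisy SGD step over observed (user, item, rating) triples."""
    rng = rng or np.random.default_rng()
    for u, i, r in batch:
        err = U[u] @ V[i] - r               # squared-error residual
        for g, M, idx in ((err * V[i], U, u), (err * U[u], V, i)):
            g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # bound sensitivity
            # Gaussian mechanism: noise scale proportional to the clip bound.
            M[idx] -= lr * (g + rng.normal(0.0, sigma * clip, size=g.shape))
    return U, V
```

- FedCL: Federated Contrastive Learning for Privacy-Preserving Recommendation [98.5705258907774]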
FedCL can exploit high-quality negative samples for effective model training with privacy well protected.
We first infer user embeddings from local user data through the local model on each client, and then perturb them with local differential privacy (LDP).
Since individual user embedding contains heavy noise due to LDP, we propose to cluster user embeddings on the server to mitigate the influence of noise.
arXiv Detail & Related papers (2022-04-21T02:37:10Z)
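A sketch of the two FedCL steps named above: clients perturb their embeddings under LDP, and the server clusters the noisy embeddings so centroids average out per-user noise. The Laplace perturbation and the KMeans choice are assumptions; the paper's exact mechanisms may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def perturb_embedding(e, epsilon, bound=1.0, rng=None):
    """Client side: clip the L1 norm, then add Laplace noise (eps-LDP)."""
    rng = rng or np.random.default_rng()
    e = np.asarray(e, dtype=float)
    e = e * min(1.0, bound / (np.abs(e).sum() + 1e-12))
    # Two clipped records differ by at most 2*bound in L1 norm.
    return e + rng.laplace(0.0, 2.0 * bound / epsilon, size=e.shape)

def server_negatives(noisy_embeddings, n_clusters=50):
    """Server side: centroids of noisy embeddings as shareable negatives."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(noisy_embeddings)
    return km.cluster_centers_
```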