PrivMVMF: Privacy-Preserving Multi-View Matrix Factorization for
Recommender Systems
- URL: http://arxiv.org/abs/2210.07775v1
- Date: Thu, 29 Sep 2022 03:21:24 GMT
- Title: PrivMVMF: Privacy-Preserving Multi-View Matrix Factorization for
Recommender Systems
- Authors: Peihua Mai, Yan Pang
- Abstract summary: We propose a new privacy-preserving framework based on homomorphic encryption, Privacy-Preserving Multi-View Matrix Factorization (PrivMVMF).
PrivMVMF is implemented and thoroughly tested on the MovieLens dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With an increasing focus on data privacy, there have been pilot studies on
recommender systems in a federated learning (FL) framework, where multiple
parties collaboratively train a model without sharing their data. Most of these
studies assume that the conventional FL framework can fully protect user
privacy. However, our study shows that matrix factorization in federated
recommender systems carries serious privacy risks. This paper first provides a
rigorous theoretical analysis of the server reconstruction attack in four
scenarios in federated recommender systems, followed by comprehensive
experiments. The empirical results demonstrate that the FL server could infer
users' information with accuracy >80% based on the uploaded gradients from FL
nodes. The robustness analysis shows that our reconstruction attack still
outperforms random guessing by more than 30% under Laplace noise with scale b
no larger than 0.5 in all scenarios. Then, the paper proposes a new privacy-preserving
framework based on homomorphic encryption, Privacy-Preserving Multi-View Matrix
Factorization (PrivMVMF), to enhance user data privacy protection in federated
recommender systems. The proposed PrivMVMF is implemented and thoroughly
evaluated on the MovieLens dataset.
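The reconstruction risk described in the abstract stems from the structure of matrix-factorization gradients: a client's update for an item embedding is nonzero exactly when the user rated that item, so the server learns the interaction set from the uploaded gradients alone. A minimal sketch, assuming a toy 2-D setup (the factor values, names, and ratings are illustrative, not the paper's implementation):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Item factors Q are known to the server; the user factor and ratings are private.
Q = [[0.3, 1.1], [0.8, -0.4], [1.2, 0.5], [-0.6, 0.9]]  # item embeddings
p_u = [0.7, 0.2]                                        # private user factor
ratings = {0: 4.0, 2: 1.0}                              # private: items 0 and 2 rated

# Client computes gradients of the squared error w.r.t. each item embedding.
grads = []
for i, q_i in enumerate(Q):
    if i in ratings:
        err = dot(p_u, q_i) - ratings[i]
        grads.append([2 * err * x for x in p_u])
    else:
        grads.append([0.0, 0.0])        # unrated items contribute no gradient

# Server-side inference: nonzero gradient rows reveal exactly the rated items.
inferred = {i for i, g in enumerate(grads) if any(abs(x) > 1e-12 for x in g)}
assert inferred == set(ratings)
```

The paper's attack goes further (recovering rating values, not just the rated set), but this already illustrates why plain gradient uploads are not private.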
Related papers
- Federated Learning on Riemannian Manifolds with Differential Privacy [8.75592575216789]
A malicious adversary can potentially infer sensitive information through various means.
We propose a generic private FL framework defined on the differential privacy (DP) technique.
We analyze the privacy guarantee while establishing the convergence properties.
Numerical simulations are performed on synthetic and real-world datasets to showcase the efficacy of the proposed PriRFed approach.
arXiv Detail & Related papers (2024-04-15T12:32:20Z)
- Federated Learning Empowered by Generative Content [55.576885852501775]
Federated learning (FL) enables leveraging distributed private data for model training in a privacy-preserving way.
We propose a novel FL framework termed FedGC, designed to mitigate data heterogeneity issues by diversifying private data with generative content.
We conduct a systematic empirical study on FedGC, covering diverse baselines, datasets, scenarios, and modalities.
arXiv Detail & Related papers (2023-12-10T07:38:56Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- Decentralized Matrix Factorization with Heterogeneous Differential Privacy [2.4743508801114444]
We propose a novel Heterogeneous Differentially Private Matrix Factorization algorithm (denoted as HDPMF) for untrusted recommenders.
Our framework uses a modified stretching mechanism with an innovative rescaling scheme to achieve a better trade-off between privacy and accuracy.
arXiv Detail & Related papers (2022-12-01T06:48:18Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z)
- Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with Differential Privacy (DP) was implemented as a practical learning algorithm at a manageable cost in complexity.
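The HE trade-off noted above can be made concrete with an additively homomorphic scheme such as Paillier: the server multiplies ciphertexts to sum client gradients without decrypting any individual update. A toy sketch with deliberately tiny primes (real deployments use keys of 2048 bits or more; all names and parameters here are illustrative assumptions):

```python
import math
import random

p, q = 61, 53                  # toy primes for demonstration only
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid because we fix the generator g = n + 1

rng = random.Random(0)

def encrypt(m):
    # Paillier encryption: c = (1 + n)^m * r^n mod n^2, with random unit r.
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n.
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Three clients upload encrypted (quantized) gradient shares.
grads = [7, 12, 5]
cts = [encrypt(g) for g in grads]

# The server aggregates by multiplying ciphertexts: product <-> plaintext sum.
agg = 1
for c in cts:
    agg = (agg * c) % n2
assert decrypt(agg) == sum(grads)
```

The cost driver is the modular exponentiations over n^2, which is why HE-based federated learning pays the heavy computational and communication overheads mentioned above.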
arXiv Detail & Related papers (2021-09-27T12:48:40Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
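Client-level DP-FedAvg of the kind analyzed above combines two steps: clip each client update to a fixed L2 norm, then add Gaussian noise calibrated to that norm. A minimal sketch (the function name and parameter choices are illustrative assumptions):

```python
import math
import random

def clip_and_noise(update, clip_norm=1.0, noise_mult=0.5, rng=None):
    """Clip an update vector to L2 norm `clip_norm`, then add Gaussian noise
    with std noise_mult * clip_norm (client-level DP-FedAvg style)."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]     # clipping bounds each client's influence
    sigma = noise_mult * clip_norm            # noise calibrated to the clip bound
    return [x + rng.gauss(0.0, sigma) for x in clipped]

# A client update of norm 5 is scaled down to norm 1 before noise is added.
noisy = clip_and_noise([3.0, 4.0])
```

Clipping bounds any single client's influence on the aggregate, which is what makes the Gaussian noise sufficient for a client-level guarantee; the clipping bias discussed above is the price paid when true updates exceed the clip norm.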
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- A Federated Multi-View Deep Learning Framework for Privacy-Preserving Recommendations [25.484225182093947]
Privacy-preserving recommendations are gaining momentum due to concerns over user privacy and data security.
FedRec algorithms have been proposed to realize personalized privacy-preserving recommendations.
This paper presents FLMV-DSSM, a generic content-based federated multi-view recommendation framework.
arXiv Detail & Related papers (2020-08-25T04:19:40Z)
- Privacy Threats Against Federated Matrix Factorization [14.876668437269817]
We study the privacy threats of the matrix factorization method in the federated learning framework.
This is the first study of privacy threats of the matrix factorization method in the federated learning framework.
arXiv Detail & Related papers (2020-07-03T09:58:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.