FedPDD: A Privacy-preserving Double Distillation Framework for
Cross-silo Federated Recommendation
- URL: http://arxiv.org/abs/2305.06272v2
- Date: Tue, 30 Jan 2024 16:32:48 GMT
- Title: FedPDD: A Privacy-preserving Double Distillation Framework for
Cross-silo Federated Recommendation
- Authors: Sheng Wan, Dashan Gao, Hanlin Gu, Daning Hu
- Abstract summary: Cross-platform recommendation aims to improve recommendation accuracy by gathering heterogeneous features from different platforms.
Such cross-silo collaborations between platforms are restricted by increasingly stringent privacy protection regulations.
We propose a novel privacy-preserving double distillation framework named FedPDD for cross-silo federated recommendation.
- Score: 4.467445574103374
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Cross-platform recommendation aims to improve recommendation accuracy by
gathering heterogeneous features from different platforms. However, such
cross-silo collaborations between platforms are restricted by increasingly
stringent privacy protection regulations, and thus data cannot be aggregated for
training. Federated learning (FL) is a practical solution to deal with the data
silo problem in recommendation scenarios. Existing cross-silo FL methods
transmit model information to collaboratively build a global model by
leveraging the data of overlapped users. However, in reality, the number of
overlapped users is often very small, thus largely limiting the performance of
such approaches. Moreover, transmitting model information during training
requires high communication costs and may cause serious privacy leakage. In
this paper, we propose a novel privacy-preserving double distillation framework
named FedPDD for cross-silo federated recommendation, which efficiently
transfers knowledge when overlapped users are limited. Specifically, our double
distillation strategy enables local models to learn not only explicit knowledge
from the other party but also implicit knowledge from their own past predictions.
Moreover, to ensure privacy and high efficiency, we employ an offline training
scheme to reduce communication needs and privacy leakage risk. In addition, we
adopt differential privacy to further protect the transmitted information. The
experiments on two real-world recommendation datasets, HetRec-MovieLens and
Criteo, demonstrate the effectiveness of FedPDD compared to the
state-of-the-art approaches.
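To make the double distillation objective concrete, below is a minimal PyTorch-style sketch, assuming a two-class click-prediction setting: a supervised loss on the party's own labels, an explicit distillation term against the other party's noise-protected predictions, and an implicit self-distillation term against the model's own past predictions. The temperature, loss weights, and noise scale are illustrative placeholders, not the authors' settings.

```python
# Minimal sketch of a double-distillation objective in the spirit of FedPDD.
# Not the authors' code: loss weights, temperature, and noise scale are assumed.
import torch
import torch.nn.functional as F

def dp_noise(logits, sigma=0.5):
    """Gaussian noise on transmitted predictions (differential-privacy protection)."""
    return logits + sigma * torch.randn_like(logits)

def double_distillation_loss(student_logits, labels,
                             other_party_logits,  # explicit knowledge, received offline
                             past_self_logits,    # implicit knowledge: own past predictions
                             T=2.0, alpha=0.5, beta=0.3):
    # Supervised loss on this party's own labels (e.g. click / no-click).
    task = F.cross_entropy(student_logits, labels)

    # Explicit distillation: match the other party's noise-protected soft predictions.
    explicit = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                        F.softmax(dp_noise(other_party_logits) / T, dim=-1),
                        reduction="batchmean") * T * T

    # Implicit distillation: match the model's own earlier predictions.
    implicit = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                        F.softmax(past_self_logits / T, dim=-1),
                        reduction="batchmean") * T * T

    return task + alpha * explicit + beta * implicit
```

In this sketch, other_party_logits would be the predictions exchanged offline for the overlapped users, so no model parameters or raw features ever leave a platform during training.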
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems need to address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z) - A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs [57.35402286842029]
We propose a novel Aligned Federated Primal-Dual (A-FedPD) method, which constructs virtual dual updates to align the global consensus with local dual variables.
We provide a comprehensive analysis of the A-FedPD method's efficiency, especially for local clients that do not participate for protracted periods.
arXiv Detail & Related papers (2024-09-27T17:00:32Z) - Privacy-Preserving Federated Unlearning with Certified Client Removal [18.36632825624581]
State-of-the-art methods for unlearning use historical data from FL clients, such as gradients or locally trained models.
We propose Starfish, a privacy-preserving federated unlearning scheme using Two-Party Computation (2PC) techniques and shared historical client data between two non-colluding servers.
arXiv Detail & Related papers (2024-04-15T12:27:07Z) - Defending Against Data Reconstruction Attacks in Federated Learning: An
Information Theory Approach [21.03960608358235]
Federated Learning (FL) trains a black-box and high-dimensional model among different clients by exchanging parameters instead of direct data sharing.
FL still suffers from membership inference attacks (MIA) and data reconstruction attacks (DRA).
arXiv Detail & Related papers (2024-03-02T17:12:32Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially
Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Federated Learning with Privacy-Preserving Ensemble Attention
Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
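As a rough sketch of one-way offline knowledge distillation from unlabeled public data (omitting the attention-based ensemble weighting of the cited paper), the snippet below assumes each local model scores a shared public set once and the averaged soft predictions are then distilled into a central model; the function names and the plain averaging rule are illustrative assumptions.

```python
# Generic sketch of one-way offline ensemble distillation on unlabeled public data.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_soft_labels(local_models, public_loader):
    """Each client scores the shared unlabeled public set once; only logits are shared."""
    soft_labels = []
    for x in public_loader:
        logits = torch.stack([m(x) for m in local_models])  # (num_clients, batch, classes)
        soft_labels.append(logits.mean(dim=0))               # plain average as the ensemble
    return soft_labels

def distill_central_model(central_model, public_loader, soft_labels, optimizer, T=2.0):
    """One-way offline distillation: the central model fits the ensemble's soft labels."""
    for x, teacher_logits in zip(public_loader, soft_labels):
        loss = F.kl_div(F.log_softmax(central_model(x) / T, dim=-1),
                        F.softmax(teacher_logits / T, dim=-1),
                        reduction="batchmean") * T * T
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```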
arXiv Detail & Related papers (2022-10-16T06:44:46Z) - Federated Learning with Sparsification-Amplified Privacy and Adaptive
Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
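A minimal sketch of that combination, assuming per-coordinate random sparsification and Gaussian gradient perturbation with placeholder clipping and noise parameters (the paper's exact mechanism and privacy accounting are not reproduced here):

```python
# Illustrative agent-side update transformation: sparsify, then perturb.
import torch

def sparsify_and_perturb(grad, keep_prob=0.1, clip=1.0, sigma=1.0):
    """Random sparsification plus Gaussian perturbation of one agent's gradient."""
    # Clip to bound the sensitivity of a single agent's update.
    grad = grad * min(1.0, clip / (grad.norm().item() + 1e-12))
    # Keep each coordinate independently with probability keep_prob.
    mask = (torch.rand_like(grad) < keep_prob).float()
    # Add Gaussian noise before transmission; dropped coordinates stay zero.
    noise = sigma * clip * torch.randn_like(grad)
    return (grad + noise) * mask
```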
arXiv Detail & Related papers (2020-08-01T20:22:57Z) - SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z) - Federating Recommendations Using Differentially Private Prototypes [16.29544153550663]
We propose a new federated approach to learning global and local private models for recommendation without collecting raw data.
By requiring only two rounds of communication, we both reduce the communication costs and avoid the excessive privacy loss.
We show that local adaptation of the global model allows our method to outperform centralized matrix-factorization-based recommender system models.
arXiv Detail & Related papers (2020-03-01T22:21:31Z)
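As a rough illustration of sharing differentially private prototypes instead of raw data, the sketch below assumes each client summarizes its local embeddings with k-means and releases only noise-protected cluster centers; the clustering choice and noise scale are assumptions, not the authors' exact mechanism.

```python
# Hypothetical client-side prototype release with Gaussian noise.
import numpy as np
from sklearn.cluster import KMeans

def private_prototypes(local_embeddings, k=16, clip=1.0, sigma=0.5, seed=0):
    """Summarize a client's local embeddings as k noisy prototypes before sharing."""
    # Clip each embedding's norm so a single user has bounded influence.
    norms = np.linalg.norm(local_embeddings, axis=1, keepdims=True)
    clipped = local_embeddings * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Local clustering stays on the client; only the centers will leave it.
    centers = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(clipped).cluster_centers_
    # Release noisy prototypes only (a single client-to-server round).
    rng = np.random.default_rng(seed)
    return centers + sigma * clip * rng.normal(size=centers.shape)
```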
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.