Social-Aware Clustered Federated Learning with Customized Privacy Preservation
- URL: http://arxiv.org/abs/2212.13992v3
- Date: Tue, 19 Mar 2024 14:51:32 GMT
- Title: Social-Aware Clustered Federated Learning with Customized Privacy Preservation
- Authors: Yuntao Wang, Zhou Su, Yanghe Pan, Tom H. Luan, Ruidong Li, Shui Yu
- Abstract summary: We propose a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster.
By mixing model updates within a social group, adversaries can only eavesdrop on the combined social-layer results, not the private updates of individuals.
Experiments on the Facebook network and MNIST/CIFAR-10 datasets validate that our SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.
- Score: 38.00035804720786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key feature of federated learning (FL) is that it preserves the data privacy of end users. However, potential privacy leakage still exists in exchanging gradients under FL. As a result, recent research often explores differential privacy (DP) approaches that add noise to computed results to address privacy concerns with low overhead, which however degrades model performance. In this paper, we strike a balance between data privacy and efficiency by utilizing the pervasive social connections between users. Specifically, we propose SCFL, a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster and aggregate their raw model updates (e.g., gradients) inside each cluster before uploading them to the cloud for global aggregation. By mixing model updates within a social group, adversaries can only eavesdrop on the combined social-layer results, not the privacy of individuals. We unfold the design of SCFL in three steps. i) Stable social cluster formation. Considering users' heterogeneous training samples and data distributions, we formulate the optimal social cluster formation problem as a federation game and devise a fair revenue allocation mechanism to resist free-riders. ii) Differentiated trust-privacy mapping. For clusters with low mutual trust, we design a customizable privacy preservation mechanism that adaptively sanitizes participants' model updates depending on social trust degrees. iii) Distributed convergence. A distributed two-sided matching algorithm is devised to attain an optimized disjoint partition with Nash-stable convergence. Experiments on the Facebook network and MNIST/CIFAR-10 datasets validate that our SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.
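To make the cluster-level mixing and the trust-privacy mapping concrete, below is a minimal Python sketch. It assumes gradients are flat NumPy vectors; the function names, the linear trust-to-noise mapping, and the clipping/noise constants are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def sanitize_update(grad, trust, clip_norm=1.0, sigma_max=1.0):
    """Clip one member's update, then add Gaussian noise whose scale
    shrinks as the mutual trust inside the cluster grows
    (trust in [0, 1]; trust = 1 means no added noise).
    The linear trust-to-noise mapping is a hypothetical choice."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    sigma = sigma_max * (1.0 - trust)  # hypothetical mapping
    return clipped + np.random.normal(0.0, sigma, size=grad.shape)

def cluster_aggregate(updates, trust):
    """Mix the (possibly sanitized) raw updates of one social cluster
    before anything leaves the cluster, so an eavesdropper only
    observes the combined result, not individual updates."""
    mixed = [sanitize_update(g, trust) for g in updates]
    return np.mean(mixed, axis=0)

def global_aggregate(cluster_updates, cluster_sizes):
    """Cloud-side weighted averaging over cluster-level results."""
    weights = np.asarray(cluster_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, cluster_updates))

# Toy usage: two clusters with different internal trust levels.
rng = np.random.default_rng(0)
high_trust_updates = [rng.normal(size=8) for _ in range(3)]
low_trust_updates = [rng.normal(size=8) for _ in range(2)]
g1 = cluster_aggregate(high_trust_updates, trust=0.9)
g2 = cluster_aggregate(low_trust_updates, trust=0.3)
global_update = global_aggregate([g1, g2], cluster_sizes=[3, 2])
```

Under this sketch, a high-trust cluster adds little noise, so its mixed update stays close to the true average, while a low-trust cluster is sanitized more aggressively, mirroring the differentiated trust-privacy mapping described above.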
Related papers
- Camel: Communication-Efficient and Maliciously Secure Federated Learning in the Shuffle Model of Differential Privacy [9.100955087185811]
Federated learning (FL) has rapidly become a compelling paradigm that enables multiple clients to jointly train a model by sharing only gradient updates for aggregation.
To protect the gradient updates, which can themselves be privacy-sensitive, a line of work has studied local differential privacy mechanisms.
We present Camel, a new communication-efficient and maliciously secure FL framework in the shuffle model of DP.
arXiv Detail & Related papers (2024-10-04T13:13:44Z)
- Privacy-preserving gradient-based fair federated learning [0.0]
Federated learning (FL) schemes allow multiple participants to collaboratively train neural networks without the need to share the underlying data.
In our paper, we build upon seminal works and present a novel, fair and privacy-preserving FL scheme.
arXiv Detail & Related papers (2024-07-18T19:56:39Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- How Much Privacy Does Federated Learning with Secure Aggregation Guarantee? [22.7443077369789]
Federated learning (FL) has attracted growing interest for enabling privacy-preserving machine learning on data stored at multiple users.
While data never leaves users' devices, privacy still cannot be guaranteed since significant computations on users' training data are shared in the form of trained local models.
Secure Aggregation (SA) has been developed as a framework to preserve privacy in FL.
arXiv Detail & Related papers (2022-08-03T18:44:17Z)
- Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a coded federated learning framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue.
We characterize the privacy guarantee via mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z)
- Federated Social Recommendation with Graph Neural Network [69.36135187771929]
We propose fusing social information with user-item interactions, i.e., the social recommendation problem, to alleviate data sparsity.
We devise a novel framework, Federated Social recommendation with Graph neural network (FeSoG).
arXiv Detail & Related papers (2021-11-21T09:41:39Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Compression Boosts Differentially Private Federated Learning [0.7742297876120562]
Federated learning allows distributed entities to train a common model collaboratively without sharing their own data.
It remains vulnerable to various inference and reconstruction attacks where a malicious entity can learn private information about the participants' training data from the captured gradients.
We show experimentally, using two datasets, that our privacy-preserving proposal can reduce communication costs by up to 95% with only a negligible performance penalty compared to traditional non-private federated learning schemes.
arXiv Detail & Related papers (2020-11-10T13:11:03Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)