On Privacy and Personalization in Cross-Silo Federated Learning
- URL: http://arxiv.org/abs/2206.07902v1
- Date: Thu, 16 Jun 2022 03:26:48 GMT
- Title: On Privacy and Personalization in Cross-Silo Federated Learning
- Authors: Ziyu Liu, Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
- Abstract summary: In this work, we consider the application of differential privacy (DP) in cross-silo federated learning (FL).
We show that mean-regularized multi-task learning (MR-MTL) is a strong baseline for cross-silo FL.
We provide a thorough empirical study of competing methods as well as a theoretical characterization of MR-MTL for a mean estimation problem.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While the application of differential privacy (DP) has been well-studied in
cross-device federated learning (FL), there is a lack of work considering DP
for cross-silo FL, a setting characterized by a limited number of clients each
containing many data subjects. In cross-silo FL, usual notions of client-level
privacy are less suitable as real-world privacy regulations typically concern
in-silo data subjects rather than the silos themselves. In this work, we
instead consider the more realistic notion of silo-specific item-level privacy,
where silos set their own privacy targets for their local examples. Under this
setting, we reconsider the roles of personalization in federated learning. In
particular, we show that mean-regularized multi-task learning (MR-MTL), a
simple personalization framework, is a strong baseline for cross-silo FL: under
stronger privacy, silos are further incentivized to "federate" with each other
to mitigate DP noise, resulting in consistent improvements relative to standard
baseline methods. We provide a thorough empirical study of competing methods as
well as a theoretical characterization of MR-MTL for a mean estimation problem,
highlighting the interplay between privacy and cross-silo data heterogeneity.
Our work serves to establish baselines for private cross-silo FL as well as
identify key directions of future work in this area.
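To make the MR-MTL objective concrete: each silo $k$ keeps its own model $w_k$, trained on its local loss plus a mean-regularization penalty $\frac{\lambda}{2}\|w_k - \bar{w}\|_2^2$, where $\bar{w}$ is the average of all silos' models; $\lambda = 0$ recovers purely local training, while large $\lambda$ approaches FedAvg. The Python sketch below illustrates one such round combined with per-example clipping and Gaussian noise in the style of DP-SGD for silo-specific item-level privacy. This is a minimal sketch, not the paper's code: the names (`mr_mtl_round`, `per_example_grads`) are illustrative, and the noise calibration is schematic (a real deployment would use a privacy accountant).

```python
import numpy as np

def mr_mtl_round(models, per_example_grads, lam, lr, clip, sigma, rng):
    """One MR-MTL round with DP-SGD-style item-level noise (illustrative).

    models: list of per-silo parameter vectors, each np.ndarray of shape (d,).
    per_example_grads: callable (silo_index, w) -> array of shape (n_k, d)
        with each local example's loss gradient at w (hypothetical helper).
    lam: pull toward the mean model (lam=0 is purely local training;
        large lam approaches FedAvg).
    """
    w_bar = np.mean(models, axis=0)  # mean model shared across silos
    updated = []
    for k, w in enumerate(models):
        g = per_example_grads(k, w)                       # (n_k, d)
        norms = np.linalg.norm(g, axis=1, keepdims=True)  # per-example norms
        g = g * np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # clip
        g_avg = g.mean(axis=0)
        # Gaussian noise scaled to the clipped per-example sensitivity.
        g_avg += rng.normal(0.0, sigma * clip / g.shape[0], size=g_avg.shape)
        # Mean regularization: gradient of (lam/2) * ||w - w_bar||^2.
        g_avg += lam * (w - w_bar)
        updated.append(w - lr * g_avg)
    return updated
```

The sketch makes the lever behind the paper's observation explicit: stronger privacy means more injected noise per silo, which raises the value of pulling toward $\bar{w}$, i.e., a larger optimal $\lambda$.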
Related papers
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective (arXiv, 2024-08-28)
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy. Differential privacy (DP) is a classical approach for capturing and ensuring the reliability of privacy protections.
- Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning (arXiv, 2024-07-26)
Federated learning (FL) has emerged as a prominent method in machine learning, emphasizing privacy preservation by allowing multiple clients to collaboratively build a model while keeping their training data private. Despite this focus on privacy, FL models are susceptible to various attacks, including membership inference attacks (MIAs).
- PPFL: A Personalized Federated Learning Framework for Heterogeneous Population (arXiv, 2023-10-22)
We develop a flexible and interpretable personalized framework within the paradigm of federated learning, called PPFL. By leveraging canonical models, it captures the heterogeneity of the population and represents each client's preferences over these models with a membership vector. We conduct experiments on both pathological and practical datasets, and the results validate the effectiveness of PPFL.
- Advancing Personalized Federated Learning: Group Privacy, Fairness, and Beyond (arXiv, 2023-09-01)
Federated learning (FL) is a framework for training machine learning models in a distributed and collaborative manner. In this paper, we address the triadic interaction among personalization, privacy guarantees, and fairness attained by models trained within the FL framework. A method is put forth that introduces group privacy guarantees through the use of $d$-privacy.
- ULDP-FL: Federated Learning with Across Silo User-Level Differential Privacy (arXiv, 2023-08-23)
Differentially Private Federated Learning (DP-FL) has garnered attention as a collaborative machine learning approach that ensures formal privacy. We present Uldp-FL, a novel FL framework designed to guarantee user-level DP in cross-silo FL, where a single user's data may belong to multiple silos.
- Privacy Preserving Bayesian Federated Learning in Heterogeneous Settings (arXiv, 2023-06-13)
This paper presents a unified federated learning framework based on customized local Bayesian models that learn well even in the absence of large local datasets. We use priors in the functional (output) space of the networks to facilitate collaboration across heterogeneous clients. Experiments on standard FL datasets demonstrate that our approach outperforms strong baselines in both homogeneous and heterogeneous settings.
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations (arXiv, 2023-02-02)
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning. A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy cost as typical gradient-sharing schemes while converging faster.
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning (arXiv, 2021-11-28)
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
- Differentially private federated deep learning for multi-site medical image segmentation (arXiv, 2021-07-06)
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer. Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models. However, FL is not a fully privacy-preserving technique, and privacy-centred attacks can disclose confidential patient data.
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy (arXiv, 2021-06-25)
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity. We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates; a minimal sketch of this clipping-plus-noise mechanism follows the list.
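Since the last entry concerns clipping in DP FedAvg with client-level privacy, the following sketch shows that aggregation step: each client's model update is clipped to a fixed norm before averaging, and Gaussian noise is added to the average. The function name and the noise calibration are illustrative assumptions, not the analyzed algorithm's exact form.

```python
import numpy as np

def dp_fedavg_aggregate(updates, clip, sigma, rng):
    """FedAvg-style aggregation with update clipping and Gaussian noise
    (client-level DP; calibration is illustrative, not a privacy proof)."""
    clipped = [u * min(1.0, clip / max(np.linalg.norm(u), 1e-12))
               for u in updates]
    avg = np.mean(clipped, axis=0)
    # Noise std scales as sigma * clip / (number of clients) on the average.
    return avg + rng.normal(0.0, sigma * clip / len(updates), size=avg.shape)

# Example: aggregate five synthetic client updates.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(5)]
noisy_avg = dp_fedavg_aggregate(updates, clip=1.0, sigma=0.8, rng=rng)
```

The clipping bound caps each client's influence on the average (its sensitivity), which is what lets the added Gaussian noise confer a client-level DP guarantee; the bias this clipping introduces is exactly what the paper above analyzes.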