Advancing Personalized Federated Learning: Group Privacy, Fairness, and
Beyond
- URL: http://arxiv.org/abs/2309.00416v1
- Date: Fri, 1 Sep 2023 12:20:19 GMT
- Authors: Filippo Galli, Kangsoo Jung, Sayan Biswas, Catuscia Palamidessi,
Tommaso Cucinotta
- Abstract summary: Federated learning (FL) is a framework for training machine learning models in a distributed and collaborative manner.
In this paper, we address the triadic interaction among personalization, privacy guarantees, and fairness attained by models trained within the FL framework.
We put forth a method that provides group privacy guarantees through $d$-privacy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a framework for training machine learning models
in a distributed and collaborative manner. During training, a set of
participating clients process their data stored locally, sharing only the model
updates obtained by minimizing a cost function over their local inputs. FL was
proposed as a stepping-stone towards privacy-preserving machine learning, but
it has been shown vulnerable to issues such as leakage of private information,
lack of personalization of the model, and the possibility of having a trained
model that is fairer to some groups than to others. In this paper, we address
the triadic interaction among personalization, privacy guarantees, and fairness
attained by models trained within the FL framework. Differential privacy and
its variants have been studied and applied as cutting-edge standards for
providing formal privacy guarantees. However, clients in FL often hold very
diverse datasets representing heterogeneous communities, making it important to
protect their sensitive information while still ensuring that the trained model
remains fair to all user groups. To this end, we put forth a method that
provides group privacy guarantees through $d$-privacy (also known as metric
privacy). $d$-privacy is a
localized form of differential privacy that relies on a metric-oriented
obfuscation approach to maintain the original data's topological distribution.
Besides enabling personalized model training in a federated setting and
providing formal privacy guarantees, this method achieves significantly better
group fairness, measured under a variety of standard metrics, than a global
model trained within a classical FL template. We provide theoretical
justification for the method's applicability, along with experimental
validation on real-world datasets to illustrate how it works.
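The obfuscation step described above can be sketched in code. This is a minimal illustration under stated assumptions, not the paper's implementation: it adds independent coordinate-wise Laplace noise, which satisfies $\epsilon$-$d$-privacy with respect to the Manhattan ($L_1$) distance between update vectors, and the function names (`d_private_update`, `aggregate`) are hypothetical.

```python
import numpy as np

def d_private_update(update, epsilon, seed=None):
    """Obfuscate a client's model update with d-privacy-style noise.

    Sketch only: independent Laplace(1/epsilon) noise on each coordinate
    yields epsilon-d-privacy w.r.t. the L1 metric on update vectors, so
    nearby updates remain statistically indistinguishable while the
    overall geometry of the updates is roughly preserved.
    """
    rng = np.random.default_rng(seed)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=update.shape)
    return update + noise

def aggregate(obfuscated_updates):
    """FedAvg-style server step: average the already-noised client updates."""
    return np.mean(obfuscated_updates, axis=0)
```

A side effect worth noting: averaging K independently noised updates shrinks the variance of the aggregate noise by a factor of K, which is one reason server-side utility can remain acceptable despite per-client obfuscation.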
Related papers
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
However, FedIT faces limitations such as the scarcity of instruction data and the risk of training-data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Fair Differentially Private Federated Learning Framework [0.0]
Federated learning (FL) is a distributed machine learning strategy that enables participants to collaborate and train a shared model without sharing their individual datasets.
Privacy and fairness are crucial considerations in FL.
This paper presents a framework that addresses the challenges of generating a fair global model without validation data and creating a global differentially private model.
arXiv Detail & Related papers (2023-05-23T09:58:48Z)
- Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to private data distribution.
arXiv Detail & Related papers (2023-05-20T07:55:58Z)
- Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher [52.2926020848095]
Federated learning is vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
This paper proposes a selective knowledge sharing mechanism for FD, termed Selective-FD.
arXiv Detail & Related papers (2023-04-04T12:04:19Z)
- Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
arXiv Detail & Related papers (2022-06-07T15:43:45Z)
- Personalized PATE: Differential Privacy for Machine Learning with Individual Privacy Guarantees [1.2691047660244335]
We propose three novel methods to support training an ML model with different personalized privacy guarantees within the training data.
Our experiments show that our personalized privacy methods yield higher accuracy models than the non-personalized baseline.
arXiv Detail & Related papers (2022-02-21T20:16:27Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- MACE: A Flexible Framework for Membership Privacy Estimation in Generative Models [14.290199072565162]
We propose the first formal framework for membership privacy estimation in generative models.
Compared to previous works, our framework makes more realistic and flexible assumptions.
arXiv Detail & Related papers (2020-09-11T23:15:05Z)
- Federating Recommendations Using Differentially Private Prototypes [16.29544153550663]
We propose a new federated approach to learning global and local private models for recommendation without collecting raw data.
By requiring only two rounds of communication, we both reduce communication costs and avoid excessive privacy loss.
We show local adaptation of the global model allows our method to outperform centralized matrix-factorization-based recommender system models.
arXiv Detail & Related papers (2020-03-01T22:21:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.