FedEmbed: Personalized Private Federated Learning
- URL: http://arxiv.org/abs/2202.09472v1
- Date: Fri, 18 Feb 2022 23:35:06 GMT
- Title: FedEmbed: Personalized Private Federated Learning
- Authors: Andrew Silva, Katherine Metcalf, Nicholas Apostoloff, Barry-John Theobald
- Abstract summary: We present FedEmbed, a new approach to private federated learning for personalizing a global model.
We show that FedEmbed achieves up to 45% improvement over baseline approaches to personalized private federated learning.
- Score: 13.356624498247069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables the deployment of machine learning to problems for
which centralized data collection is impractical. Adding differential privacy
guarantees bounds on privacy while data are contributed to a global model.
Adding personalization to federated learning introduces new challenges as we
must account for preferences of individual users, where a data sample could
have conflicting labels because one sub-population of users might view an input
positively, but other sub-populations view the same input negatively. We
present FedEmbed, a new approach to private federated learning for
personalizing a global model that uses (1) sub-populations of similar users,
and (2) personal embeddings. We demonstrate that current approaches to
federated learning are inadequate for handling data with conflicting labels,
and we show that FedEmbed achieves up to 45% improvement over baseline
approaches to personalized private federated learning.
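The personal-embedding mechanism described in the abstract lends itself to a short illustration. The sketch below assumes a hypothetical architecture in which each user's learned embedding is concatenated with the input features of a shared global model; the class name, dimensions, and layers are illustrative, not taken from the paper. Conditioning on the user embedding is what lets the same input legitimately receive different labels from different sub-populations.

```python
# Minimal sketch of a global model conditioned on per-user embeddings.
# All names and dimensions are hypothetical, not FedEmbed's actual design.
import torch
import torch.nn as nn

class GlobalModelWithPersonalEmbedding(nn.Module):
    def __init__(self, feature_dim=32, embed_dim=8, num_users=100):
        super().__init__()
        # One small embedding per user; in federated training this part
        # would stay on the client rather than being aggregated.
        self.user_embeddings = nn.Embedding(num_users, embed_dim)
        # Shared trunk, aggregated across clients as in standard FedAvg.
        self.trunk = nn.Sequential(
            nn.Linear(feature_dim + embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, features, user_ids):
        personal = self.user_embeddings(user_ids)
        return self.trunk(torch.cat([features, personal], dim=-1))

model = GlobalModelWithPersonalEmbedding()
x = torch.randn(4, 32)            # a batch of inputs
uid = torch.tensor([0, 1, 1, 2])  # which user produced each sample
scores = model(x, uid)            # identical inputs can score differently per user
```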
Related papers
- Towards Split Learning-based Privacy-Preserving Record Linkage [49.1574468325115]
Split Learning has been introduced to facilitate applications where user data privacy is a requirement.
In this paper, we investigate the potential of Split Learning for Privacy-Preserving Record Matching.
arXiv Detail & Related papers (2024-09-02T09:17:05Z)
- Addressing Skewed Heterogeneity via Federated Prototype Rectification with Personalization [35.48757125452761]
Federated learning is an efficient framework designed to facilitate collaborative model training across multiple distributed devices.
A significant challenge of federated learning is data-level heterogeneity, i.e., skewed or long-tailed distribution of private data.
We propose a novel Federated Prototype Rectification with Personalization framework, which consists of two parts: Federated Personalization and Federated Prototype Rectification.
arXiv Detail & Related papers (2024-08-15T06:26:46Z)
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
arXiv Detail & Related papers (2024-06-24T12:16:51Z)
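A short, hypothetical sketch of the feature-uploading idea from the pFedPM entry above: clients upload per-class mean features (prototypes) instead of gradients, and the server averages them. The exact pFedPM protocol may differ; all names below are illustrative.

```python
# Sketch: clients share per-class feature prototypes, never gradients.
import torch

def client_prototypes(features, labels, num_classes):
    """Return a (num_classes, dim) tensor of per-class mean features."""
    protos = torch.zeros(num_classes, features.shape[1])
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def server_aggregate(all_protos):
    """Average the prototypes uploaded by all clients."""
    return torch.stack(all_protos).mean(dim=0)

# Two clients with local (feature, label) data; the local encoders that
# produced the features may differ, since only the feature space must match.
c1 = client_prototypes(torch.randn(50, 16), torch.randint(0, 3, (50,)), 3)
c2 = client_prototypes(torch.randn(80, 16), torch.randint(0, 3, (80,)), 3)
global_protos = server_aggregate([c1, c2])  # what the server redistributes
```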
- Federated Learning Empowered by Generative Content [55.576885852501775]
Federated learning (FL) enables leveraging distributed private data for model training in a privacy-preserving way.
We propose a novel FL framework termed FedGC, designed to mitigate data heterogeneity issues by diversifying private data with generative content.
We conduct a systematic empirical study on FedGC, covering diverse baselines, datasets, scenarios, and modalities.
arXiv Detail & Related papers (2023-12-10T07:38:56Z)
- Little is Enough: Improving Privacy by Sharing Labels in Federated Semi-Supervised Learning [10.972006295280636]
In many critical applications, sensitive data is inherently distributed and cannot be centralized due to privacy concerns.
Most federated learning approaches share local model parameters, soft predictions on a public dataset, or a combination of both.
This, however, still discloses private information and restricts local models to those that lend themselves to training via gradient-based methods.
We propose to share only hard labels on a public unlabeled dataset, and to use a consensus over the shared labels as pseudo-labels for client training.
arXiv Detail & Related papers (2023-10-09T13:16:10Z)
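The hard-label scheme in the entry above admits a compact illustration. The sketch below assumes, as one plausible reading, that the consensus is a majority vote over the integer labels each client assigns to the shared public dataset.

```python
# Sketch: majority-vote consensus over client-provided hard labels.
import numpy as np

def consensus_pseudo_labels(client_hard_labels):
    """client_hard_labels: (num_clients, num_public_samples) integer array.
    Returns the majority-vote label for each public sample."""
    labels = np.asarray(client_hard_labels)
    num_classes = labels.max() + 1
    return np.array([
        np.bincount(labels[:, j], minlength=num_classes).argmax()
        for j in range(labels.shape[1])
    ])

# Three clients label five public samples; only these integers are shared,
# never raw data, model parameters, or soft predictions.
shared = [[0, 1, 2, 1, 0],
          [0, 1, 1, 1, 0],
          [2, 1, 2, 1, 1]]
pseudo = consensus_pseudo_labels(shared)  # -> array([0, 1, 2, 1, 0])
```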
- Incentivising the federation: gradient-based metrics for data selection and valuation in private decentralised training [15.233103072063951]
We investigate how to leverage gradient information to permit the participants of private training settings to select the data most beneficial for the jointly trained model.
We show that these techniques can provide the federated clients with tools for principled data selection even in stricter privacy settings.
arXiv Detail & Related papers (2023-05-04T15:44:56Z)
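One plausible instantiation of the gradient-based data selection from the entry above, sketched under stated assumptions: score each local sample by the cosine similarity between its per-sample gradient and an aggregate reference gradient, then keep the best-aligned samples. The paper's actual metrics and privacy mechanisms are not reproduced here.

```python
# Sketch: value each sample by how well its gradient aligns with the mean.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()

def sample_gradient(x, y):
    """Flattened gradient of the loss for a single example."""
    model.zero_grad()
    loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

xs, ys = torch.randn(20, 10), torch.randint(0, 2, (20,))
grads = [sample_gradient(x, y) for x, y in zip(xs, ys)]
reference = torch.stack(grads).mean(dim=0)

scores = torch.stack([
    torch.cosine_similarity(g, reference, dim=0) for g in grads
])
selected = scores.topk(10).indices  # keep the most aligned half
```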
- FedABC: Targeting Fair Competition in Personalized Federated Learning [76.9646903596757]
Federated learning aims to collaboratively train models without accessing clients' local private data.
We propose a novel and generic PFL framework termed Federated Averaging via Binary Classification, dubbed FedABC.
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate the unfair competition between classes.
arXiv Detail & Related papers (2023-02-15T03:42:59Z)
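The one-vs-all strategy named in the FedABC entry above is standard enough to sketch: each class gets an independent binary problem trained with a binary cross-entropy loss, so classes do not compete inside one softmax. This shows only the core strategy, not FedABC's personalized framework.

```python
# Sketch: one-vs-all training with per-class binary cross-entropy.
import torch
import torch.nn as nn

num_classes, feature_dim = 3, 16
model = nn.Linear(feature_dim, num_classes)  # one logit per binary problem
bce = nn.BCEWithLogitsLoss()

x = torch.randn(8, feature_dim)
y = torch.randint(0, num_classes, (8,))
targets = nn.functional.one_hot(y, num_classes).float()

# Each logit column is an independent "this class vs. the rest" classifier,
# rather than all classes competing inside a single softmax.
loss = bce(model(x), targets)
loss.backward()
```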
- FedPC: Federated Learning for Language Generation with Personal and Context Preference Embeddings [10.235620939242505]
Federated learning is a training paradigm that learns from multiple distributed users without aggregating data on a centralized server.
We propose a new direction for personalization research within federated learning, leveraging both personal embeddings and shared context embeddings.
We present an approach to predict these "preference" embeddings, enabling personalization without backpropagation.
arXiv Detail & Related papers (2022-10-07T18:01:19Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method to tackle the problems, which we refer to as Federated Matching (FedMatch).
arXiv Detail & Related papers (2020-06-22T09:43:41Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
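The multi-center mechanism in the entry above resembles k-means over client updates. The sketch below is a simplified alternating version (assign clients to the nearest center, then average within each cluster); the paper derives the user-center matching jointly with training, so this is only an approximation.

```python
# Sketch: k-means-style multi-center aggregation over flattened client updates.
import numpy as np

def multi_center_aggregate(client_params, centers, steps=5):
    """client_params: (num_clients, dim); centers: (K, dim). Returns the
    updated centers and each client's center assignment."""
    for _ in range(steps):
        # Match each client to its closest center.
        dists = np.linalg.norm(
            client_params[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Re-estimate each center as the mean of its assigned clients.
        for k in range(len(centers)):
            members = client_params[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    return centers, assign

clients = np.random.randn(10, 4)  # flattened local model updates
centers, assign = multi_center_aggregate(clients, np.random.randn(3, 4))
```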
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and accepts no responsibility for any consequences arising from its use.