FedPC: Federated Learning for Language Generation with Personal and
Context Preference Embeddings
- URL: http://arxiv.org/abs/2210.03766v1
- Date: Fri, 7 Oct 2022 18:01:19 GMT
- Title: FedPC: Federated Learning for Language Generation with Personal and
Context Preference Embeddings
- Authors: Andrew Silva, Pradyumna Tambwekar, Matthew Gombolay
- Abstract summary: Federated learning is a training paradigm that learns from multiple distributed users without aggregating data on a centralized server.
We propose a new direction for personalization research within federated learning, leveraging both personal embeddings and shared context embeddings.
We present an approach to predict these "preference" embeddings, enabling personalization without backpropagation.
- Score: 10.235620939242505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a training paradigm that learns from multiple
distributed users without aggregating data on a centralized server. Such a
paradigm promises the ability to deploy machine learning at scale to a diverse
population of end-users without first collecting a large, labeled dataset for
all possible tasks. As federated learning typically averages learning updates
across a decentralized population, there is a growing need for personalization
of federated learning systems (i.e., conversational agents must be able to
personalize to a specific user's preferences). In this work, we propose a new
direction for personalization research within federated learning, leveraging
both personal embeddings and shared context embeddings. We also present an
approach to predict these "preference" embeddings, enabling personalization
without backpropagation. Compared to state-of-the-art personalization
baselines, our approach achieves a 50% improvement in test-time perplexity
while using 0.001% of the memory required by baseline approaches, with
greater sample- and compute-efficiency.
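The abstract does not specify how the personal and context embeddings are wired into the generator, so the following is a minimal sketch of one plausible instantiation, assuming prefix-style conditioning of a small decoder; all class, layer, and parameter names are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class PreferenceConditionedLM(nn.Module):
    """Sketch: a decoder LM conditioned on a per-user 'personal' embedding
    and a shared 'context' embedding, both prepended as prefix vectors."""

    def __init__(self, vocab_size=1000, d_model=64, num_contexts=10):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Shared context embeddings: federated-averaged across clients.
        self.context_emb = nn.Embedding(num_contexts, d_model)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, personal_emb, context_id):
        # personal_emb: (batch, d_model); kept on-device, never aggregated.
        per = personal_emb.unsqueeze(1)                  # (batch, 1, d)
        ctx = self.context_emb(context_id).unsqueeze(1)  # (batch, 1, d)
        x = torch.cat([per, ctx, self.token_emb(tokens)], dim=1)
        h, _ = self.decoder(x)
        return self.head(h[:, 2:, :])  # logits for the token positions only
```

Since the paper predicts the preference embeddings rather than learning them by backpropagation, `personal_emb` here would be produced by a separate predictor at test time rather than by gradient updates.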
Related papers
- Personalized Federated Learning via Sequential Layer Expansion in Representation Learning [0.0]
Federated learning ensures the privacy of clients by conducting distributed training on individual client devices and sharing only the model weights with a central server.
We propose a new representation learning-based approach that decouples the deep learning model into finer-grained parts and trains them with suitable scheduling methods.
arXiv Detail & Related papers (2024-04-27T06:37:19Z)
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
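The Dis-PFL summary above hinges on personalized sparse masks; below is a hedged sketch of that idea using simple per-tensor magnitude pruning to build each client's mask. The mask-construction rule is an assumption for illustration, and all helper names are invented.

```python
import torch

def make_personal_mask(params, keep_ratio=0.2):
    """Build a binary mask per tensor by keeping the top-k largest-magnitude
    weights (one simple rule; the paper's criterion may differ)."""
    mask = {}
    for name, p in params.items():
        k = max(1, int(keep_ratio * p.numel()))
        thresh = p.abs().flatten().topk(k).values.min()
        mask[name] = (p.abs() >= thresh).float()
    return mask

def customize_local_model(global_params, mask):
    """Customize a sparse local model on the edge by zeroing the weights
    this client's mask does not select."""
    return {name: p * mask[name] for name, p in global_params.items()}
```

Clients with less compute can simply use a smaller `keep_ratio`, which matches the summary's point about adapting to heterogeneous local computation budgets.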
- FedEmbed: Personalized Private Federated Learning [13.356624498247069]
We present FedEmbed, a new approach to private federated learning for personalizing a global model.
We show that FedEmbed achieves up to 45% improvement over baseline approaches to personalized private federated learning.
arXiv Detail & Related papers (2022-02-18T23:35:06Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
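To make the EM-to-FedAvg correspondence above concrete, here is a toy numerical sketch: with an isotropic Gaussian prior centered at the server parameters, a hard E-step amounts to each client fitting a point estimate initialized at the server model, and the M-step for the prior mean is plain averaging, i.e., the FedAvg aggregation rule. `grad_fn` and `clients` are assumed helpers, not the paper's code.

```python
import numpy as np

def hard_em_round(server_theta, clients, grad_fn, lr=0.1, steps=5):
    """One round of hard EM under an isotropic Gaussian prior (illustrative).

    E-step: each client computes a point estimate of its local parameters,
    initialized at the prior mean (the server model) and fit to local data.
    M-step: the server re-estimates the prior mean as the average of the
    client estimates -- exactly FedAvg.
    """
    client_thetas = []
    for c in clients:
        theta = server_theta.copy()
        for _ in range(steps):               # local SGD ~ approximate E-step
            theta -= lr * grad_fn(theta, c)
        client_thetas.append(theta)
    return np.mean(client_thetas, axis=0)    # M-step = FedAvg averaging
```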
- Subspace Learning for Personalized Federated Optimization [7.475183117508927]
We propose a method to address the problem of personalized learning in AI systems.
We show that our method achieves consistent gains both in personalized and unseen client evaluation settings.
arXiv Detail & Related papers (2021-09-16T00:03:23Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
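A minimal sketch of the shared-representation pattern summarized above: a global representation that is averaged on the server and a small local head that never leaves the device, with many cheap head updates for every representation update. Layer sizes, step counts, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedRepModel(nn.Module):
    def __init__(self, d_in=32, d_rep=8, d_out=2):
        super().__init__()
        self.representation = nn.Linear(d_in, d_rep)  # shared, server-averaged
        self.head = nn.Linear(d_rep, d_out)           # personal, stays local

    def forward(self, x):
        return self.head(torch.relu(self.representation(x)))

def local_round(model, x, y, head_steps=10, rep_steps=1, lr=0.05):
    """Many local updates on the low-dimensional head, then a few on the
    shared representation, per the summary above."""
    loss_fn = nn.CrossEntropyLoss()
    head_opt = torch.optim.SGD(model.head.parameters(), lr=lr)
    rep_opt = torch.optim.SGD(model.representation.parameters(), lr=lr)
    for _ in range(head_steps):
        head_opt.zero_grad()
        loss_fn(model(x), y).backward()
        head_opt.step()
    for _ in range(rep_steps):
        rep_opt.zero_grad()
        loss_fn(model(x), y).backward()
        rep_opt.step()
    return model.representation.state_dict()  # only this goes to the server
```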
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
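The multi-center mechanism above alternates between matching users to centers and re-estimating each center from its matched users; a k-means-flavored sketch follows (the paper's exact matching objective may differ, and these helpers are invented for illustration).

```python
import numpy as np

def match_users_to_centers(client_models, centers):
    """Assign each client to the nearest center model by parameter distance."""
    return [int(np.argmin([np.linalg.norm(u - c) for c in centers]))
            for u in client_models]

def update_centers(client_models, assignment, centers):
    """Each center becomes the FedAvg of the client models matched to it;
    empty centers are left unchanged."""
    new_centers = []
    for j, c in enumerate(centers):
        members = [u for u, a in zip(client_models, assignment) if a == j]
        new_centers.append(np.mean(members, axis=0) if members else c)
    return new_centers
```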
- Personalized Federated Learning: A Meta-Learning Approach [28.281166755509886]
In Federated Learning, we aim to train models across multiple computing units (users).
In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data.
arXiv Detail & Related papers (2020-02-19T01:08:46Z)
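The formulation above is MAML-style: learn an initialization that performs well after one or a few local gradient steps. Below is a first-order sketch of the outer update (second-order terms dropped; `grad_fn(theta, c)` is an assumed helper returning the loss gradient on client c's data).

```python
import numpy as np

def personalized_meta_round(theta, clients, grad_fn, alpha=0.01, beta=0.1):
    """One server round: each client adapts the shared initialization with a
    single gradient step, and the server updates the initialization so that
    this one-step adaptation works well (first-order MAML approximation)."""
    meta_grad = np.zeros_like(theta)
    for c in clients:
        adapted = theta - alpha * grad_fn(theta, c)  # one-step personalization
        meta_grad += grad_fn(adapted, c)             # gradient after adaptation
    return theta - beta * meta_grad / len(clients)   # outer (server) update
```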