Personalized Federated Learning: A Meta-Learning Approach
- URL: http://arxiv.org/abs/2002.07948v4
- Date: Fri, 23 Oct 2020 03:04:01 GMT
- Title: Personalized Federated Learning: A Meta-Learning Approach
- Authors: Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
- Abstract summary: In Federated Learning, we aim to train models across multiple computing units (users) that communicate only with a common central server and never exchange their data samples.
In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent on their own data.
- Score: 28.281166755509886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Federated Learning, we aim to train models across multiple computing units
(users), while users can only communicate with a common central server, without
exchanging their data samples. This mechanism exploits the computational power
of all users and allows users to obtain a richer model as their models are
trained over a larger set of data points. However, this scheme only develops a
common output for all the users, and, therefore, it does not adapt the model to
each user. This is an important missing feature, especially given the
heterogeneity of the underlying data distribution for various users. In this
paper, we study a personalized variant of federated learning in which our
goal is to find an initial shared model that current or new users can easily
adapt to their local dataset by performing one or a few steps of gradient
descent with respect to their own data. This approach keeps all the benefits of
the federated learning architecture, and, by structure, leads to a more
personalized model for each user. We show this problem can be studied within
the Model-Agnostic Meta-Learning (MAML) framework. Inspired by this connection,
we study a personalized variant of the well-known Federated Averaging algorithm
and evaluate its performance in terms of gradient norm for non-convex loss
functions. Further, we characterize how this performance is affected by the
closeness of the underlying distributions of user data, measured in terms of
distribution distances such as Total Variation and the 1-Wasserstein metric.
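Concretely, the one-step adaptation described above corresponds to the standard one-step MAML objective, where α is the local step size and f_i is user i's loss:

```latex
\min_{w \in \mathbb{R}^d} \; F(w) := \frac{1}{n} \sum_{i=1}^{n} f_i\!\left(w - \alpha \nabla f_i(w)\right)
```

A minimal NumPy sketch of one round of a personalized FedAvg of this type follows. The helper names and the quadratic toy losses are illustrative assumptions, not the paper's setup, and the sketch drops the (I - α∇²f_i(w)) factor of the exact meta-gradient, i.e., it is a first-order approximation:

```python
import numpy as np

def local_maml_step(w, grad_fn, alpha=0.01, beta=0.1):
    """One first-order MAML-style local update (illustrative sketch).

    The inner step personalizes the shared model on the user's own data;
    the outer step moves the shared initialization so that the personalized
    model performs well. The exact meta-gradient would also carry an
    (I - alpha * Hessian) factor, omitted here.
    """
    w_personal = w - alpha * grad_fn(w)    # inner (adaptation) step
    return w - beta * grad_fn(w_personal)  # outer (meta) step

# Toy usage: user i has loss f_i(w) = 0.5 * ||w - c_i||^2, so grad f_i(w) = w - c_i.
centers = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
w = np.zeros(2)
for _ in range(200):
    # The server averages the users' locally updated models, as in FedAvg.
    w = np.mean([local_maml_step(w, lambda v, c=c: v - c) for c in centers],
                axis=0)
```

Each user then deploys w - α∇f_i(w) rather than w itself, which is what makes the single shared initialization personal.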
Related papers
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In some real-world applications, data samples are usually distributed on local devices.
In this paper, we focus on a special kind of non-I.I.D. setting in which clients own incomplete classes.
Our proposed algorithm, MAP, simultaneously achieves the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z)
- Dirichlet-based Uncertainty Quantification for Personalized Federated Learning with Improved Posterior Networks [9.54563359677778]
This paper presents a new approach to federated learning that allows selecting between the global and personalized models.
This is achieved through careful modeling of predictive uncertainties, which helps to detect local and global in- and out-of-distribution data.
A comprehensive experimental evaluation on popular real-world image datasets shows the superior performance of the model in the presence of out-of-distribution data.
arXiv Detail & Related papers (2023-12-18T14:30:05Z)
- Personalized Federated Learning through Local Memorization [10.925242558525683]
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients.
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
arXiv Detail & Related papers (2021-11-17T19:40:07Z)
- A Personalized Federated Learning Algorithm: an Application in Anomaly Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares its learning with a centralized model (server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg), which aims to control weight communication and aggregation, augmented with a tailored learning algorithm to personalize the resulting models at each client.
arXiv Detail & Related papers (2021-11-04T04:57:11Z)
- Federated Mixture of Experts [94.25278695272874]
FedMix is a framework that allows us to train an ensemble of specialized models.
We show that users with similar data characteristics select the same members and therefore share statistical strength (a gating sketch follows this entry).
arXiv Detail & Related papers (2021-07-14T14:15:24Z)
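One concrete reading of "users with similar data characteristics select the same members" is a per-user gating over the ensemble; the validation-loss rule below is an illustrative guess at such a mechanism, not necessarily FedMix's actual selection procedure:

```python
import numpy as np

def select_member(members, X_val, y_val, loss_fn):
    """Pick the specialized model with the lowest loss on this user's own
    validation data (illustrative gating; see the note above)."""
    losses = [loss_fn(m, X_val, y_val) for m in members]
    return int(np.argmin(losses))
```

Users drawn from near-identical distributions would then tend to select, and jointly refine, the same ensemble member.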
- Meta-HAR: Federated Representation Learning for Human Activity Recognition [21.749861229805727]
Human activity recognition (HAR) based on mobile sensors plays an important role in ubiquitous computing.
We propose Meta-HAR, a federated representation learning framework, in which a signal embedding network is meta-learned in a federated manner.
In order to boost the representation ability of the embedding network, we treat the HAR problem at each user as a different task and train the shared embedding network through the Model-Agnostic Meta-Learning (MAML) framework.
arXiv Detail & Related papers (2021-05-31T11:04:39Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation (see the sketch after this entry).
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
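A minimal sketch of the alternating scheme this entry describes (many cheap local-head steps per shared-representation step), assuming a linear representation and a squared loss; the names `phi`, `h`, and `client_round` are illustrative, not the paper's API:

```python
import numpy as np

def client_round(phi, h, X, y, head_steps=10, lr=0.1):
    """Several local-head updates, then one shared-representation update.

    phi: (d, k) shared representation with k << d; h: (k,) local head.
    Illustrative loss: 0.5 * ||X @ phi @ h - y||^2 / n.
    """
    n = len(y)
    for _ in range(head_steps):
        residual = X @ phi @ h - y
        h = h - lr * (X @ phi).T @ residual / n       # cheap: k-dimensional
    residual = X @ phi @ h - y
    phi = phi - lr * X.T @ np.outer(residual, h) / n  # single step on phi
    return phi, h
```

The server would then average only `phi` across clients while each client keeps its own `h`, so personalization lives entirely in the low-dimensional heads.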
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
Experiments show that FML can achieve better performance than alternatives in the typical federated learning setting.
arXiv Detail & Related papers (2020-06-27T09:35:03Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods (a sketch of one aggregation round follows this entry).
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
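A minimal sketch of one aggregation round matching the description above, where `user_loss` is a hypothetical hook that evaluates a center model on a user's local data:

```python
import numpy as np

def multi_center_round(centers, user_models, user_loss):
    """Match each user to its best-fitting center, then average the user
    models within each cluster (sketch of one aggregation round)."""
    match = [min(range(len(centers)), key=lambda c: user_loss(centers[c], u))
             for u in range(len(user_models))]
    for c in range(len(centers)):
        cluster = [user_models[u] for u, m in enumerate(match) if m == c]
        if cluster:  # leave a center unchanged if no user matches it
            centers[c] = np.mean(cluster, axis=0)
    return centers, match
```

Alternating the matching and averaging steps is what lets the method learn multiple global models while assigning each user to the center that best fits its data.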
This list is automatically generated from the titles and abstracts of the papers on this site.