Decentralized Personalized Online Federated Learning
- URL: http://arxiv.org/abs/2311.04817v1
- Date: Wed, 8 Nov 2023 16:42:10 GMT
- Title: Decentralized Personalized Online Federated Learning
- Authors: Renzhi Wu and Saayan Mitra and Xiang Chen and Anup Rao
- Abstract summary: Vanilla federated learning does not support learning in an online environment, learning a personalized model on each client, or learning in a decentralized setting.
We propose a new learning setting, Decentralized Personalized Online Federated Learning, that considers all three aspects at the same time.
We verify the effectiveness and robustness of our proposed method on three real-world item recommendation datasets and one air quality prediction dataset.
- Score: 13.76896613426515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vanilla federated learning does not support learning in an online
environment, learning a personalized model on each client, and learning in a
decentralized setting. There are existing methods extending federated learning
in each of the three aspects. However, some important applications on
enterprise edge servers (e.g. online item recommendation at global scale)
involve the three aspects at the same time. Therefore, we propose a new
learning setting \textit{Decentralized Personalized Online Federated Learning}
that considers all three aspects at the same time.
In this new setting for learning, the first technical challenge is how to
aggregate the shared model parameters from neighboring clients to obtain a
personalized local model with good performance on each client. We propose to
directly learn an aggregation by optimizing the performance of the local model
with respect to the aggregation weights. This not only improves personalization
of each local model but also helps the local model adapt to potential data
shift by intelligently incorporating the right amount of information from its
neighbors. The second challenge is how to select the neighbors for each client.
We propose a peer selection method based on the learned aggregation weights
enabling each client to select the most helpful neighbors and reduce
communication cost at the same time. We verify the effectiveness and robustness
of our proposed method on three real-world item recommendation datasets and one
air quality prediction dataset.
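To make the aggregation idea concrete, below is a minimal PyTorch sketch of per-client learnable aggregation weights, assuming each client keeps one softmax-normalized weight per neighbor (plus itself) and updates those weights by gradient descent on the loss of the merged model over the current online batch. All names here (Client, weight_step, select_peers, lr_w) are illustrative assumptions, not the paper's actual implementation or objective.

```python
import torch
from torch.func import functional_call


class Client:
    """Hypothetical client with learnable aggregation weights (illustrative)."""

    def __init__(self, model, num_neighbors, lr_w=0.1):
        self.model = model
        self.names = [n for n, _ in model.named_parameters()]
        # One learnable logit for the client itself plus one per neighbor.
        self.logits = torch.zeros(num_neighbors + 1, requires_grad=True)
        self.opt_w = torch.optim.SGD([self.logits], lr=lr_w)

    def merged_params(self, neighbor_params):
        """Softmax-weighted average of own and neighbors' parameters.

        neighbor_params: list of {name: tensor} dicts received from peers.
        """
        w = torch.softmax(self.logits, dim=0)
        own = {n: p.detach() for n, p in self.model.named_parameters()}
        versions = [own] + neighbor_params
        return {n: sum(w[i] * versions[i][n] for i in range(len(versions)))
                for n in self.names}

    def weight_step(self, x, y, neighbor_params, loss_fn):
        """Update the aggregation weights on the current online batch."""
        params = self.merged_params(neighbor_params)
        # functional_call evaluates the model with the merged parameters,
        # keeping the computation graph back to self.logits.
        loss = loss_fn(functional_call(self.model, params, (x,)), y)
        self.opt_w.zero_grad()
        loss.backward()
        self.opt_w.step()
        return loss.item()

    def select_peers(self, k):
        """Keep the k neighbors with the largest learned weights."""
        w = torch.softmax(self.logits, dim=0)[1:]  # index 0 is the self weight
        return torch.topk(w, min(k, w.numel())).indices.tolist()


# Example usage with two hypothetical neighbors:
model = torch.nn.Linear(4, 1)
client = Client(model, num_neighbors=2)
peers = [{n: torch.randn_like(p) for n, p in model.named_parameters()}
         for _ in range(2)]
x, y = torch.randn(8, 4), torch.randn(8, 1)
client.weight_step(x, y, peers, torch.nn.functional.mse_loss)
print(client.select_peers(k=1))
```

Peer selection then simply keeps the top-k neighbors by learned weight, matching the abstract's idea of reusing the aggregation weights to pick the most helpful neighbors while reducing communication cost.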
Related papers
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data uniform sampling strategy for federated learning (FedSampling)
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- PeFLL: Personalized Federated Learning by Learning to Learn [16.161876130822396]
We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects.
At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork.
arXiv Detail & Related papers (2023-06-08T19:12:42Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Personalized Federated Learning through Local Memorization [10.925242558525683]
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients.
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
arXiv Detail & Related papers (2021-11-17T19:40:07Z)
- Subspace Learning for Personalized Federated Optimization [7.475183117508927]
We propose a method to address the problem of personalized learning in AI systems.
We show that our method achieves consistent gains both in personalized and unseen client evaluation settings.
arXiv Detail & Related papers (2021-09-16T00:03:23Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- Unifying Distillation with Personalization in Federated Learning [1.8262547855491458]
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data.
In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients.
In this paper, we address this problem with PersFL, a two-stage personalized learning algorithm.
In the first stage, PersFL finds the optimal teacher model of each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from these optimal teachers into each client's local model.
arXiv Detail & Related papers (2021-05-31T17:54:29Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
- Information-Theoretic Bounds on the Generalization Error and Privacy Leakage in Federated Learning [96.38757904624208]
Machine learning algorithms on mobile networks can be grouped into three different categories.
The main objective of this work is to provide an information-theoretic framework for all of the aforementioned learning paradigms.
arXiv Detail & Related papers (2020-05-05T21:23:45Z)