Personalized Online Federated Learning with Multiple Kernels
- URL: http://arxiv.org/abs/2311.05108v1
- Date: Thu, 9 Nov 2023 02:51:37 GMT
- Title: Personalized Online Federated Learning with Multiple Kernels
- Authors: Pouya M. Ghari, Yanning Shen
- Abstract summary: Federated learning enables a group of learners (called clients) to train a multi-kernel learning (MKL) model on the data distributed among the clients.
The present paper develops an algorithmic framework that enables clients to communicate with the server at affordable communication cost while employing a large dictionary of kernels.
We prove that, using the proposed online federated MKL algorithm, each client enjoys sub-linear regret with respect to the random feature (RF) approximation of its best kernel in hindsight.
- Score: 26.823435733330705
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Multi-kernel learning (MKL) exhibits well-documented performance in online non-linear function approximation. Federated learning enables a group of learners (called clients) to train an MKL model on the data distributed among the clients to perform online non-linear function approximation. Online federated MKL raises challenges that need to be addressed: i) communication efficiency, especially when a large number of kernels is considered; and ii) heterogeneous data distribution among clients. The present paper develops an algorithmic framework that enables clients to communicate with the server and send their updates at affordable communication cost while employing a large dictionary of kernels. Utilizing random feature (RF) approximation, the present paper proposes a scalable online federated MKL algorithm. We prove that, using the proposed online federated MKL algorithm, each client enjoys sub-linear regret with respect to the RF approximation of its best kernel in hindsight, which indicates that the proposed algorithm can effectively deal with the heterogeneity of the data distributed among clients. Experimental results on real datasets showcase the advantages of the proposed algorithm compared with other online federated kernel learning algorithms.
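To make the random feature construction concrete, below is a minimal single-client sketch: each RBF kernel in a small dictionary is approximated by random Fourier features, a per-kernel linear learner is updated online, and the kernel weights are refined multiplicatively from instantaneous losses. The bandwidths, step sizes, and toy data stream are illustrative assumptions, and the client-server aggregation of the federated algorithm is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def rf_features(X, omega, b):
    """Random Fourier features approximating a Gaussian (RBF) kernel."""
    D = omega.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)

# Dictionary of RBF kernels with different bandwidths (assumed values).
bandwidths = [0.5, 1.0, 2.0]
d, D = 5, 100                      # input dim, number of random features
kernels = []
for s in bandwidths:
    omega = rng.normal(scale=1.0 / s, size=(d, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    kernels.append((omega, b))

# Online MKL: one linear model per kernel in RF space, combined with
# multiplicative (exponentially weighted) kernel weights.
thetas = [np.zeros(D) for _ in kernels]
w = np.ones(len(kernels)) / len(kernels)
eta_model, eta_w = 0.1, 0.5        # step sizes (assumed)

for t in range(1000):
    x = rng.normal(size=d)
    y = np.sin(x.sum())            # toy target
    zs = [rf_features(x[None, :], om, b)[0] for om, b in kernels]
    preds = np.array([z @ th for z, th in zip(zs, thetas)])
    y_hat = w @ preds              # weighted multi-kernel prediction
    losses = (preds - y) ** 2
    # Gradient step per kernel model, then multiplicative weight update.
    for k, z in enumerate(zs):
        thetas[k] = thetas[k] - eta_model * 2 * (preds[k] - y) * z
    w *= np.exp(-eta_w * losses)
    w /= w.sum()
```

In the federated variant, clients would transmit (subsets of) these per-kernel models and weights to the server for aggregation, which is where the communication-efficiency question raised in the abstract arises.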
Related papers
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
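The event-triggered idea in the entry above can be illustrated with a generic rule: a client uploads only when its model has drifted sufficiently since its last transmission. This is a hedged sketch of the communication trigger only, not the paper's SAGA-based method; the threshold and the local update are assumptions.

```python
import numpy as np

def should_transmit(theta_local, theta_last_sent, threshold=0.1):
    """Event-triggered rule (assumed): upload only when the local model
    has drifted enough since the last transmission."""
    return np.linalg.norm(theta_local - theta_last_sent) > threshold

rng = np.random.default_rng(0)
theta = np.zeros(4)
last_sent = theta.copy()
for step in range(100):
    theta = theta - 0.05 * rng.normal(size=4)   # stand-in local update
    if should_transmit(theta, last_sent):
        # here the client would send (theta - last_sent) to its server
        last_sent = theta.copy()
```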
- Federated Learning Can Find Friends That Are Advantageous [14.993730469216546]
In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.
We introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective.
arXiv Detail & Related papers (2024-02-07T17:46:37Z)
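A minimal sketch of similarity-weighted aggregation in the spirit of this entry: the server weights each client update by its alignment with a reference direction. The cosine-similarity rule and the reference gradient are illustrative assumptions, not the paper's exact weighting scheme.

```python
import numpy as np

def aggregate_with_adaptive_weights(global_model, client_updates, reference):
    """Weight each client update by its (clipped) cosine similarity to a
    reference direction, e.g. a gradient on server-held validation data.
    The similarity rule is a hypothetical stand-in for the paper's weights."""
    sims = np.array([
        max(0.0, u @ reference /
            (np.linalg.norm(u) * np.linalg.norm(reference) + 1e-12))
        for u in client_updates
    ])
    if sims.sum() == 0:
        w = np.ones(len(client_updates)) / len(client_updates)
    else:
        w = sims / sims.sum()
    return global_model + sum(wi * ui for wi, ui in zip(w, client_updates))
```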
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
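A per-client AMSGrad-style optimizer shows how client-specific learning-rate adaptation can be realized: each client keeps its own moment estimates, so effective step sizes differ across heterogeneous data. Hyperparameter values are assumptions, and FedLALR's actual scheduling rule is not reproduced here.

```python
import numpy as np

class ClientAMSGrad:
    """Per-client AMSGrad-style optimizer; each client keeps its own
    second-moment statistics, so effective learning rates adapt locally.
    Hyperparameters are illustrative assumptions."""
    def __init__(self, dim, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.v_hat = np.zeros(dim)

    def step(self, theta, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)   # AMSGrad max-trick
        return theta - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# opt = ClientAMSGrad(dim=10)
# theta = opt.step(theta, grad)   # called inside each client's local loop
```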
- Federated cINN Clustering for Accurate Clustered Federated Learning [33.72494731516968]
Federated Learning (FL) presents an innovative approach to privacy-preserving distributed machine learning.
We propose the Federated cINN Clustering Algorithm (FCCA) to robustly cluster clients into different groups.
arXiv Detail & Related papers (2023-09-04T10:47:52Z)
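A generic stand-in for the clustering step that this entry describes: group clients by the similarity of their model-update vectors with plain k-means. FCCA's cINN-based mechanism is more involved; this sketch only illustrates the clustered-FL pattern, not the paper's method.

```python
import numpy as np

def cluster_clients(update_vectors, n_clusters, n_iter=20, seed=0):
    """Group clients by the similarity of their model updates using
    plain k-means -- a generic stand-in for FCCA's cINN-based clustering."""
    rng = np.random.default_rng(seed)
    X = np.stack(update_vectors)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(
            ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels
```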
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed by reformulating the optimization of training latency as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
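A simplified heuristic for the pairing idea: sort clients by compute speed and pair the fastest with the slowest to balance per-pair latency. The paper casts pairing as graph edge selection; the greedy matching below is only an illustrative approximation with assumed speed values.

```python
def pair_clients_greedy(compute_speeds):
    """Pair resource-rich clients with resource-poor ones to balance
    per-pair training latency -- a simplified heuristic standing in for
    the paper's graph-edge-selection formulation."""
    order = sorted(range(len(compute_speeds)), key=lambda i: compute_speeds[i])
    pairs = []
    lo, hi = 0, len(order) - 1
    while lo < hi:
        pairs.append((order[lo], order[hi]))  # slowest with fastest
        lo += 1
        hi -= 1
    return pairs

# Example: 6 clients with assumed relative speeds.
print(pair_clients_greedy([1.0, 3.5, 0.5, 2.0, 4.0, 1.5]))
# -> [(2, 4), (0, 1), (5, 3)]
```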
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
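Analog over-the-air computation exploits the superposition property of the wireless channel: the server receives one noisy weighted sum of client signals rather than separate uploads, which is how it addresses the communication bottleneck. The unit channel gains and noise level below are assumptions for illustration.

```python
import numpy as np

def over_the_air_aggregate(client_signals, channel_gains, noise_std, rng):
    """Analog over-the-air computation: signals superpose in the channel,
    so the receiver observes one noisy weighted sum instead of separate
    uploads. Gains and noise level are illustrative assumptions."""
    superposed = sum(h * s for h, s in zip(channel_gains, client_signals))
    return superposed + rng.normal(scale=noise_std, size=superposed.shape)

rng = np.random.default_rng(1)
signals = [rng.normal(size=8) for _ in range(4)]   # 4 clients' model updates
gains = np.ones(4)                                  # idealized equal gains
noisy_sum = over_the_air_aggregate(signals, gains, noise_std=0.01, rng=rng)
estimate = noisy_sum / 4                            # server's average estimate
```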
- Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered federated learning has been shown to produce promising results by grouping clients into clusters.
Existing FL algorithms essentially try to group together clients with similar data distributions.
Prior FL algorithms attempt to identify these similarities indirectly during training.
arXiv Detail & Related papers (2022-09-21T17:37:54Z)
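Principal angles between two clients' data subspaces can be computed with a standard QR-plus-SVD construction, sketched below; small angles indicate similar subspaces. The data matrices here are synthetic placeholders, not anything from the paper.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B, from the
    SVD of Q_A^T Q_B (a standard construction)."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))

rng = np.random.default_rng(2)
X1 = rng.normal(size=(100, 3))               # client 1 data (synthetic)
X2 = X1 + 0.1 * rng.normal(size=(100, 3))    # a similar client
print(principal_angles(X1, X2))              # small angles => similar subspaces
```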
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
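As a sanity check on the stated EM-FedAvg correspondence: under an isotropic Gaussian prior, the hard-EM M-step for the server parameter reduces to an unweighted average of the client parameters, i.e., a FedAvg-style update. The paper's full treatment (e.g., any data-size weighting) is richer than this one-line sketch.

```latex
% Hard-EM M-step with isotropic Gaussian priors
% p(\theta_k \mid \mu) = \mathcal{N}(\theta_k; \mu, \sigma^2 I)
% over K client models: maximizing over the server parameter \mu
% is a least-squares problem solved by the plain average.
\[
\mu^\star
  = \arg\max_{\mu} \sum_{k=1}^{K} \log \mathcal{N}(\theta_k;\, \mu,\, \sigma^2 I)
  = \arg\min_{\mu} \sum_{k=1}^{K} \lVert \theta_k - \mu \rVert_2^2
  = \frac{1}{K} \sum_{k=1}^{K} \theta_k .
\]
```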
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
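A linear-model sketch of the shared-representation scheme described above: in each round, a client takes many cheap steps on its low-dimensional local head and a single step on the shared representation. The linear model, step counts, and step size are assumptions for illustration.

```python
import numpy as np

def local_round(B, w, X, y, lr=0.05, head_steps=10):
    """One client round in a shared-representation scheme.
    Shapes: X (n, d) local data, B (d, r) shared representation,
    w (r,) local head. Many cheap head updates, one representation update."""
    for _ in range(head_steps):                 # update local head
        resid = X @ B @ w - y
        w = w - lr * (X @ B).T @ resid / len(y)
    resid = X @ B @ w - y                       # update shared representation
    B = B - lr * X.T @ np.outer(resid, w) / len(y)
    return B, w
```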
- Graph-Aided Online Multi-Kernel Learning [12.805267089186533]
This paper studies data-driven selection of kernels from a dictionary that provide satisfactory function approximations.
Based on the similarities among kernels, the novel framework constructs and refines a graph to assist choosing a subset of kernels.
Our proposed algorithms enjoy a tighter sub-linear regret bound compared with state-of-the-art graph-based online MKL alternatives.
arXiv Detail & Related papers (2021-02-09T07:43:29Z)
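A simplified illustration of graph-assisted kernel selection in the spirit of this entry: keep the kernel with the smallest cumulative loss and add its close neighbors in a kernel-similarity graph. The similarity threshold and graph values below are assumed, not the paper's construction or refinement rule.

```python
import numpy as np

def select_kernel_subset(kernel_losses, similarity, tau=0.8):
    """Keep the best-performing kernel plus its close neighbors in a
    kernel-similarity graph -- a simplified stand-in for the paper's
    graph-based subset selection."""
    best = int(np.argmin(kernel_losses))
    neighbors = [j for j in range(len(kernel_losses))
                 if j != best and similarity[best, j] >= tau]
    return [best] + neighbors

# Example: 4 kernels with assumed cumulative losses and similarity graph.
losses = np.array([3.2, 1.1, 2.5, 1.3])
sim = np.array([[1.0, 0.2, 0.9, 0.1],
                [0.2, 1.0, 0.3, 0.85],
                [0.9, 0.3, 1.0, 0.2],
                [0.1, 0.85, 0.2, 1.0]])
print(select_kernel_subset(losses, sim))   # -> [1, 3]
```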
This list is automatically generated from the titles and abstracts of the papers in this site.