Federated Learning While Providing Model as a Service: Joint Training
and Inference Optimization
- URL: http://arxiv.org/abs/2312.12863v2
- Date: Thu, 21 Dec 2023 06:30:46 GMT
- Title: Federated Learning While Providing Model as a Service: Joint Training
and Inference Optimization
- Authors: Pengchao Han, Shiqiang Wang, Yang Jiao, Jianwei Huang
- Abstract summary: Federated learning is beneficial for enabling the training of models across distributed clients.
Existing work has overlooked the coexistence of model training and inference under clients' limited resources.
This paper focuses on the joint optimization of model training and inference to maximize inference performance at clients.
- Score: 30.305956110710266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While providing machine learning model as a service to process users'
inference requests, online applications can periodically upgrade the model
utilizing newly collected data. Federated learning (FL) is beneficial for
enabling the training of models across distributed clients while keeping the
data locally. However, existing work has overlooked the coexistence of model
training and inference under clients' limited resources. This paper focuses on
the joint optimization of model training and inference to maximize inference
performance at clients. Such an optimization faces several challenges. The
first challenge is to characterize the clients' inference performance when
clients may partially participate in FL. To resolve this challenge, we
introduce a new notion of age of model (AoM) to quantify client-side model
freshness, based on which we use FL's global model convergence error as an
approximate measure of inference performance. The second challenge is the tight
coupling among clients' decisions, including participation probability in FL,
model download probability, and service rates. To address these challenges, we
propose an online problem approximation to reduce the problem complexity and
optimize the resources to balance the needs of model training and inference.
Experimental results demonstrate that the proposed algorithm improves the
average inference accuracy by up to 12%.
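To make the AoM idea concrete, here is a minimal sketch in Python under our own assumptions: the names `ServingClient`, `p_participate`, and `p_download`, and the refresh rule, are hypothetical stand-ins for the paper's optimized decision variables, not its actual formulation.

```python
import random

class ServingClient:
    """Toy FL client that also serves inference requests (a hedged sketch,
    not the paper's algorithm). `p_participate` and `p_download` stand in
    for the optimized participation and model-download probabilities."""

    def __init__(self, p_participate: float, p_download: float):
        self.p_participate = p_participate
        self.p_download = p_download
        self.aom = 0  # age of model: rounds since the local model was refreshed

    def round(self) -> None:
        # Assumption: a client refreshes its local model either by joining
        # the FL round (which requires pulling the global model) or by an
        # explicit download; otherwise its local copy ages by one round.
        participates = random.random() < self.p_participate
        downloads = random.random() < self.p_download
        if participates or downloads:
            self.aom = 0
        else:
            self.aom += 1

# A larger AoM means staler inference; the paper uses the global model's
# convergence error as an approximate measure of the resulting accuracy.
client = ServingClient(p_participate=0.5, p_download=0.3)
for _ in range(20):
    client.round()
print("AoM after 20 rounds:", client.aom)
```

The joint optimization in the paper then tunes such probabilities, together with the inference service rates, so that a client's limited resources are split between keeping its model fresh and serving requests.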
Related papers
- FedPAE: Peer-Adaptive Ensemble Learning for Asynchronous and Model-Heterogeneous Federated Learning [9.084674176224109]
Federated learning (FL) enables multiple clients with distributed data sources to collaboratively train a shared model without compromising data privacy.
We introduce Federated Peer-Adaptive Ensemble Learning (FedPAE), a fully decentralized pFL algorithm that supports model heterogeneity and asynchronous learning.
Our approach utilizes a peer-to-peer model sharing mechanism and ensemble selection to achieve a more refined balance between local and global information.
arXiv Detail & Related papers (2024-10-17T22:47:19Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Effectively Heterogeneous Federated Learning: A Pairing and Split
Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed by recasting the training-latency optimization as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z) - Confidence-aware Personalized Federated Learning via Variational
Expectation Maximization [34.354154518009956]
Personalized Federated Learning (PFL) is a distributed learning scheme to train a shared model across clients.
We present a novel framework for PFL based on hierarchical modeling and variational inference.
arXiv Detail & Related papers (2023-05-21T20:12:27Z) - Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T12:21:34Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness
Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model by optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distributions are non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z) - An Efficiency-boosting Client Selection Scheme for Federated Learning
with Fairness Guarantee [36.07970788489]
Federated Learning is a new paradigm to cope with the privacy issue by allowing clients to perform model training locally.
The client selection policy is critical to an FL process in terms of training efficiency, the final model's quality as well as fairness.
In this paper, we model fairness-guaranteed client selection as a Lyapunov optimization problem and propose a C2MAB-based method to estimate the model exchange time (see the sketch after this list).
arXiv Detail & Related papers (2020-11-03T15:27:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.