SplitGP: Achieving Both Generalization and Personalization in Federated
Learning
- URL: http://arxiv.org/abs/2212.08343v1
- Date: Fri, 16 Dec 2022 08:37:24 GMT
- Title: SplitGP: Achieving Both Generalization and Personalization in Federated
Learning
- Authors: Dong-Jun Han, Do-Yeon Kim, Minseok Choi, Christopher G. Brinton,
Jaekyun Moon
- Abstract summary: SplitGP captures generalization and personalization capabilities for efficient inference across resource-constrained clients.
We analytically characterize the convergence behavior of SplitGP, revealing that all client models approach stationary points asymptotically.
Experimental results show that SplitGP outperforms existing baselines by wide margins in inference time and test accuracy for varying amounts of out-of-distribution samples.
- Score: 31.105681433459285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fundamental challenge to providing edge-AI services is the need for a
machine learning (ML) model that achieves personalization (i.e., to individual
clients) and generalization (i.e., to unseen data) properties concurrently.
Existing techniques in federated learning (FL) have encountered a steep
tradeoff between these objectives and impose large computational requirements
on edge devices during training and inference. In this paper, we propose
SplitGP, a new split learning solution that can simultaneously capture
generalization and personalization capabilities for efficient inference across
resource-constrained clients (e.g., mobile/IoT devices). Our key idea is to
split the full ML model into client-side and server-side components, and impose
different roles to them: the client-side model is trained to have strong
personalization capability optimized to each client's main task, while the
server-side model is trained to have strong generalization capability for
handling all clients' out-of-distribution tasks. We analytically characterize
the convergence behavior of SplitGP, revealing that all client models approach
stationary points asymptotically. Further, we analyze the inference time in
SplitGP and provide bounds for determining model split ratios. Experimental
results show that SplitGP outperforms existing baselines by wide margins in
inference time and test accuracy for varying amounts of out-of-distribution
samples.
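To make the split concrete, here is a minimal PyTorch sketch of the client/server decomposition described in the abstract. The layer sizes, the split point, and all names are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the client-side / server-side model split.
# Layer sizes, split index, and names are illustrative assumptions.
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    def __init__(self, split_layer: int):
        super().__init__()
        layers = [
            nn.Flatten(),
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 10),
        ]
        # Client-side component: stays on-device and is trained for
        # personalization on the client's main task.
        self.client_side = nn.Sequential(*layers[:split_layer])
        # Server-side component: trained for generalization across all
        # clients' out-of-distribution tasks.
        self.server_side = nn.Sequential(*layers[split_layer:])

    def forward(self, x):
        smashed = self.client_side(x)  # activations sent to the server
        return self.server_side(smashed)

model = SplitModel(split_layer=3)  # the split ratio is a design knob
x = torch.randn(8, 1, 28, 28)
print(model(x).shape)  # torch.Size([8, 10])
```

The choice of `split_layer` trades on-device computation against what must be offloaded, which is the quantity the paper's inference-time bounds are meant to guide.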
Related papers
- FedReMa: Improving Personalized Federated Learning via Leveraging the Most Relevant Clients [13.98392319567057]
Federated Learning (FL) is a distributed machine learning paradigm that achieves a globally robust model through decentralized computation and periodic model synthesis.
Despite their wide adoption, existing FL and personalized FL (PFL) works have yet to comprehensively address the class-imbalance issue.
We propose FedReMa, an efficient PFL algorithm that can tackle class-imbalance by utilizing an adaptive inter-client co-learning approach.
arXiv Detail & Related papers (2024-11-04T05:44:28Z) - Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called Multi-level Additive Models (MAM)'', for better knowledge-sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned to at most one model per level and its personalized prediction sums up the outputs of models assigned to it across all levels.
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
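As a rough illustration of the additive prediction just described (shapes and names are assumptions, not from the paper's code):

```python
# Sketch of FeMAM-style additive prediction: a client's output is the
# sum of the outputs of the models assigned to it, one per level.
import torch
import torch.nn as nn

levels = [nn.Linear(16, 4) for _ in range(3)]  # one assigned model per level

def femam_predict(x, assigned_models):
    # Personalized prediction sums the model outputs across all levels.
    return sum(m(x) for m in assigned_models)

x = torch.randn(8, 16)
print(femam_predict(x, levels).shape)  # torch.Size([8, 4])
```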
arXiv Detail & Related papers (2024-05-26T07:54:53Z) - Personalized Federated Learning via Sequential Layer Expansion in Representation Learning [0.0]
Federated learning ensures the privacy of clients by conducting distributed training on individual client devices and sharing only the model weights with a central server.
We propose a new representation learning-based approach that decouples the entire deep learning model into more finely divided parts and applies suitable scheduling methods.
arXiv Detail & Related papers (2024-04-27T06:37:19Z) - Personalized Federated Learning via Amortized Bayesian Meta-Learning [21.126405589760367]
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
arXiv Detail & Related papers (2023-07-05T11:58:58Z) - Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
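A toy numpy illustration of why analog over-the-air computation eases the communication bottleneck: all clients transmit at once and the channel itself sums their signals. The channel and noise model here are simplifying assumptions, not the paper's system model.

```python
# Toy over-the-air aggregation: the server receives the superposition
# of all client signals simultaneously, so aggregation cost does not
# grow with the number of clients.
import numpy as np

rng = np.random.default_rng(0)
client_updates = [rng.normal(size=128) for _ in range(10)]

# The wireless channel sums the simultaneous transmissions; the server
# observes that sum corrupted by receiver noise.
noise = rng.normal(scale=0.01, size=128)
received = np.sum(client_updates, axis=0) + noise

global_update = received / len(client_updates)  # noisy federated average
```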
arXiv Detail & Related papers (2023-02-24T08:41:19Z) - Optimizing Server-side Aggregation For Robust Federated Learning via
Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity of SmartFL.
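One way to picture server-side aggregation as subspace training, guessed from the title rather than taken from the paper: fit mixing weights over the client updates by minimizing a server-side objective. The objective below is a synthetic stand-in.

```python
# Hypothetical sketch: learn aggregation weights within the subspace
# spanned by client updates, using a synthetic proxy for server loss.
import torch

client_updates = torch.randn(5, 100)        # 5 clients, flattened updates
target = torch.randn(100)                   # stand-in for a server-data signal
alpha = torch.zeros(5, requires_grad=True)  # learnable aggregation weights
opt = torch.optim.Adam([alpha], lr=0.1)

for _ in range(100):
    opt.zero_grad()
    aggregated = torch.softmax(alpha, dim=0) @ client_updates
    loss = ((aggregated - target) ** 2).mean()  # proxy for server-side loss
    loss.backward()
    opt.step()
```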
arXiv Detail & Related papers (2022-11-10T13:20:56Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated
Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
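A plain-Python stand-in for such a measure (the paper's actual measure, and its privacy-preserving computation via homomorphic encryption, are not reproduced here): greedily pick clients whose pooled label histogram stays closest to uniform.

```python
# Toy class-imbalance measure: squared distance of the selected
# clients' pooled label histogram from the uniform distribution.
import numpy as np

def imbalance(client_counts, selected):
    pooled = np.sum([client_counts[i] for i in selected], axis=0)
    p = pooled / pooled.sum()
    return np.square(p - 1.0 / len(p)).sum()

# counts[i, c] = number of class-c samples held by client i
counts = np.random.default_rng(1).integers(1, 50, size=(20, 10))

# Greedily grow a group of 5 clients whose pooled data stays balanced.
selected = []
for _ in range(5):
    best = min(set(range(20)) - set(selected),
               key=lambda i: imbalance(counts, selected + [i]))
    selected.append(best)
print(selected)
```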
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles: straggling clients and the need for personalized solutions.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
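The shared-representation idea can be sketched as follows; shapes and names are illustrative, and the actual procedure (alternating updates, straggler handling) is omitted.

```python
# Sketch: one globally shared feature extractor learned from all
# clients' data, plus one lightweight personalized head per client.
import torch
import torch.nn as nn

shared_encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())  # global
client_heads = {cid: nn.Linear(16, 2) for cid in range(5)}    # per client

def personalized_forward(x, cid):
    # Common representation feeds each client's user-specific parameters.
    return client_heads[cid](shared_encoder(x))

x = torch.randn(4, 32)
print(personalized_forward(x, cid=0).shape)  # torch.Size([4, 2])
```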
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z) - Personalized Federated Learning by Structured and Unstructured Pruning
under Data Heterogeneity [3.291862617649511]
We propose a new approach for obtaining a personalized model from a client-level objective.
To realize this personalization, we find a small subnetwork for each client.
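One simple way to picture a per-client subnetwork is a magnitude-based binary mask over the global weights; the paper's structured and unstructured pruning criteria may differ from this sketch.

```python
# Toy subnetwork selection: keep the 30% largest-magnitude entries of
# the global weights for this client, zero out the rest.
import torch

global_weight = torch.randn(128, 64)
k = int(0.3 * global_weight.numel())  # entries this client keeps
threshold = global_weight.abs().flatten().kthvalue(
    global_weight.numel() - k).values
client_mask = (global_weight.abs() > threshold).float()
client_weight = global_weight * client_mask  # personalized subnetwork
```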
arXiv Detail & Related papers (2021-05-02T22:10:46Z) - QuPeL: Quantized Personalization with Applications to Federated Learning [8.420943739336067]
In this work, we introduce a quantized and personalized FL algorithm, QuPeL, that facilitates collective training with heterogeneous clients.
For personalization, we allow clients to learn compressed personalized models with different quantization parameters depending on their resources.
Numerically, we show that optimizing over the quantization levels improves performance, and we validate that QuPeL outperforms both FedAvg and local training of clients in a heterogeneous setting.
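An illustrative uniform quantizer shows how per-client quantization parameters trade accuracy for model size; QuPeL's actual quantizer and its learned quantization levels are more involved than this assumption-laden sketch.

```python
# Toy uniform quantization: clients with fewer resources pick fewer
# levels for their compressed personalized model weights.
import numpy as np

def quantize(w, num_levels):
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (num_levels - 1)
    return lo + np.round((w - lo) / step) * step

w = np.random.default_rng(2).normal(size=1000)
w_low = quantize(w, num_levels=4)     # resource-constrained client
w_high = quantize(w, num_levels=256)  # well-provisioned client
print(np.abs(w - w_low).mean(), np.abs(w - w_high).mean())
```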
arXiv Detail & Related papers (2021-02-23T16:43:51Z)