A Family of Hybrid Federated and Centralized Learning Architectures in
Machine Learning
- URL: http://arxiv.org/abs/2105.03288v1
- Date: Fri, 7 May 2021 14:28:33 GMT
- Title: A Family of Hybrid Federated and Centralized Learning Architectures in
Machine Learning
- Authors: Ahmet M. Elbir and Sinem Coleri
- Abstract summary: We propose hybrid federated and centralized learning (HFCL) for machine learning tasks.
In HFCL, only the clients with sufficient resources employ FL, while the remaining ones send their datasets to the PS, which computes the model on their behalf.
The HFCL frameworks outperform FL with up to $20\%$ improvement in learning accuracy when only half of the clients perform FL, while having $50\%$ less communication overhead than CL.
- Score: 7.99536002595393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many machine learning tasks rely on centralized learning (CL), which
requires the transmission of local datasets from the clients to a parameter
server (PS) and entails a huge communication overhead. To overcome this,
federated learning (FL) has emerged as a promising tool, wherein the clients
send only their model updates to the PS instead of the whole dataset. However,
FL demands powerful computational resources from the clients, so clients
without sufficient computational resources cannot participate in training. To
address this issue, we introduce a more practical approach called hybrid
federated and centralized learning (HFCL), wherein only the clients with
sufficient resources employ FL, while the remaining ones send their datasets to
the PS, which computes the model on their behalf. The model parameters
corresponding to all clients are then aggregated at the PS. To improve the
efficiency of dataset transmission, we propose two different techniques:
increased computation-per-client and sequential data transmission. The HFCL
frameworks outperform FL with up to $20\%$ improvement in learning accuracy
when only half of the clients perform FL, while having $50\%$ less
communication overhead than CL, since all clients collaborate on the learning
process with their datasets.
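As a concrete picture of one HFCL training round described in the abstract, the sketch below simulates resource-rich clients that train locally (the FL side) and a parameter server that trains on the datasets uploaded by the remaining clients (the CL side), then averages all per-client models. It is a minimal NumPy illustration with a toy linear model and equal dataset sizes; it is not the authors' implementation, and the proposed increased computation-per-client and sequential data transmission schemes for the upload phase are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: K clients, each holding a small local dataset for a linear model.
K, d, n_local = 8, 5, 32
true_w = rng.normal(size=d)
datasets = []
for _ in range(K):
    X = rng.normal(size=(n_local, d))
    y = X @ true_w + 0.1 * rng.normal(size=n_local)
    datasets.append((X, y))

def sgd_update(w, X, y, lr=0.05, epochs=5):
    """Plain gradient steps on the local least-squares loss."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Clients 0..K/2-1 are assumed resource-rich (they run FL locally);
# the rest are assumed to have uploaded their datasets to the PS beforehand.
fl_clients = list(range(K // 2))
cl_clients = list(range(K // 2, K))

w_global = np.zeros(d)
for _ in range(20):  # communication rounds
    local_models = []
    # FL side: resource-rich clients train locally and send model updates.
    for k in fl_clients:
        X, y = datasets[k]
        local_models.append(sgd_update(w_global.copy(), X, y))
    # CL side: the PS trains on the uploaded datasets on behalf of the rest.
    for k in cl_clients:
        X, y = datasets[k]
        local_models.append(sgd_update(w_global.copy(), X, y))
    # Aggregation: average the per-client models (equal dataset sizes here).
    w_global = np.mean(local_models, axis=0)

print("distance to true model:", np.linalg.norm(w_global - true_w))
```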
Related papers
- Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning [56.21666819468249]
Resource constraints of clients and communication costs pose major problems for training large models in Federated Learning.
We introduce Sparse-ProxSkip, which combines training and acceleration in a sparse setting.
We demonstrate the good performance of Sparse-ProxSkip in extensive experiments.
arXiv Detail & Related papers (2024-05-31T05:21:12Z) - Training Heterogeneous Client Models using Knowledge Distillation in
Serverless Federated Learning [0.5510212613486574]
Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients.
Recent works on designing systems for efficient FL have shown that utilizing serverless computing technologies can enhance resource efficiency, reduce training costs, and alleviate the complex infrastructure management burden on data holders.
arXiv Detail & Related papers (2024-02-11T20:15:52Z) - FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has achieved great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves the computation and communication efficiency by at least 30% and 43% respectively.
arXiv Detail & Related papers (2023-10-15T10:13:44Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
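As a rough illustration of the FL/SL distinction summarized above, the sketch below contrasts what a client uploads in each framework: full model parameters in FL versus cut-layer activations ("smashed data") in SL. Layer shapes and variable names are made up for illustration; this is not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X_local = rng.normal(size=(16, 10))   # one client's private batch

# A tiny two-block network: a "client part" up to the cut layer,
# and a "server part" after it.
W_client = rng.normal(size=(10, 8))   # layers held by the client
W_server = rng.normal(size=(8, 1))    # layers past the cut layer

# Federated learning: the client trains a full local copy and uploads parameters.
fl_payload = {"W_client": W_client, "W_server": W_server}
fl_bytes = sum(w.nbytes for w in fl_payload.values())

# Split learning: the client only computes up to the cut layer and uploads
# the smashed data (cut-layer activations) for the server to continue with.
smashed = np.tanh(X_local @ W_client)
sl_bytes = smashed.nbytes

print(f"FL uploads model parameters:      {fl_bytes} bytes per round")
print(f"SL uploads cut-layer activations: {sl_bytes} bytes per batch")
```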
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Personalized Federated Learning with Multi-branch Architecture [0.0]
Federated learning (FL) enables multiple clients to collaboratively train models without requiring clients to reveal their raw data to each other.
We propose a new PFL method (pFedMB) using multi-branch architecture, which achieves personalization by splitting each layer of a neural network into multiple branches and assigning client-specific weights to each branch.
We experimentally show that pFedMB performs better than the state-of-the-art PFL methods using the CIFAR10 and CIFAR100 datasets.
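The multi-branch personalization described above can be pictured with a toy sketch: branch parameters are shared, while each client keeps its own mixing weights over the branches. The branch count, softmax combination, and class name below are assumptions for illustration, not pFedMB's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

class MultiBranchLayer:
    """One layer split into B branches; a client mixes the branch outputs
    with its own softmax-normalized weights (client-specific personalization)."""
    def __init__(self, d_in, d_out, n_branches=3):
        self.branches = [rng.normal(size=(d_in, d_out)) * 0.1
                         for _ in range(n_branches)]

    def forward(self, x, client_weights):
        mix = np.exp(client_weights) / np.exp(client_weights).sum()  # softmax
        outs = [x @ W for W in self.branches]
        return sum(m * o for m, o in zip(mix, outs))

layer = MultiBranchLayer(d_in=4, d_out=2)
x = rng.normal(size=(5, 4))

# Two clients share the same branch parameters but personalize the mixture.
client_a = np.array([2.0, 0.0, -1.0])
client_b = np.array([-1.0, 0.5, 1.5])
print(layer.forward(x, client_a).shape, layer.forward(x, client_b).shape)
```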
arXiv Detail & Related papers (2022-11-15T06:30:57Z) - Optimizing Server-side Aggregation For Robust Federated Learning via
Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z) - Aergia: Leveraging Heterogeneity in Federated Learning Systems [5.0650178943079]
Federated Learning (FL) relies on clients to update a global model using their local datasets.
Aergia is a novel approach where slow clients freeze the part of their model that is the most computationally intensive to train.
Aergia significantly reduces the training time under heterogeneous settings by up to 27% and 53% compared to FedAvg and TiFL, respectively.
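The layer-freezing idea in this summary can be sketched briefly. The snippet below is a hypothetical PyTorch example in which a slow client disables gradients for the (assumed) most expensive block; Aergia additionally offloads training of the frozen part to faster clients, which is not shown here.

```python
import torch.nn as nn

# A tiny model where the convolutional feature extractor is assumed to be the
# computationally heaviest part to train, and the small head is cheap.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # expensive block
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),                  # cheap head
)

def freeze_expensive_part(model, is_slow_client):
    """On slow clients, stop computing gradients for the heavy block so local
    training only updates the cheap head (a rough stand-in for Aergia's idea)."""
    if is_slow_client:
        for p in model[0].parameters():
            p.requires_grad = False

freeze_expensive_part(model, is_slow_client=True)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters on the slow client: {trainable}/{total}")
```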
arXiv Detail & Related papers (2022-10-12T12:59:18Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - No One Left Behind: Inclusive Federated Learning over Heterogeneous
Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle heterogeneity in client devices.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
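The capability-based size assignment described above can be illustrated with a minimal sketch; the capability metric, tier thresholds, and widths below are invented for illustration and are not from the paper.

```python
# Hypothetical capability-to-model-size assignment (made-up numbers).
SIZE_TIERS = {
    "small": 64,     # hidden width of the local model per tier
    "medium": 256,
    "large": 1024,
}

def assign_model_size(flops_per_sec: float) -> str:
    """Map a client's (self-reported) compute budget to a model tier."""
    if flops_per_sec < 1e9:
        return "small"
    if flops_per_sec < 1e11:
        return "medium"
    return "large"

clients = {"phone": 5e8, "laptop": 2e10, "workstation": 5e11}
for name, budget in clients.items():
    tier = assign_model_size(budget)
    print(f"{name}: {tier} model, hidden width {SIZE_TIERS[tier]}")
```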
arXiv Detail & Related papers (2022-02-16T13:03:27Z) - Hybrid Federated and Centralized Learning [25.592568132720157]
Federated learning (FL) allows the clients to send only the model updates to the PS instead of the whole dataset.
In this way, FL brings learning to the edge, where powerful computational resources are required on the client side.
We address this through a novel hybrid federated and centralized learning (HFCL) framework to effectively train a learning model.
arXiv Detail & Related papers (2020-11-13T13:11:04Z)