Hybrid Federated and Centralized Learning
- URL: http://arxiv.org/abs/2011.06892v2
- Date: Mon, 15 Feb 2021 20:28:58 GMT
- Title: Hybrid Federated and Centralized Learning
- Authors: Ahmet M. Elbir, Sinem Coleri, Kumar Vijay Mishra
- Abstract summary: Federated learning (FL) allows the clients to send only the model updates to the parameter server (PS) instead of the whole dataset.
In this way, FL brings learning to the edge, where powerful computational resources are required on the client side.
We address this through a novel hybrid federated and centralized learning (HFCL) framework to effectively train a learning model.
- Score: 25.592568132720157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many machine learning (ML) tasks rely on centralized learning
(CL), which requires transmitting the clients' local datasets to a
parameter server (PS), leading to a huge communication overhead. Federated
learning (FL) overcomes this issue by allowing the clients to send only the
model updates to the PS instead of the whole dataset. In this way, FL brings
learning to the edge, where powerful computational resources are required on
the client side. This requirement may not always be satisfied because of the
diverse computational capabilities of edge devices. We address this
through a novel hybrid federated and centralized learning (HFCL) framework to
effectively train a learning model by exploiting the computational capability
of the clients. In HFCL, only the clients who have sufficient resources employ
FL; the remaining clients resort to CL by transmitting their local datasets to
the PS. This allows all the clients to collaborate on the learning process
regardless of their computational resources. We also propose a sequential data
transmission approach with HFCL (HFCL-SDT) to reduce the training duration. The
proposed HFCL frameworks outperform previously proposed non-hybrid schemes:
they achieve higher learning accuracy than FL and lower communication overhead
than CL, since all the clients contribute their datasets to the learning
process regardless of their computational resources.
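As a rough illustration of the framework described above, the following is a minimal sketch of one hybrid training loop, using a NumPy least-squares model and a synthetic split of clients into FL-capable and CL-only groups (all names, sizes, and the equal-weight aggregation are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, n_fl = 10, 8, 4          # first n_fl clients are FL-capable (assumption)
datasets = [(rng.normal(size=(50, dim)), rng.normal(size=50)) for _ in range(n_clients)]

w = np.zeros(dim)                        # global model kept at the parameter server (PS)

def local_gradient(w, X, y):
    """Least-squares gradient, computed wherever the data lives."""
    return 2 * X.T @ (X @ w - y) / len(y)

# CL-only clients upload their raw datasets to the PS once (the large communication cost).
ps_X = np.vstack([datasets[k][0] for k in range(n_fl, n_clients)])
ps_y = np.concatenate([datasets[k][1] for k in range(n_fl, n_clients)])

lr = 0.05
for _ in range(100):                     # training rounds
    grads = [local_gradient(w, *datasets[k]) for k in range(n_fl)]  # FL clients send updates only
    grads.append(local_gradient(w, ps_X, ps_y))  # PS computes on the uploaded data itself
    w -= lr * np.mean(grads, axis=0)     # PS aggregates both kinds of contributions
                                         # (the PS pool is treated as one participant here)

print("trained weights:", w[:3])
```

In the HFCL-SDT variant, the CL clients' dataset upload would be interleaved with the early training iterations rather than completed up front, which is what reduces the overall training duration.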
Related papers
- Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning [56.21666819468249]
Resource constraints of clients and communication costs pose major problems for training large models in Federated Learning.
We introduce Sparse-ProxSkip, which combines training and acceleration in a sparse setting.
We demonstrate the good performance of Sparse-ProxSkip in extensive experiments.
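As a generic illustration of why pruning at the clients helps, the sketch below sparsifies a client's model update by magnitude before it is sent to the server; this is not the Sparse-ProxSkip algorithm, and the pruning rule and 90% sparsity level are assumptions.

```python
import numpy as np

def prune_update(update: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Keep only the largest-magnitude entries of a client's model update."""
    k = max(1, int(round((1.0 - sparsity) * update.size)))
    thresh = np.partition(np.abs(update).ravel(), -k)[-k]
    return np.where(np.abs(update) >= thresh, update, 0.0)

update = np.random.default_rng(1).normal(size=1000)
sparse = prune_update(update)
print("nonzeros sent to the server:", np.count_nonzero(sparse), "of", update.size)
```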
arXiv Detail & Related papers (2024-05-31T05:21:12Z) - HierSFL: Local Differential Privacy-aided Split Federated Learning in
Mobile Edge Computing [7.180235086275924]
Federated Learning is a promising approach for learning from user data while preserving data privacy.
Split Federated Learning is utilized, where clients upload their intermediate model training outcomes to a cloud server for collaborative server-client model training.
This methodology facilitates resource-constrained clients' participation in model training but also increases the training time and communication overhead.
We propose a novel algorithm, called Hierarchical Split Federated Learning (HierSFL), that amalgamates models at the edge and cloud phases.
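A minimal sketch of the split-learning exchange this summary refers to: a client-side feature extractor up to the cut layer, a server-side head, and gradients of the "smashed data" flowing back. Two plain NumPy linear layers and a squared loss are used, and the local differential privacy noise and hierarchical edge/cloud aggregation of HierSFL are omitted (all simplifying assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 16)), rng.normal(size=(32, 1))

W_client = rng.normal(scale=0.1, size=(16, 8))   # layers before the cut (kept on device)
W_server = rng.normal(scale=0.1, size=(8, 1))    # layers after the cut (kept at the server)
lr = 0.01

for _ in range(200):
    smashed = X @ W_client                       # client forward pass up to the cut layer
    pred = smashed @ W_server                    # server completes the forward pass
    d_pred = 2 * (pred - y) / len(y)             # gradient of the squared loss
    d_W_server = smashed.T @ d_pred              # server-side gradient
    d_smashed = d_pred @ W_server.T              # gradient returned to the client
    W_server -= lr * d_W_server
    W_client -= lr * (X.T @ d_smashed)           # client updates its own layers

print("final loss:", float(np.mean((X @ W_client @ W_server - y) ** 2)))
```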
arXiv Detail & Related papers (2024-01-16T09:34:10Z) - FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has achieved great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves the computation and communication efficiency by at least 30% and 43% respectively.
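A toy sketch of the two ingredients named above, client selection plus an early-stopping check on a validation metric; the selection score, patience value, and simulated accuracy are illustrative assumptions and do not reproduce FLrce's relationship-based measure.

```python
import random

def select_clients(scores: dict[str, float], k: int) -> list[str]:
    """Pick the k clients with the highest (assumed) usefulness scores."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def train(rounds: int = 200, patience: int = 5) -> None:
    scores = {f"client{i}": random.random() for i in range(20)}
    best_acc, stale = 0.0, 0
    for r in range(rounds):
        selected = select_clients(scores, k=5)   # these clients would train locally this round
        acc = min(0.9, best_acc + random.uniform(-0.01, 0.03))  # placeholder for evaluation
        if acc > best_acc:
            best_acc, stale = acc, 0
        else:
            stale += 1
        if stale >= patience:                    # early stopping saves the remaining rounds
            print(f"stopped early at round {r}, best accuracy {best_acc:.3f}")
            return
    print(f"ran all rounds, best accuracy {best_acc:.3f}")

train()
```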
arXiv Detail & Related papers (2023-10-15T10:13:44Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
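A sketch of a client-side AMSGrad-style step in which each client keeps its own moment estimates and therefore its own effective learning rate; this is the generic AMSGrad recursion, not FedLALR's specific scheduling rule.

```python
import numpy as np

class ClientAdaptiveState:
    """Per-client AMSGrad-style optimizer state (each client tunes its own step size)."""

    def __init__(self, dim: int, lr: float = 0.01, b1: float = 0.9, b2: float = 0.999):
        self.lr, self.b1, self.b2 = lr, b1, b2
        self.m = np.zeros(dim)        # first moment
        self.v = np.zeros(dim)        # second moment
        self.v_hat = np.zeros(dim)    # running max of second moments (AMSGrad)

    def step(self, w: np.ndarray, grad: np.ndarray) -> np.ndarray:
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + 1e-8)

# Each client applies such local steps to the shared weights with its own adaptive state;
# the server would then average the resulting models (not shown).
w = np.zeros(4)
state = ClientAdaptiveState(dim=4)
for grad in (np.array([1.0, -2.0, 0.5, 0.0]),) * 5:
    w = state.step(w, grad)
print(w)
```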
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
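A minimal sketch of a contrastive loss over shared representations, the ingredient this summary highlights; the temperature, batch construction, and cosine-similarity choice are standard assumptions rather than the paper's exact formulation.

```python
import numpy as np

def contrastive_loss(z_client: np.ndarray, z_server: np.ndarray, tau: float = 0.1) -> float:
    """InfoNCE-style loss: each client representation should match the server
    representation of the same sample and repel those of the other samples."""
    z1 = z_client / np.linalg.norm(z_client, axis=1, keepdims=True)
    z2 = z_server / np.linalg.norm(z_server, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # cosine similarities, scaled by temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))    # positive pairs sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
print(contrastive_loss(z + 0.01 * rng.normal(size=(8, 32)), z))   # aligned pairs: low loss
print(contrastive_loss(rng.normal(size=(8, 32)), z))              # random pairs: higher loss
```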
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - ON-DEMAND-FL: A Dynamic and Efficient Multi-Criteria Federated Learning
Client Deployment Scheme [37.099990745974196]
We introduce On-Demand-FL, a client deployment approach for federated learning.
We make use of containerization technology such as Docker to build efficient environments.
A genetic algorithm (GA) is used to solve the multi-objective optimization problem.
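A toy genetic-algorithm loop over binary deployment vectors (deploy or skip a client container), with a made-up scalar objective standing in for the paper's multi-objective formulation; the population size, mutation rate, and fitness function are all assumptions.

```python
import random

N_CLIENTS, POP, GENS = 12, 20, 40
random.seed(0)
cost = [random.uniform(1, 5) for _ in range(N_CLIENTS)]     # e.g. resource cost per client
value = [random.uniform(1, 5) for _ in range(N_CLIENTS)]    # e.g. expected data utility

def fitness(genome: list[int]) -> float:
    """Reward deployed clients' utility, penalize their resource cost (illustrative objective)."""
    return sum(v * g for v, g in zip(value, genome)) - 0.5 * sum(c * g for c, g in zip(cost, genome))

pop = [[random.randint(0, 1) for _ in range(N_CLIENTS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]                                # keep the fittest half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_CLIENTS)
        child = a[:cut] + b[cut:]                            # one-point crossover
        if random.random() < 0.1:                            # occasional bit-flip mutation
            i = random.randrange(N_CLIENTS)
            child[i] = 1 - child[i]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("deployment plan:", best, "fitness:", round(fitness(best), 2))
```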
arXiv Detail & Related papers (2022-11-05T13:41:19Z) - DReS-FL: Dropout-Resilient Secure Federated Learning for Non-IID Clients
via Secret Data Sharing [7.573516684862637]
Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data.
This paper proposes a Dropout-Resilient Secure Federated Learning framework based on Lagrange computing.
We show that DReS-FL is resilient to client dropouts and provides privacy protection for the local datasets.
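A small sketch of Lagrange-interpolation-based secret sharing over a prime field (Shamir-style), the kind of primitive this summary alludes to: any sufficiently large subset of shares reconstructs the secret, so dropped clients do not break the computation. The field modulus, threshold, and helper names are illustrative, and the paper's Lagrange coded computing scheme encodes whole datasets rather than single integers.

```python
import random

P = 2_147_483_647          # a large prime field modulus (assumption)

def make_shares(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Hide `secret` in a random degree-(t-1) polynomial; share i is its value at x=i."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=42, n=10, t=4)
survivors = random.sample(shares, 4)       # six clients may drop out; four shares suffice
print(reconstruct(survivors))              # -> 42
```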
arXiv Detail & Related papers (2022-10-06T05:04:38Z) - DisPFL: Towards Communication-Efficient Personalized Federated Learning
via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
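A minimal sketch of the personalized-mask idea: each client keeps a private binary mask and only the unmasked parameters are trained and exchanged with its peers. The random mask construction and the omitted peer-averaging step are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, keep = 20, 4, 0.3

global_init = rng.normal(size=dim)
masks = [rng.random(dim) < keep for _ in range(n_clients)]     # personalized binary masks
models = [global_init * m for m in masks]                      # sparse local models

def local_step(w, mask, grad, lr=0.1):
    """Update only the coordinates this client actually keeps."""
    return (w - lr * grad) * mask

for k in range(n_clients):
    fake_grad = rng.normal(size=dim)                           # placeholder for a real gradient
    models[k] = local_step(models[k], masks[k], fake_grad)

print("client 0 trains", int(masks[0].sum()), "of", dim, "parameters")
```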
arXiv Detail & Related papers (2022-06-01T02:20:57Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
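The summary does not spell out the mechanism, so the sketch below shows only the generic idea of regularizing local training toward the global model to limit forgetting of knowledge learned elsewhere (a FedProx-style proximal term); FedReg's own construction, which generates pseudo data for this purpose, is more involved.

```python
import numpy as np

def regularized_local_grad(w_local, w_global, data_grad, mu=0.1):
    """Gradient of: local loss + (mu/2) * ||w_local - w_global||^2.
    The proximal term discourages drifting away from (and forgetting) the global model."""
    return data_grad + mu * (w_local - w_global)

w_global = np.zeros(4)
w_local = w_global.copy()
rng = np.random.default_rng(0)
for _ in range(50):                       # local epochs on one client
    data_grad = rng.normal(size=4)        # placeholder for the gradient on local data
    w_local -= 0.05 * regularized_local_grad(w_local, w_global, data_grad)
print(w_local)
```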
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - A Family of Hybrid Federated and Centralized Learning Architectures in
Machine Learning [7.99536002595393]
We propose hybrid federated and centralized learning (HFCL) for machine learning tasks.
In HFCL, only the clients with sufficient resources employ FL, while the remaining ones send their datasets to the PS, which trains the model on their behalf.
The HFCL frameworks outperform FL with up to 20% improvement in the learning accuracy when only half of the clients perform FL, while having 50% less communication overhead than CL.
arXiv Detail & Related papers (2021-05-07T14:28:33Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
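A schematic of one BLADE-FL round as described above, with toy stand-ins for local training and block generation: the mining competition is reduced to picking a random winner, and each "model" is a single scalar (all assumptions for illustration only).

```python
import random
import statistics

clients = [f"client{i}" for i in range(5)]
models = {c: random.random() for c in clients}     # toy "models": one scalar per client

def one_round(models: dict[str, float]) -> dict[str, float]:
    # 1) every client broadcasts its locally trained model to the other clients
    received = dict(models)
    # 2) clients compete to generate a block from the received models
    #    (stand-in for the mining competition: pick a random winner)
    winner = random.choice(list(received))
    block = {"producer": winner, "models": received}
    # 3) each client aggregates the models recorded in the block...
    aggregate = statistics.fmean(block["models"].values())
    # 4) ...and uses the aggregate as the starting point of its next local training
    return {c: aggregate + random.gauss(0, 0.01) for c in models}

for _ in range(3):
    models = one_round(models)
print(models)
```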
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.