Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach
- URL: http://arxiv.org/abs/2402.18018v1
- Date: Wed, 28 Feb 2024 03:27:10 GMT
- Title: Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach
- Authors: Bin Wang and Jun Fang and Hongbin Li and Yonina C. Eldar
- Abstract summary: Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
- Score: 67.27031215756121
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a machine learning paradigm that targets model
training without gathering the local data dispersed over various data sources.
Standard FL, which employs a single server, can only support a limited number
of users, leading to degraded learning capability. In this work, we consider a
multi-server FL framework, referred to as \emph{Confederated Learning} (CFL),
in order to accommodate a larger number of users. A CFL system is composed of
multiple networked edge servers, with each server connected to an individual
set of users. Decentralized collaboration among servers is leveraged to harness
all users' data for model training. Due to the potentially massive number of
users involved, it is crucial to reduce the communication overhead of the CFL
system. We propose a stochastic gradient method for distributed learning in the
CFL framework. The proposed method incorporates a conditionally-triggered user
selection (CTUS) mechanism as the central component to effectively reduce
communication overhead. Relying on a delicately designed triggering condition,
the CTUS mechanism allows each server to select only a small number of users to
upload their gradients, without significantly jeopardizing the convergence
performance of the algorithm. Our theoretical analysis reveals that the
proposed algorithm enjoys a linear convergence rate. Simulation results show
that it achieves substantial improvement over state-of-the-art algorithms in
terms of communication efficiency.
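To make the mechanism concrete, below is a minimal Python sketch of an event-triggered upload rule in the spirit of CTUS, paired with a SAGA-style gradient memory: a user uploads a fresh gradient only when it deviates sufficiently from the one stored at its last upload. The threshold test, the function names (ctus_round, local_gradient), and the least-squares loss are illustrative assumptions; the paper's actual triggering condition, aggregation, and inter-server exchange are more involved.

```python
import numpy as np

def local_gradient(model, samples):
    """Placeholder for a user's local stochastic gradient (least-squares loss)."""
    X, y = samples
    return X.T @ (X @ model - y) / len(y)

def ctus_round(users, data, model, stored_grads, threshold=0.1):
    """One illustrative server round with conditionally-triggered uploads."""
    selected = []
    for u in users:
        g_new = local_gradient(model, data[u])     # evaluated at the user side
        g_old = stored_grads[u]                    # gradient from the last upload
        # Hypothetical trigger: upload only if the gradient changed enough.
        if np.linalg.norm(g_new - g_old) > threshold * (np.linalg.norm(g_old) + 1e-12):
            stored_grads[u] = g_new                # one uplink transmission
            selected.append(u)
    # The server aggregate mixes fresh gradients (selected users) with stored ones.
    avg_grad = np.mean([stored_grads[u] for u in users], axis=0)
    return avg_grad, selected
```

In the full CFL algorithm the edge servers additionally exchange and fuse their aggregates over the decentralized server network; that step is omitted in this sketch.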
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Scheduling and Communication Schemes for Decentralized Federated Learning [0.31410859223862103]
A decentralized federated learning (DFL) model with the stochastic gradient descent (SGD) algorithm has been introduced.
Three scheduling policies for DFL have been proposed for communications between the clients and the parallel servers.
Results show that the proposed scheduling policies have an impact both on the speed of convergence and on the final global model.
arXiv Detail & Related papers (2023-11-27T17:35:28Z)
- Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
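For intuition, the snippet below is a generic one-bit (sign) compression of a model update, where the quantized coordinates travel with a single scale factor; it is only a hedged illustration of the one-bit idea, not the paper's iADM-based compressive-sensing algorithm, and the function names are made up.

```python
import numpy as np

def one_bit_compress(update):
    """Quantize a model update to its signs plus one scalar (illustration only)."""
    scale = np.mean(np.abs(update))           # single float transmitted with the bits
    signs = np.sign(update).astype(np.int8)   # one bit per coordinate in practice
    return scale, signs

def one_bit_decompress(scale, signs):
    """Receiver-side reconstruction of the compressed update."""
    return scale * signs.astype(np.float64)
```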
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
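To illustrate the cut-layer exchange that SL relies on, here is a minimal sketch of a client sending "smashed data" to a server that completes the forward pass; the two-layer toy model, shapes, and names are assumptions for illustration and are unrelated to the contrastive-distillation approach of the paper above.

```python
import numpy as np

rng = np.random.default_rng(0)
W_client = rng.normal(size=(16, 8))    # client-side layers (stay on device)
W_server = rng.normal(size=(8, 1))     # server-side layers

def client_forward(x):
    """Client runs its layers up to the cut layer and releases the activations."""
    return np.tanh(x @ W_client)       # the "smashed data" sent to the server

def server_forward(smashed):
    """Server finishes the forward pass and returns its response."""
    return smashed @ W_server

x = rng.normal(size=(4, 16))           # a private mini-batch held by the client
pred = server_forward(client_forward(x))
```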
- Fast-Convergent Federated Learning via Cyclic Aggregation [10.658882342481542]
Federated learning (FL) aims at optimizing a shared global model over multiple edge devices without transmitting (private) data to the central server.
This paper applies a cyclic learning rate at the server side to reduce the number of training iterations while improving performance.
Numerical results validate that simply plugging the proposed cyclic aggregation into existing FL algorithms effectively reduces the number of training iterations while improving performance.
arXiv Detail & Related papers (2022-10-29T07:20:59Z)
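As a rough illustration of a server-side cyclic learning rate, the sketch below uses a triangular cycle; the specific schedule, its parameters, and how it is folded into aggregation are assumptions, not necessarily the paper's exact design.

```python
def cyclic_lr(round_idx, base_lr=0.01, max_lr=0.1, cycle_len=20):
    """Triangular cyclic learning rate: rises then falls within each cycle."""
    pos = (round_idx % cycle_len) / cycle_len   # position in the cycle, in [0, 1)
    tri = 1.0 - abs(2.0 * pos - 1.0)            # 0 -> 1 -> 0 over one cycle
    return base_lr + (max_lr - base_lr) * tri

# Example use at the server: scale the aggregated update by the cyclic rate.
# global_model = global_model - cyclic_lr(round_idx) * aggregated_update
```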
- Time Minimization in Hierarchical Federated Learning [11.678121177730718]
Federated learning is a modern decentralized machine learning technique in which user equipment (UE) performs machine learning tasks locally and then uploads the model parameters to a central server.
In this paper, we consider a 3-layer hierarchical federated learning system which involves model parameter exchanges between the cloud and edge servers.
arXiv Detail & Related papers (2022-10-07T13:53:20Z)
- Confederated Learning: Federated Learning with Decentralized Edge Servers [42.766372620288585]
Federated learning (FL) is an emerging machine learning paradigm that allows model training to be accomplished without aggregating data at a central server.
We propose a ConFederated Learning (CFL) framework, in which each server is connected with an individual set of devices.
The proposed algorithm employs a random scheduling policy which randomly selects a subset of devices to access their respective servers at each iteration.
arXiv Detail & Related papers (2022-05-30T07:56:58Z)
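A minimal sketch of such a random scheduling policy is shown below: each server independently samples a subset of its own devices at every iteration. The sampling fraction and the dictionary layout are illustrative assumptions.

```python
import random

def random_schedule(server_to_devices, fraction=0.1, seed=None):
    """Each server picks a random subset of its devices for this iteration."""
    rng = random.Random(seed)
    schedule = {}
    for server, devices in server_to_devices.items():
        k = max(1, int(fraction * len(devices)))   # at least one device per server
        schedule[server] = rng.sample(devices, k)
    return schedule

# Example: random_schedule({"server_A": ["u1", "u2", "u3"], "server_B": ["u4", "u5"]})
```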
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)