Confederated Learning: Federated Learning with Decentralized Edge
Servers
- URL: http://arxiv.org/abs/2205.14905v1
- Date: Mon, 30 May 2022 07:56:58 GMT
- Title: Confederated Learning: Federated Learning with Decentralized Edge
Servers
- Authors: Bin Wang, Jun Fang, Hongbin Li, Xiaojun Yuan, and Qing Ling
- Abstract summary: Federated learning (FL) is an emerging machine learning paradigm that enables model training without aggregating data at a central server.
We propose a ConFederated Learning (CFL) framework, in which each server is connected with an individual set of devices.
The proposed algorithm employs a random scheduling policy which randomly selects a subset of devices to access their respective servers at each iteration.
- Score: 42.766372620288585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging machine learning paradigm that
enables model training without aggregating data at a central server. Most
studies on FL consider a centralized framework, in which a single server is
endowed with a central authority to coordinate a number of devices to perform
model training in an iterative manner. Due to stringent communication and
bandwidth constraints, such a centralized framework has limited scalability as
the number of devices grows. To address this issue, in this paper, we propose a
ConFederated Learning (CFL) framework. The proposed CFL consists of multiple
servers, in which each server is connected with an individual set of devices as
in the conventional FL framework, and decentralized collaboration is leveraged
among servers to make full use of the data dispersed throughout the network. We
develop an alternating direction method of multipliers (ADMM) algorithm for
CFL. The proposed algorithm employs a random scheduling policy which randomly
selects a subset of devices to access their respective servers at each
iteration, thus alleviating the need of uploading a huge amount of information
from devices to servers. Theoretical analysis is presented to justify the
proposed method. Numerical results show that the proposed method can converge
to a decent solution significantly faster than gradient-based FL algorithms,
thus boasting a substantial advantage in terms of communication efficiency.
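The abstract's workflow (each server polls a random subset of its own devices, then collaborates with neighboring servers without a central coordinator) can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's method: it uses simple model averaging in place of the ADMM primal-dual updates, and all names (`cfl_round`, `frac`, the ring topology) are assumptions for illustration.

```python
import random

def cfl_round(server_models, device_data, neighbors, frac=0.3):
    """One illustrative ConFederated Learning round (hypothetical sketch).

    server_models: dict server_id -> model (a single float for simplicity)
    device_data:   dict server_id -> list of device values
    neighbors:     dict server_id -> list of neighboring server ids
    frac:          fraction of devices randomly scheduled per round
    """
    # 1) Random scheduling: each server polls only a random subset of its
    #    devices, reducing per-iteration uplink traffic.
    local = {}
    for s, devices in device_data.items():
        k = max(1, int(frac * len(devices)))
        sampled = random.sample(devices, k)
        # Local update: mean of the sampled device values (a stand-in for
        # the per-device ADMM updates described in the paper).
        local[s] = sum(sampled) / k

    # 2) Decentralized collaboration: each server mixes its local estimate
    #    with its neighbors' estimates -- no central coordinator involved.
    new_models = {}
    for s in server_models:
        vals = [local[s]] + [local[n] for n in neighbors[s]]
        new_models[s] = sum(vals) / len(vals)
    return new_models

# Toy network: three servers in a fully connected ring, each with its
# own set of devices (device "data" is just a scalar here).
data = {0: [1.0, 2.0, 3.0], 1: [4.0, 5.0, 6.0], 2: [7.0, 8.0, 9.0]}
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
models = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(10):
    models = cfl_round(models, data, nbrs, frac=1.0)
```

With `frac=1.0` and a fully connected topology, every server's model converges to the mean over all devices in the network, mirroring the goal of "making full use of the data dispersed throughout the network"; with `frac < 1.0`, each round touches only a sampled subset of devices, as in the paper's random scheduling policy.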
Related papers
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA
Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering local data scattered over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - ESFL: Efficient Split Federated Learning over Resource-Constrained Heterogeneous Wireless Devices [22.664980594996155]
Federated learning (FL) allows multiple parties (distributed devices) to train a machine learning model without sharing raw data.
We propose an efficient split federated learning algorithm (ESFL) to take full advantage of the powerful computing capabilities at a central server.
arXiv Detail & Related papers (2024-02-24T20:50:29Z) - Scheduling and Communication Schemes for Decentralized Federated
Learning [0.31410859223862103]
A decentralized federated learning (DFL) model with the stochastic gradient descent (SGD) algorithm has been introduced.
Three scheduling policies for DFL have been proposed for communications between the clients and the parallel servers.
Results show that the proposed scheduling policies have an impact on both the speed of convergence and the final global model.
arXiv Detail & Related papers (2023-11-27T17:35:28Z) - Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z) - Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized
Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and conduct an analytical study of its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z) - Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO
Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z) - Federated Learning with Cooperating Devices: A Consensus Approach for
Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this content (including all information) and is not responsible for any consequences of its use.